Against degrowth
There's been a bit of recent criticism of 'TESCREAL' from a degrowth perspective, particularly by Emile Torres. I'm less interested in their thoughts on the matter, but they recommended a graphic novel called Who's Afraid of Degrowth?, which I read part of and found interesting. It has the sort of reasoning that looks sound if you squint: the sort you put in an essay whose conclusion you don't really believe. For example, when drawing an inference from a study, "not the only factor" quietly becomes "not an important factor". It's possible that I have a higher bar for argumentation than most, but I also think a high bar is generally correct.
It also feels like the book is written in a world where you can freely choose exactly how you want an economic system to work. It specifies what the system would do, but not how that happens. It tends to reference indigenous practice, implying that something like that structure would work well. I'm really uncertain about the ability of small, local coordination mechanisms to scale to a global population in the billions.
Still, even if the work isn't argued well, we can use it to understand the author's ideas.
It's instructive to read Nick Land to get at least a basic grip on some of the larger forces that 'growth'-related political movements might be talking about.
Machinic desire can seem a little inhuman, as it rips up political cultures, deletes traditions, dissolves subjectivities, and hacks through security apparatuses, tracking a soulless tropism to zero control. This is because what appears to humanity as the history of capitalism is an invasion from the future by an artificial intelligent space that must assemble itself entirely from its enemy's resources.
― Nick Land, Fanged Noumena
According to Land, we've accidentally constructed a misaligned superintelligence for ourselves. If left to continue, the selection pressures latent in capitalism, evolution, or almost any such process will push towards something like entropy-maximization: grabbing absolute power for its own sake.
(Well, that's not the whole story. But just give me a second...)
The core argument of degrowth, stated roughly in its own terms, is this: capitalism is out of our control and is destroying all that we value, and we need to wrench back control of the future by halting it at all costs. This should sound familiar. Degrowthers essentially see what Nick Land is seeing. They see it only partially, through the lens of environmentalism, political alienation, and economic inequality, but they do see it.
The degrowthers' argument initially struck me as surprisingly reasonable. The way you succeed in wielding a misaligned superintelligence is by using it to quickly seize as much value as you can and then either (a) decoupling your mechanism for growth from your mechanism for value, or (b) stopping growth and sharply turning into the sort of society that maintains its values over time. And so the highly compressed degrowther thesis is that we should do (b) right now, since the misaligned superintelligence is already very scary and destructive, and things will only get worse. [1]
This isn't exactly desirable, since it effectively gives up on capturing almost all of the possible value in the universe. It initially looks like the only path to recovering our values, to preserving the human mesa-optimizer through massive growth, is to build some sort of singleton. Being unitary, that mind would not be subject to the sorts of internal selection pressures currently pushing humanity towards the kill/consume/multiply/conquer basin. [2] Ideally, we could define a mechanism for search (i.e. agency and epistemics) that is agnostic to the values it serves. It's unclear if this is actually possible, since mind-structure may be highly entangled with values.
But throughout all of this, I've been describing selection as an inevitable consequence of natural processes. Yet we are not nature, and if I remember correctly, we're trying to defeat nature and its injustices. After all, all of this, including Nick Land's 'alien god', is essentially our doing: yes, the misaligned superintelligence from the future arises from convergent incentives, but we can always choose to do something different. Ideally, we would use timeless decision theories to recover singleton-like agency from distributed action. That is, we could simply choose (via a timeless decision theory) to coordinate in such a way that our values are propagated indefinitely far into the future.
(Um, maybe. I'm not totally sure this works out correctly, and I don't have a fully rigorous view of it. This feels almost as vague as Scott Alexander's ending to Meditations on Moloch. But I think timeless decision theory represents a real divergence from Nick Land's worldview, which perhaps follows naturally as the final consequence of a universe where only causal decision theory works.)
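To make that divergence concrete, here's a minimal sketch, entirely my own illustration rather than anything from Land or the degrowth literature: a "twin" prisoner's dilemma with standard toy payoffs, where a causal decision theorist treats its twin's move as fixed and defects, while a timeless/functional reasoner notices both players run the same decision procedure and cooperates.

```python
# Toy illustration (my addition): twin prisoner's dilemma under CDT vs. TDT.
# Payoff numbers are the standard textbook values, chosen for illustration only.

PAYOFFS = {  # (my move, twin's move) -> my payoff
    ("C", "C"): 3,
    ("C", "D"): 0,
    ("D", "C"): 5,
    ("D", "D"): 1,
}

def cdt_choice() -> str:
    # CDT: hold the twin's move fixed and pick the dominant action.
    # Against either fixed move, D pays more than C, so CDT defects.
    best = {
        their_move: max(("C", "D"), key=lambda me: PAYOFFS[(me, their_move)])
        for their_move in ("C", "D")
    }
    return "D" if all(move == "D" for move in best.values()) else "C"

def tdt_choice() -> str:
    # TDT (informally): both agents instantiate the same algorithm, so compare
    # the worlds where that algorithm outputs C everywhere vs. D everywhere.
    return max(("C", "D"), key=lambda move: PAYOFFS[(move, move)])

if __name__ == "__main__":
    print("CDT twins each get:", PAYOFFS[(cdt_choice(), cdt_choice())])  # 1
    print("TDT twins each get:", PAYOFFS[(tdt_choice(), tdt_choice())])  # 3
```

The point is only that the two theories genuinely come apart once agents know they share a decision procedure; that is the wedge the paragraph above leans on for coordinating at scale.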
Degrowth is sort of grasping towards the hope I'm presenting here, with its ideas of universal cooperation and participation, but it simultaneously has a complete lack of hope regarding higher possibilities. I'm not entirely sure why this is; I think it might be some mix of:
- Since it correctly identifies that humans created our current problems, it splits, identifying nature with good and humanity with evil.
- Straightforward negative utilitarianism.
- Noticing that seemingly hopeful attempts to attack nature have summoned a misaligned intelligence, and deciding against attacking nature in the future.
- Cope to avoid seeing the full scale of what we're up against.
- A natural consequence of having values that aren't robust to optimization.
...but still, if we can cooperate across all of society to bring about our values, why not use that same energy to defeat death? Or grant ourselves the power to upend the unjust natural order?
Footnotes
[1] I'm not even sure degrowth actually succeeds at (b). In particular, there are more subtle selection effects (like biological and cultural evolution) that will still push the world towards misalignment with our current values.
[2] This is dubious. In particular, I'd be especially suspicious of selection effects not applying under self-improvement. If we do in fact separate out agency and values, we could likely keep values constant while improving agency, but again I find such a separation scheme unlikely to work. Also, defining a static value function right now without updating it as intelligence scales seems potentially very scary. (Maybe not! I think it's quite possible that our values could be cleanly specified right now, but I'm very worried about locking in misspecified ones.) We would likely want our agent to be able to grow and improve with time, but there doesn't seem to be a way to do that without giving Nick Land's god room to tip the scales. (But also, maybe that's what we want? I remain confused on this topic.)