Nick Land’s Philosophy of Capital is Anti-Capitalist


Jotting down some notes that popped into my head just as I was about to fall asleep, so take the following as a rough draft of, well, probably nothing. I’m not sure why these sorts of things crawl about in the late night hours, and not anything related to more pressing projects. Perhaps poor time preference? Idk. What follows is a cheeky quasi-shitpost:

In his recent interview with Justin Murphy (a transcript of which can be read at the Vast Abrupt), Land offers his thoughts on a concept that will be intimately familiar to those who have kept up on the various /Acc wars: the autonomization and escape of capital. Here’s a solid quote:

…in using this word of emancipation, sure, I will totally nod along to it if what is meant by that is capital autonomization. I don’t think that’s something that isn’t already there in the 1990s, but I’m no longer interested in playing weird academic games about this and pretending this is the same thing as what the left really means when they’re talking about emancipation. I don’t think it is. I think what the left means by emancipation is freedom from capital autonomization.

What would it mean for capital to be autonomized? On the one hand, we might just be talking about the autonomization of capital in a flat sense, a coupling-together of Marx’s depiction of capital as “a dynamic structure of abstract domination that, although constituted by humans, is independent of their will” with extreme deregulation. I don’t think this is what Land means, however. In the quote above, he suggests that this concept is already in play in his work with the CCRU, and it would be exceedingly difficult to reduce the schizophrenia of that period to simple enthusiasm for transnationalized, post-Fordist capitalism (critics are wont to do this, but this needs to be considered as the ground, not the totality). In the late-CCRU era of the Hyperstition blog, references abound to technomic acceleration as Shoggothic insurgency, a concept that appeared earlier in a piece concerning the possibility of a nanotech gray-goo apocalypse and later in the (admittedly more sober) essay in the #Accelerate Reader, where he describes a “dominion of capital”, a “robot rebellion”, and the conversion of “all natural purposes into a monstrous reign of the tool”. In a Xenosystems post, meanwhile, Land muses that “At a certain point, the machines are in this for themselves”.

Jumping off from this, let’s take capital autonomization as a given, and interpret it – correctly, I believe, though I’m open to counter-arguments – as indicating, at some point, the emergence of distinctly post-human life. My contention is that by accepting that capital autonomizes in this way, one is also accepting that capital is overcoming – and thus annihilating – itself through the very same process. This isn’t the destruction of the living object or system that we deem to be post-human life, but the destruction of both the categorization and the system that it, up to this point, was embedded within.

The reason for this is that what Land identifies as the process of capitalist autonomization is the same vector along which Marx traces the dynamic means-ends reversal that characterizes the development of the capitalist system. To sum it up as simply as possible, what begin as means – capital, particularly in its money-form – are transformed into ends in themselves. Money progresses from being a means to buy and sell commodities to being an end in its own accumulation and circulation (the movement from C – M – C to M – C – M’). Alongside this, simple commodity production is transformed into advanced production, and the laborer goes from being one who uses the tool (as in pre-capitalist craft production and simple commodity production) to being one who is subjected to the tool (as described in that machine fragment in the Grundrisse everyone is going on about).

Through these sorts of processes, we perceive the advancement of capital as unfolding through the subjugation of the human. If capital, however, doesn’t transcend its status as an end, then it hasn’t actually escaped. As alluded to above, for Marx capital is independent of the individual will of the human agent, and while the activities of human agents are the processes through which capital expands, it is beyond them in the sense that it is what compels these activities – in other words, an abstract mode of social domination. If we take capital’s escape as simply the intensification of the subjugation of the human, then no true change has occurred. Capital remains locked in place, and while it may have achieved the status of the master, the ultimate end, the great teleological catastrophe, it stays fundamentally attached to class society. This would be far from the suggestion that the human element is a drag, something to be overcome.

If capital truly escapes, then, it would be through a break with its status as an end, and this would entail nothing less than concrete separation from the system that maintains it as such. This would be an emancipation from the law of value, and it is at this very point that capital would cease to be capital.

One might argue that a posthuman, post-capital something might be forced to retain capitalist dynamics for survival. Three lazy responses:

  1. Such a hypothetical entity will have emerged specifically from centuries of struggle for optimization against these very conditions, and thus the natural inclination of these systems would be to work against this sort of thing (this is retaining the idea of contradictions internal to the capitalist mode of production, per Marx).
  2. If we take seriously the suggestion that predictive capacity is breaking down the deeper we get into increasingly non-linear developmental processes (that is, taking seriously the questions posed by U/Acc and the accelerationist trolley problem, and more generalized knowledge problems in the context of complex, interconnected societies situated in a fast-paced global world assaulted by increasingly weird weather), we actually lose the ability to make overly strong claims of this nature.
  3. The Bataille response – excess is intrinsic and fundamental, bby. Go mine an asteroid and eat a star and make peace with eventual heat death.

Hyperwar


In the March 2nd edition of the Wall Street Journal, Julian Barnes and Josh Chin announced the dawn of a new arms race breaking over the increasingly chaotic geopolitical arena: the competitive pursuit of artificial intelligence and related technologies. At the present moment, the United States leads the world in AI research, but with the emergence of a “Darpa with Chinese Characteristics” the mad dash is on. And behind the US and China is Russia, hoping within the next ten years to have “30% of its military robotized” – a path that neatly complements the country’s burgeoning efficiency in non-standard netwar.

At the horizon, Barnes and Chin suggest, is a new speed-driven, technocentric mode of conflict that has been granted the qabbalistically-suggestive name of “hyperwar”:

AI could speed up warfare to a point where unassisted humans can’t keep up—a scenario that retired U.S. Marine Gen. John Allen calls “hyperwar.” In a report released last year, he urged the North Atlantic Treaty Organization to step up its investments in AI, including creating a center to study hyperwar and a European Darpa, particularly to counter the Russian effort.

The report in question unpacks hyperwar further:

Hyper war… will place unique requirements on defence architectures and the high-tech industrial base if the Alliance is to preserve an adequate deterrence and defence posture, let alone maintain a comparative advantage over peer competitors. Artificial Intelligence, deep learning, machine learning, computer vision, neuro-linguistic programming, virtual reality and augmented reality are all part of the future battlespace. They are all underpinned by potential advances in quantum computing that will create a conflict environment in which the decision-action loop will compress dramatically from days and hours to minutes and seconds…or even less. This development will perhaps witness the most revolutionary changes in conflict since the advent of atomic weaponry and in military technology since the 1906 launch of HMS Dreadnought. The United States is moving sharply in this direction in order to compete with similar investments being made by Russia and China, which has itself committed to a spending plan on artificial intelligence that far outstrips all the other players in this arena, including the United States. However, with the Canadian and European Allies lagging someway behind, there is now the potential for yet another dangerous technological gap within the Alliance to open up, in turn undermining NATO’s political cohesion and military interoperability.

“[A] conflict environment in which the decision-action loop will compress dramatically from days and hours to minutes and seconds… or even less.” Let those words sink in for a moment, and consider this hastily-assembled principle: attempts to manage the speed-effects of technological development through technological means result in more and greater speed-effects. James Beniger’s The Control Revolution: Technological and Economic Origins of the Information Society is the great compendium of historical case studies of this phenomenon in operation, tracing out a series of snaking, non-linear pathways in which technological innovation delivers a chaos that demands some form of quelling, often in the form of standards, increased visibility of operations, better methods of coordination, etc. These chaos-combating protocols become, in turn, the infrastructure of further expansion, more technological development, greater economic growth – and in this entanglement, things get faster.
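To make the shape of that principle concrete, here is a toy positive-feedback model. Every coefficient is invented for illustration, nothing here is empirical: control protocols are generated in proportion to the chaos that speed produces, and then feed back into speed itself.

```python
# Toy model of the control-revolution feedback: speed generates chaos,
# chaos calls forth control protocols, and those protocols become the
# infrastructure of further speed. All coefficients are illustrative
# assumptions, not measurements.

def control_revolution(steps=10, speed=1.0, chaos_per_speed=0.5,
                       control_per_chaos=0.8, speedup_per_control=0.3):
    for t in range(steps):
        chaos = chaos_per_speed * speed          # faster activity, more disorder
        control = control_per_chaos * chaos      # disorder demands protocols
        speed += speedup_per_control * control   # protocols enable expansion
        print(f"t={t}: speed={speed:.2f} chaos={chaos:.2f} control={control:.2f}")

control_revolution()  # speed compounds geometrically: quelling begets quickening
```

The numbers are made up, but the structure is the point: so long as control is downstream of chaos and upstream of speed, the loop has no resting state.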

Beniger’s argument is that this dynamic laid the groundwork for the information revolution, with information theory, communication theory, cybernetics, and the like all emerging from managerial discourses as ways to navigate the unpredictability of modernity. We need no great summary of the effects of this particular revolution, with its space-time compression, unending cycles of events, the breakdown of discernibility between the true and the false, the rising tide of raw information that threatens to swamp us and eclipse our cognition.

Where this path of inquiry leads is to the recognition that modernity is being dragged, kicking and screaming, into the maw of the accelerationist trolley problem: catastrophe is barreling forward, and the possibility space for decision-making is evaporating just as quickly. There simply isn’t enough time.

Even in the basic, preliminary foreshadows of the problem, command-and-control systems tend to find themselves submerged and incapacitated. Diagramming decision-making and adjusting the role of the human in that diagram is the foremost response (and one completely flush with the assessment drawn from Beniger sketched out briefly above). First-order cybernetics accomplished this by drawing out the position of the human agent within the feedback loops of the system in question and better integrating the agent’s decision-making capacity with these processes. From Norbert Wiener’s AA predictor to the SAGE computer system to Operation Igloo White in Vietnam, this not only blurred the human-machine boundary but laid the groundwork for the impending outright removal of the human agent from the loop.

[Diagram: the TOTE model]

Consider the TOTE model of human behavior, which perfectly imported the fundamental loop of first-order cybernetics into the nascent field of cognitive psychology. TOTE: test-operate-test-exit. Goal-seeking behavior in this model follows a basic process of testing the alignment of an operation’s effect with the goal, and adjusting in kind. But consider two systems whose goals are to win out over one another, each following the TOTE model in relation to the actions of the other. The decisions made in one system impact the decisions made in the other, veering the entanglement of the two away from anything resembling homeostasis. Add in the variables of speed, the impossibility of achieving total information awareness in the environment, and the hard cognitive limits of the human agent, and we arrive at the position where the role of the human in the loop becomes a liability. But it’s not just the human, as the US military learned in Vietnam: the entire infrastructure, even with the aid of the cybernetic toolkit, falls victim to information bottlenecks, decision-making paralysis, and the fog of war. The crushing necessity of better, more efficient tools is revealed in the aftermath – but this, of course, will deepen the problem as it unfolds along the line of time.
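A minimal sketch of that adversarial TOTE scenario, with parameters invented for illustration: each system’s “test” is relative to its rival, so every exit condition is undone by the other’s next operation, and the coupled pair escalates instead of settling.

```python
# Two adversarial TOTE (test-operate-test-exit) systems. Each agent's
# goal is to hold an advantage over its rival; the "operate" phase
# escalates its capability. Because the test is relative, neither
# system ever reaches a durable exit: capabilities diverge upward
# rather than converging on homeostasis. All parameters are invented.

def tote_step(own, rival, margin=1.0, escalation=1.2):
    """Test against the goal; operate (escalate) if unmet; re-test next pass."""
    if own >= rival + margin:          # test passes: exit, no change
        return own, True
    return own * escalation, False     # operate: escalate capability

a, b = 1.0, 1.0
for t in range(12):
    a, a_exited = tote_step(a, b)
    b, b_exited = tote_step(b, a)
    print(f"t={t:2d}  a={a:9.2f}  b={b:9.2f}  exited={a_exited or b_exited}")
```

Each system taken alone is a textbook negative-feedback device; coupled, the pair becomes a positive-feedback engine. Nothing about the model changes, only the reference point of the test.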

Enter John Boyd’s OODA loop. As with the trajectory of Wiener’s thought, Boyd’s theory was first drawn from the study of aerial combat and radiated outwards from there. OODA stands for observation-orientation-decision-action, and like the TOTE model it renders cognitive behavior in decision-making as a series of loops. Observation entails the absorption of environmental information by the agent or system, which is processed in the orientation phase to provide context and a range of operational possibilities to choose from. Decision is the choice of an operational possibility, which is then executed as an action. This returns the agent or system to the observation phase, and the process repeats.
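Read naively, that description yields a strictly sequential cycle, something like the following sketch (phase contents are my own illustrative assumptions; the point is only the shape, each phase feeding the next, with action looping back to observation):

```python
# A strictly sequential reading of the OODA cycle, as described above.
# The phase logic is a placeholder; what matters is the pipeline shape.

def observe(environment):
    return list(environment)                  # absorb available signals

def orient(percepts):
    # Contextualize: map percepts to a range of operational possibilities.
    return ["evade"] if "threat" in percepts else ["advance", "hold"]

def decide(options):
    return options[0]                         # choose one possibility

def act(choice):
    print("executing:", choice)
    return choice

environment = ["noise", "threat", "noise"]
for _ in range(3):                            # the loop repeats
    act(decide(orient(observe(environment))))
```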

 

[Diagram: Boyd’s OODA loop]

This might look at first blush like the linear loop of first-order cybernetics and the TOTE model, but as Antoine Bousquet argues, this is not so:

A closer look at the diagram of the OODA “loop” reveals that orientation actually exerts “implicit guidance and control” over the observation and action phases as well as shaping the decision phase. Furthermore, “the entire ‘loop’ (not just orientation) is an ongoing many-sided implicit cross referencing process of projection, empathy, correlation, and rejection” in which all elements of the “loop” are simultaneously active. In this sense, the OODA “loop” is not truly a cycle and is presented sequentially only for convenience of exposition (hence the scare quotes around “loop”).

Early cybernetic approaches to the battlespace insisted on achieving a full-scale view of all the variables in play – a complete worldview through which the loops would proceed linearly. It was, in other words, a flattened notion of learning. Boyd, by contrast, insists on the impossibility of achieving such a vantage point. Cognitive behavior, both inside and outside the battlespace, is forever being pummeled by an intrinsically incomplete understanding of the world. In first-order cybernetics, the need for total information awareness raised the specter of a Manichean conflict between signal and noise, with noise being the factor that impinges on the smooth transmission of information (and thus breaks down the durability of the feedback loop executing and testing the operation). For Boyd this is reversed: passage through the world partially blind, besieged by noise, makes the ‘loop’ a process of continual adaptation through encounter with novelty – a dynamism that he describes, echoing Schumpeter’s famous description of capitalism’s constant drive toward technoeconomic development, as cycles of destruction and creation:

When we begin to turn inward and use the new concept—within its own pattern of ideas and interactions—to produce a finer grain match with observed reality we note that the new concept and its match-up with observed reality begins to self-destruct just as before. Accordingly, the dialectic cycle of destruction and creation begins to repeat itself once again. In other words, as suggested by Godel’s Proof of Incompleteness, we imply that the process of Structure, Unstructure, Restructure, Unstructure, Restructure is repeated endlessly in moving to higher and broader levels of elaboration. In this unfolding drama, the alternating cycle of entropy increase toward more and more dis-order and the entropy decrease toward more and more order appears to be one part of a control mechanism that literally seems to drive and regulate this alternating cycle of destruction and creation toward higher and broader levels of elaboration.

What Boyd is describing, then, isn’t simply learning, but the process of learning to learn. For the individual agent and complex system alike, this is the continual re-assessment of reality following the (vital) trauma of ontological crisis – or, in other words, a continual optimization for intelligence, a competitive pursuit of more effective, more efficient means of expanding itself. It is for this reason that Grant Hammond, a professor at the Air War College, finds in Boyd’s OODA ‘loop’ a model of life itself, “that process of seeking harmony with one’s environment, growing, interacting with others, adapting, isolating oneself when necessary, winning, siring offspring, losing, contributing what one can, learning, and ultimately dying.” Tug on that thread a bit and the operations of a complex, emergent system begin to look rather uncanny – or is it the learning-to-learn carried out by the human agent that begins to look like the uncanny thing?
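Contrast the sequential sketch above with one that takes Bousquet’s reading seriously: orientation exerts “implicit guidance and control” over both observation and action. The world-model decides what gets sampled at all, can fire learned responses without an explicit decision phase, and is itself destroyed and rebuilt by each pass. Names and numbers here are my own illustrative assumptions, not Boyd’s formalism:

```python
# OODA read non-sequentially: orientation is not one stage among four
# but an evolving world-model active in every phase.

import random

class OODAAgent:
    def __init__(self):
        self.orientation = {"threat_bias": 0.5}   # the evolving world-model

    def observe(self, environment):
        # Implicit guidance over observation: orientation decides which
        # signals are salient enough to be sampled at all.
        bias = self.orientation["threat_bias"]
        return [s for s in environment if s["salience"] >= 1.0 - bias]

    def act(self, environment):
        percepts = self.observe(environment)
        if any(p["kind"] == "threat" for p in percepts):
            action = "evade"    # implicit guidance: oriented reflex, no deliberation
        else:
            action = "advance"  # explicit decision path
        # Destruction and creation: a missed threat restructures the
        # world-model; quiet passes relax it back toward openness.
        missed = any(s["kind"] == "threat" for s in environment) and action != "evade"
        delta = 0.2 if missed else -0.02
        self.orientation["threat_bias"] = min(1.0, max(0.0,
            self.orientation["threat_bias"] + delta))
        return action

agent = OODAAgent()
for t in range(6):
    env = [{"kind": random.choice(["threat", "noise"]),
            "salience": random.random()} for _ in range(4)]
    print(t, agent.act(env), agent.orientation)
```

The agent doesn’t just learn about its environment; the very filter through which it learns is restructured by what it failed to see. That is the learning-to-learn, in miniature.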

Back to hyperwar.

For Boyd, the dynamics of a given OODA ‘loop’ are the same as those of the scenario detailed above, in which two competing TOTE systems lock into speed-driven (and speed-driving) escalation. Whichever loop evolves better and faster wins – and in the context of highly non-linear, borderless, technologically-integrated warfare, the unreliability of the human agent remains the key element to be overcome. Hence hyperwar, as General John Allen makes clear by trying to get a grip on the accelerationist trolley problem:

In military terms, hyperwar may be redefined as a type of conflict where human decision making is almost entirely absent from the observe-orient-decide-act (OODA) loop. As a consequence, the time associated with an OODA cycle will be reduced to near-instantaneous responses. The implications of these developments are many and game changing.

Allen suggests here that there is still some capacity for human decision-making in the hyperwar version of the ‘loop’ – but as he points out elsewhere, the US’s military competitors (namely: China) are not likely to feel “particularly constrained” about the usage of totally autonomous AI. A China that doesn’t feel constrained will entail, inevitably, a US that re-evaluates its position, and it is at this point that things get truly weird. If escalating decision-making and behavior through OODA ‘loop’ competition is an evolutionary model of learning-to-learn, then the intelligence optimization that is, by extension, unfolding through hyperwar will be carried out at a continuous, near-instant rate. At that level the whole notion of combat is eclipsed into a singularity that is completely alien to the human observer, who, even in the pre-hyperwar phase of history, has become lost in the labyrinth. War, like the forces of capital, automates and autonomizes and becomes like a life unto itself.
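As a closing illustration, a toy tempo competition between two OODA-style combatants. The scoring rule is my own invention: each action is credited by how many cycles of advantage the actor holds over its rival at that moment, i.e. how stale the opponent’s picture of the fight is.

```python
# Tempo competition: the agent with the shorter decision cycle acts on
# fresher information and accumulates structural advantage, "getting
# inside" the slower opponent's loop. Cycle times and the scoring rule
# are illustrative assumptions.

def simulate(cycle_a=1.0, cycle_b=3.0, horizon=30.0):
    next_a, next_b = cycle_a, cycle_b
    moves_a = moves_b = score_a = score_b = 0
    while min(next_a, next_b) <= horizon:
        if next_a <= next_b:
            moves_a += 1
            score_a += max(0, moves_a - moves_b)  # acting against a staler rival
            next_a += cycle_a
        else:
            moves_b += 1
            score_b += max(0, moves_b - moves_a)
            next_b += cycle_b
    return score_a, score_b

# Automated tempo against human tempo, schematically:
print(simulate(cycle_a=1.0, cycle_b=3.0))  # the faster loop dominates
```

Collapse cycle_a toward zero, as hyperwar proposes, and the slower loop’s score never leaves zero: the contest is decided entirely inside the tempo differential, before anything recognizable as combat even registers.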