Hyperwar (#2: Further Thoughts)


Some follow-up thoughts to yesterday’s post on Hyperwar.

In response to the scenario outlined by General John Allen, in which the United States practices restraint by keeping (minimal) human decision-making in the OODA ‘loop’ and China does not, DMF asks an important question: “why wouldn’t China feel constrained?” Why indeed? One response would be that China, as a bold emergent superpower, would feel pressured to develop hyperwar capabilities to their fullest extent because it is operating without complete knowledge of what its geopolitical opponents are up to – which is why it seems likely, in my opinion, that the US’s professed restraint would slacken quickly in the scenario that hyperwar technologies are achieved. The nuclear arms race between the US and the Soviet Union didn’t see the stockpiling of incomplete weapons – it saw the continuous development of weapon systems with the hope that simply having these systems would prevent the ultimate, final conflict. Optimizing the means of apocalypse guarantees sovereignty (and shifts the terrain of conflict elsewhere).

This brings to mind two different scenarios that, while opposed, are not necessarily mutually exclusive in long-term thinking – under the precondition that hyperwar capabilities are actually achieved. They are:

1) Hyperwar goes ‘live’, at which point the question of whether or not the human stays in the OODA ‘loop’ becomes moot due to the rapidly escalating speed of the conflict. The opposing sides will have no choice but to race to the point where the human is squeezed out – and when this occurs, the scenario speculated about at the end of yesterday’s post becomes a reality. Live hyperwar puts (human) civilization on a fast-track to Doom in the form of a Skynet scenario.

2) Hyperwar capabilities are reached (the human factor is an aside at this point), but the specter of what live hyperwar guarantees foregrounds it as a deterrent. This draws on Deadliner’s insights on the future necessity of the possibility of “Malevolent AI” (MAI) – that is, AI that “can negatively affect human activities and in the worst case cause the complete obliteration of the human species” – for securing sovereignty in the face of harsh geopolitical competition.

This brings us to two additional, opposing sub-scenarios which tie directly into a hot topic of discussion in the accelerationist and NRx spheres: patchwork and exit.

2A) Hyperwar-as-deterrence ushers in a new global order based on intensified political fragmentation and production of sovereign units.

2B) Hyperwar-as-deterrence curbs the ability for fragmentation of this sort to occur and locks-in the current geopolitical arena and its competitors.

Scenario 2A is the path of X-Risk Democratization, the position staked out by Land and others of the technocommercialist lean. An example of this dynamic already in action is North Korea’s development of its nuclear capacities in the face of international opposition. While the specter of war raised its head repeatedly, it has been averted (for now, at least) and the regime gained precisely what it set out to gain: security for itself and a better seat at the negotiating table. This is the consolidation of a sovereign unit, and it is predicated on technologies whose cost falls – and whose availability rises – over time. Thus for Land, x-risk democratization points towards an even greater diffusion of the ability to gain these capabilities, right to the point where sovereign units are able to multiply and protect themselves.

Nukes would do it. They’re certainly going to be democratized, in the end. There are probably far more remarkable accelerating WMD capabilities, though. In almost every respect (decentralized production capability, development curve, economy, impact …) bioweaponry leaves nukes in the dust. Anyone with a billion dollars, a serious grudge, and a high-end sociopathy profile could enter into a global biowarfare-threat game within a year. Everything could be put together in secret garages. Negotiations could be conducted in secure anonymity. Carving sovereignty out of the game would require only resources, ruthlessness, brilliance, and nerves. Once you can credibly threaten to kill 100,000,000 people all kinds of strategic opportunities are open. The fact no one has tried this yet is mostly down to billionaires being fat and happy. It only takes one Doctor Gno to break the pattern.

Scenario 2B would raise the counterpoint that while, yes, techno-economic trends will make pre-hyperwar and hyperwar-grade technologies ever easier to secure, the current major geopolitical actors already have a leg up in the already-existing arms race. Simply put: they will get there before others – and if they get there first, that threat can be leveraged against would-be secessionists.

The debate between Scenarios 2A and 2B must be left open-ended, as counterpoints and counter-scenarios to each rapidly multiply, especially when measured against timetables. A conversation this morning with Mantis and Schwund dug into some of these issues. A few snippets:

  • Mantis: [in reference to the aforementioned example of North Korea] hyperwar will be much quicker to proliferate imho as the pathways open to it are more numerous. like right now you can keep a country from getting a centrifuge and shut down their nuclear development capacity?
  • Schwund: but isn’t hyperwar capacity in the hands of superpowers so fundamentally game-changing that smaller nations acquiring similar things isn’t quite as easy as them getting nukes? like, such a smaller nation would have to employ a LOT of subterfuge, after all what it’s trying to trick is no longer a human government but a mechanism that may ‘decide’ to swat it just to reduce risk. like, once one nation has that capacity, it has such an advantage in quick response that a nation that still has to get there, let alone from an inferior position, would be hopelessly outpaced
  • Mantis: that’s a very good point, i was for some reason assuming the kind of lock in we have now, in which a country can covertly develop an arsenal. but of course in hyperwar conditions the second an enemy’s capacity to inflict hyperwar in response increases they would likely be wiped out
  • Schwund: yeah, unless they’re china or russia. tbs, complete global surveillance is hard
  • Mantis: global is for sure, but I assume we will see near-complete surveillance and control lock in to urban development modes and spread from the city out along transit lines

Hyperwar


In the March 2nd edition of the Wall Street Journal, Julian Barnes and Josh Chin announced the dawn of a new arms race breaking over the increasingly chaotic geopolitical arena: the competitive pursuit of artificial intelligence and related technologies. At the present moment, the United States leads the world in AI research, but with the emergence of a “Darpa with Chinese Characteristics” the mad dash is on. And behind the US and China is Russia, which hopes to have “30% of its military robotized” within the next ten years – a path that neatly complements the country’s burgeoning efficiency in non-standard netwar.

At the horizon, Barnes and Chin suggest, is a new speed-driven, technocentric mode of conflict that has been granted the qabbalistically-suggestive name of “hyperwar”:

AI could speed up warfare to a point where unassisted humans can’t keep up—a scenario that retired U.S. Marine Gen. John Allen calls “hyperwar.” In a report released last year, he urged the North Atlantic Treaty Organization to step up its investments in AI, including creating a center to study hyperwar and a European Darpa, particularly to counter the Russian effort.

The report in question unpacks hyperwar further:

Hyper war… will place unique requirements on defence architectures and the high-tech industrial base if the Alliance is to preserve an adequate deterrence and defence posture, let alone maintain a comparative advantage over peer competitors. Artificial Intelligence, deep learning, machine learning, computer vision, neuro-linguistic programming, virtual reality and augmented reality are all part of the future battlespace. They are all underpinned by potential advances in quantum computing that will create a conflict environment in which the decision-action loop will compress dramatically from days and hours to minutes and seconds…or even less. This development will perhaps witness the most revolutionary changes in conflict since the advent of atomic weaponry and in military technology since the 1906 launch of HMS Dreadnought. The United States is moving sharply in this direction in order to compete with similar investments being made by Russia and China, which has itself committed to a spending plan on artificial intelligence that far outstrips all the other players in this arena, including the United States. However, with the Canadian and European Allies lagging someway behind, there is now the potential for yet another dangerous technological gap within the Alliance to open up, in turn undermining NATO’s political cohesion and military interoperability.

“[A] conflict environment in which the decision-action loop will compress dramatically from days and hours to minutes and seconds… or even less.” Let those words sink in for a moment, and consider this hastily-assembled principle: attempts to manage the speed-effects of technological development through technological means result in more and greater speed-effects. James Beniger’s The Control Revolution: Technological and Economic Origins of the Information Society is the great compendium of historical case studies of this phenomenon in operation, tracing out a series of snaking, non-linear pathways in which technological innovation delivers a chaos that demands some form of quelling, often in the form of standards, increased visibility of operations, better methods of coordination, etc. These chaos-combating protocols become, in turn, the infrastructure of further expansion, more technological development, greater economic growth – and in this entanglement, things get faster.
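To make the shape of that principle concrete, here is a toy rendering of the Beniger dynamic – a minimal sketch, with variable names, thresholds, and rates that are my own assumptions rather than anything from The Control Revolution. Activity outruns coordination, a crisis of control triggers new protocols, and those protocols become infrastructure for faster expansion, so the crises arrive more and more quickly:

```python
# A toy simulation of the control-revolution feedback described above.
# All names, thresholds, and rates are illustrative assumptions.

def control_revolution(steps: int = 40) -> None:
    throughput = 1.0        # speed/volume of techno-economic activity
    growth = 1.05           # baseline expansion per step
    control_capacity = 1.0  # standards, visibility, coordination protocols

    for t in range(steps):
        throughput *= growth
        if throughput - control_capacity > 0.5:   # crisis of control
            control_capacity = throughput * 1.2   # new protocols quell the chaos...
            growth += 0.01                        # ...and become infrastructure for faster expansion
            print(f"t={t:2d}: control crisis -> growth rate now {growth:.2f}")

control_revolution()
```

Run it and the interval between crises shrinks: each round of quelling accelerates the very system that demanded it.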

Beniger’s argument is that this dynamic laid the groundwork for the information revolution, with information theory, communication theory, cybernetics, and the like all emerging from managerial discourses as ways to navigate the unpredictability of modernity. We need no great summary of the effects of this particular revolution, with its space-time compression, its unending cycles of events, the breakdown of discernibility between the true and the false, and the rising tide of raw information that threatens to swamp us and eclipse our cognition.

Where this path of inquiry leads is to the recognition that modernity is being dragged, kicking and screaming, into the maw of the accelerationist trolley problem: catastrophe is barreling forward, and the space available for decision-making is evaporating just as quickly. There simply isn’t enough time.

Even in the basic, preliminary foreshadows of the problem, command-and-control systems tend to find themselves submerged and incapacitated. Diagramming decision-making and adjusting the role of the human in that diagram is the foremost response (and one completely flush with the assessment drawn from Beniger sketched out briefly above). First-order cybernetics accomplished this by drawing out the position of the human agent within the feedback loops of the system in question and better integrating the agent’s decision-making capacity with these processes. From Norbert Wiener’s AA predictor to the SAGE computer system to Operation Igloo White in Vietnam, this not only blurred the human-machine boundary but laid the groundwork for the impending outright removal of the human agent from the loop.

[Image: the TOTE model]

Consider the TOTE model of human behavior, which imported the fundamental loop of first-order cybernetics perfectly into the nascent field of cognitive psychology. TOTE: test-operate-test-exit. Goal-seeking behavior in this model follows a basic process of testing the alignment of an operation’s effect with the goal, and adjusting in kind. But consider two systems whose goals are to win out over the other, each following the TOTE model in relation to the other’s actions. The decisions made in one system impact the decisions made in the other, veering the entanglement of the two away from anything resembling homeostasis. Add in the variables of speed, the impossibility of achieving total information awareness in the environment, and the hard cognitive limits of the human agent, and we arrive at the position where the role of the human in the loop becomes a liability. But it’s not just the human, as the US military learned in Vietnam: the entire infrastructure, even with the aid of the cybernetic toolkit, falls victim to information bottlenecks, decision-making paralysis, and the fog of war. The crushing necessity of better, more efficient tools is revealed in the aftermath – but this, of course, will deepen the problem as it unfolds along the line of time.
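The entanglement of two goal-opposed TOTE systems can be sketched in a few lines – again a minimal toy, with an escalation rule I have simply made up for illustration. Because each agent’s test is indexed to the other’s capability, neither test ever passes, and the exit phase never arrives:

```python
# Two coupled TOTE (test-operate-test-exit) agents, each with the goal of
# out-matching the other. The escalation rule is an illustrative assumption.

def tote_arms_race(rounds: int = 10) -> None:
    a, b = 1.0, 1.0   # each side's current capability

    for r in range(rounds):
        if a <= b:           # A's test fails...
            a = b * 1.5      # ...so A operates: escalate past the opponent
        if b <= a:           # B runs the same loop against A
            b = a * 1.5
        print(f"round {r}: A={a:.2f}  B={b:.2f}")
        # Neither test ever holds for long, so neither agent reaches 'exit':
        # the coupled loops drive each other away from homeostasis.

tote_arms_race()
```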

Enter John Boyd’s OODA loop. As with the trajectory of Wiener’s thought, Boyd’s theory was first drawn from the study of aviation combat and radiated outwards from there. OODA stands for observation-orientation-decision-action, and like the TOTE model it models cognitive behavior in decision-making as a series of loops. Observation entails the absorption of environmental information by the agent or system, which is processed in the orientation phase to provide context and a range of operational possibilities to choose from. Decision is the choice of an operational possibility, which is then executed as an action. This returns the agent or system to the observation phase, and the process repeats.
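Read sequentially, the loop is simple enough to render as a naive state machine. The sketch below is my own loose interpretation (the function names and the random partial-observation rule are assumptions) – and, as we will see, this linear reading is precisely what Boyd’s full diagram complicates:

```python
# The sequential reading of the OODA loop, as a naive state machine.
# Function names and the partial-observation rule are illustrative assumptions.
import random

def observe(environment: dict) -> dict:
    # Absorb (partial) information from the environment.
    return {k: v for k, v in environment.items() if random.random() > 0.2}

def orient(observations: dict) -> list:
    # Contextualize observations into a range of operational possibilities.
    return [f"respond-to-{k}" for k in observations]

def decide(options: list) -> str | None:
    # Choose one operational possibility (or none, if orientation yields nothing).
    return random.choice(options) if options else None

def act(choice: str | None, environment: dict) -> None:
    # Execute the action, perturbing the environment for the next pass.
    if choice:
        environment[choice] = environment.get(choice, 0) + 1

environment = {"threat": 1}
for _ in range(3):   # observation -> orientation -> decision -> action, repeated
    choice = decide(orient(observe(environment)))
    act(choice, environment)
```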

 

[Image: diagram of Boyd’s OODA ‘loop’]

This might look at first blush like the linear loop of first order cybernetics and the TOTE model, but as Antoine Bousquet argues this is not so:

A closer look at the diagram of the OODA “loop” reveals that orientation actually exerts “implicit guidance and control” over the observation and action phases as well as shaping the decision phase. Furthermore, “the entire ‘loop’ (not just orientation) is an ongoing many-sided implicit cross referencing process of projection, empathy, correlation, and rejection” in which all elements of the “loop” are simultaneously active. In this sense, the OODA “loop” is not truly a cycle and is presented sequentially only for convenience of exposition (hence the scare quotes around “loop”).

Early cybernetic approaches to the battlespace insisted on achieving a full-scale view of all the variables in play – a complete worldview through which the loops would proceed linearly. It was, in other words, a flattened notion of learning. Boyd, by contrast, insists on the impossibility of achieving such a vantage point. Cognitive behavior, both inside and outside the battlespace, is forever being pummeled by an intrinsically incomplete understanding of the world. In first-order cybernetics, the need for total information awareness raised the specter of a Manichean conflict between signal and noise, with noise being the factor that impinges on the smooth transmission of information (and thus breaks down the durability of the feedback loop executing and testing the operation). For Boyd this is reversed: passage through the world partially blind, besieged by noise, makes the ‘loop’ a process of continual adaptation through encounter with novelty – a dynamism that he describes, echoing Schumpeter’s famous description of capitalism’s constant drive to technoeconomic development, as cycles of destruction and creation:

When we begin to turn inward and use the new concept—within its own pattern of ideas and interactions—to produce a finer grain match with observed reality we note that the new concept and its match-up with observed reality begins to self-destruct just as before. Accordingly, the dialectic cycle of destruction and creation begins to repeat itself once again. In other words, as suggested by Godel’s Proof of Incompleteness, we imply that the process of Structure, Unstructure, Restructure, Unstructure, Restructure is repeated endlessly in moving to higher and broader levels of elaboration. In this unfolding drama, the alternating cycle of entropy increase toward more and more dis-order and the entropy decrease toward more and more order appears to be one part of a control mechanism that literally seems to drive and regulate this alternating cycle of destruction and creation toward higher and broader levels of elaboration.

What Boyd is describing, then, isn’t simply learning, but the process of learning to learn. For the individual agent and complex system alike, this is the continual re-assessment of reality following the (vital) trauma of ontological crisis – or, in other words, a continual optimization for intelligence, a competitive pursuit of more effective, more efficient means of expanding itself. It is for this reason that Grant Hammond, a professor at the Air War College, finds in Boyd’s OODA ‘loop’ a model of life itself, “that process of seeking harmony with one’s environment, growing, interacting with others, adapting, isolating oneself when necessary, winning, siring offspring, losing, contributing what one can, learning, and ultimately dying.” Tug on that thread a bit and the operations of a complex, emergent system begin to look rather uncanny – or is it the learning-to-learn carried out by the human agent that begins to look like the uncanny thing?
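Bousquet’s point above can also be put in code. In the sequential sketch earlier, orientation was just a stage; in Boyd’s full diagram it is persistent state that filters what gets observed and shapes what gets done, and is itself modified by every pass. A rough sketch, with the class and all its fields being my own assumptions:

```python
# Orientation as persistent state exerting 'implicit guidance and control'
# over observation and action, per Bousquet's reading of Boyd's diagram.
# The class and its fields are illustrative assumptions.

class OODAAgent:
    def __init__(self):
        # Accumulated context (prior experience, culture, etc.) that persists
        # across cycles instead of living inside a single 'orient' stage.
        self.orientation: dict[str, float] = {}

    def observe(self, environment: dict) -> dict:
        # Implicit guidance: orientation filters what gets noticed at all.
        return {k: v for k, v in environment.items()
                if self.orientation.get(k, 1.0) > 0.5}

    def decide_and_act(self, observations: dict) -> str:
        # Decision is shaped by orientation...
        salient = max(observations, default="wait",
                      key=lambda k: self.orientation.get(k, 1.0))
        # ...and acting feeds straight back into orientation, so every phase
        # is simultaneously active rather than strictly sequential.
        self.orientation[salient] = self.orientation.get(salient, 1.0) * 1.1
        return f"act-on-{salient}"

agent = OODAAgent()
print(agent.decide_and_act(agent.observe({"threat": 1.0, "noise": 0.2})))
```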

Back to hyperwar.

For Boyd, the dynamics of a given OODA ‘loop’ are the same as in the scenario detailed above, in which two competing TOTE systems lock in to speed-driven (and speed-driving) escalation. Whichever loop evolves better and faster wins – and in the context of highly non-linear, borderless, technologically-integrated warfare, the unreliability of the human agent remains the key element to be overcome. Hence hyperwar, as General John Allen makes clear in trying to get a grip on the accelerationist trolley problem:

In military terms, hyperwar may be redefined as a type of conflict where human decision making is almost entirely absent from the observe-orient-decide-act (OODA) loop. As a consequence, the time associated with an OODA cycle will be reduced to near-instantaneous responses. The implications of these developments are many and game changing.

Allen suggests here that there is still some capacity for human decision-making in the hyperwar version of the ‘loop’ – but as he points out elsewhere, the US’s military competitors (namely: China) are not likely to feel “particularly constrained” about the usage of totally autonomous AI. A China that doesn’t feel constrained will entail, inevitably, a US that re-evaluates this position, and it is at this point that things get truly weird. If escalating decision-making and behavior through OODA ‘loop’ competition is an evolutionary model of learning-to-learn, then the intelligence optimization that is, by extension, unfolding through hyperwar will be carried out at a continuous, near-instant rate. At that level the whole notion of combat is eclipsed into a singularity completely alien to the human observer who, even in the pre-hyperwar phase of history, has become lost in the labyrinth. War, like the forces of capital, automates and autonomizes and becomes like a life unto itself.
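As a crude illustration of why the tempo question swallows everything else, consider two sides running OODA cycles at different speeds. The cycle times and the ‘staleness’ rule below are arbitrary assumptions, not figures from Allen’s report; the point is only that the faster side acts many times inside each of the slower side’s cycles, so nearly every decision the slower side makes is indexed to a world that no longer exists:

```python
# A toy model of OODA tempo asymmetry. Cycle times and the staleness rule
# are arbitrary assumptions made for illustration.

def duel(cycle_a: float, cycle_b: float, horizon: float = 10.0) -> None:
    # One timestamped 'act' per completed cycle for each side.
    timeline = sorted(
        [("A", i * cycle_a) for i in range(int(horizon / cycle_a))] +
        [("B", i * cycle_b) for i in range(int(horizon / cycle_b))],
        key=lambda event: event[1],
    )
    # B acts on a 'stale picture' whenever A has acted since B last observed.
    stale_b = sum(
        1 for i, (side, _) in enumerate(timeline)
        if side == "B" and i > 0 and timeline[i - 1][0] == "A"
    )
    a_count = sum(side == "A" for side, _ in timeline)
    b_count = len(timeline) - a_count
    print(f"A: {a_count} actions, B: {b_count} actions, "
          f"B outpaced on {stale_b} of its {b_count} cycles")

# Automated tempo vs. human-in-the-loop tempo (assumed, arbitrary numbers):
duel(cycle_a=0.05, cycle_b=3.0)
```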