In December 2022, the Ukrainian army released an instructional video for Russian soldiers titled How to Surrender to a Drone. In the video, we see "three men in uniform and white armbands in a trench within a snowy landscape" who are later "led to Ukrainian captivity by a small red quadcopter". Those drones are very likely operated by humans, but it's easy to imagine an automated version of them, connected to AI-driven targeting systems. Surrender or die.
Last summer, the Financial Times ran the headline How Silicon Valley is helping the Pentagon in the AI arms race for a piece about how the "US military is opening up to defence and weapons start-ups" using AI technology. In it, they describe the startup Saildrone, which builds autonomous sailboats that collect data for an "unrivalled database of ocean maps which could then be analysed by machine learning programs" and which also provides data for climate research. In 2021, the "company was a key contractor helping the US Navy to develop an armada of artificial intelligence systems to conduct surveillance in international waters, including the Arctic Ocean surrounding Russia and the South China Sea."
Saildrone is hardly the only company using advances in civilian AI applications to land lucrative government contracts with the military.
In its annual Unitas exercise, the Navy "brought swarms of air and sea drones (which) collected and shared reconnaissance data that helped the multinational fleet detect, identify, and take out enemy craft more quickly." This so-called Replicator program also envisions "a thousand 'robotic wingmen' to assist manned aircraft" and "thousands of 'smart satellites' that use A.I. to navigate and track adversaries". Elon Musk's SpaceX and Jeff Bezos's Blue Origin are sure contenders for launching these satellites into orbit, while Eric Schmidt has hidden his new military AI startup "White Stork" for "suicide attack drones" behind a "nesting doll of LLCs". "White Stork" is "a reference to the national bird and sacred totem of Ukraine, where Schmidt has assumed the role of defense tech advisor and financier".
What is taking shape here is a whole pipeline for AI-moderated, semi-autonomous decision-making on the battlefield: aerial and naval drone swarms collect the intel, generative AI analyzes that intel to select targets, and autonomous aerial drone systems carry out the attacks. All any human in the loop has to do is confirm.
These autonomous weapon systems are under scrutiny by the United Nations, and a resolution on lethal autonomous weapons from November 2023 declared that "an algorithm must not be in full control of decisions that involve killing or harming humans", and that "the principle of human responsibility and accountability for any use of lethal force must be preserved, regardless of the type of weapons system involved". It wouldn't be the first UN resolution violated by bad actors.
The US has its own regulation of autonomous weapons in DoD Directive 3000.09, which was just updated in 2023. Human Rights Watch reviewed this updated directive and found an interesting detail: the "definition preserves the 2012 directive’s definition of an autonomous weapon system as 'a weapon system that, once activated, can select and engage targets without further intervention by a human operator'", but "the 2023 directive removes the word 'human' before 'operator'", and "defines an 'operator' as 'a person who operates a platform or weapon system'."
Overtrusting AI on the battlefield
The pipeline I describe above, in which autonomous drone swarms collect data that is then analyzed to generate targets for autonomous attack drones, can easily be seen as one single platform. That means a human no longer has to hand a specific target to an autonomous weapon and hit a button; she only has to designate an approximate area to scan for combatants, and the whole chain from targeting to pulling the trigger can be fully automated. The operator just has to trust the system and its generated targets, and execute.
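To make the division of labor concrete, here is a minimal sketch of such a pipeline. All names and data are invented for illustration; this is a picture of the architecture described above, not the API of any real system.

```python
from dataclasses import dataclass

@dataclass
class Target:
    location: tuple        # (lat, lon) as reported by the swarm
    confidence: float      # the model's own confidence score
    label: str             # e.g. "combatant", "vehicle"

def collect_intel(area):
    """Hypothetical drone-swarm sweep of an approximate area (lat, lon, radius_km).
    Simulated here with a single dummy sensor record."""
    lat, lon, _radius_km = area
    return [{"lat": lat, "lon": lon, "signature": "thermal"}]

def generate_targets(intel):
    """Hypothetical analysis model that turns raw intel into target nominations."""
    return [Target(location=(r["lat"], r["lon"]), confidence=0.87, label="combatant")
            for r in intel]

def strike(target):
    """Stand-in for an autonomous attack drone executing the engagement."""
    print(f"engaging {target.label} at {target.location}")

def run_pipeline(area, operator_confirms):
    intel = collect_intel(area)               # fully automated
    for target in generate_targets(intel):    # fully automated
        if operator_confirms(target):         # the only human step left
            strike(target)                    # fully automated

# The operator's role collapses to a yes/no decision per nominated target:
run_pipeline(area=(48.5, 35.0, 10.0),
             operator_confirms=lambda t: t.confidence > 0.8)
```

Note that nothing in this structure forces a careful review: a confirm step that simply waves through anything above the model's own confidence score looks, to the rest of the pipeline, exactly like a diligent human check.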
The word trust is crucial here: a 2022 paper found that humans overtrust AI systems in ethical decision-making, while a preprint from October 2023 found a "strong propensity to overtrust unreliable AI in life-or-death decisions made under uncertainty" in experiments designed to study "trust in the recommendations of artificial agents regarding decisions to kill". The implications are obvious: autonomous weapon systems are being developed with ever easier and broader methods to identify, select, and kill, and soldiers overtrust these automated systems in life-or-death decisions. The little edit in US DoD Directive 3000.09, which eliminated the word "human" before "operator", paved the regulatory way for this.
At least in part, a pipeline like this is already being used in real wars. In December 2023, the Guardian, drawing on reporting by the Israeli-Palestinian publication +972 Magazine and the Hebrew-language outlet Local Call, revealed ‘The Gospel’: how Israel uses AI to select bombing targets in Gaza. "The Gospel" is an "AI-facilitated military intelligence unit that is playing a significant role in Israel’s response to the Hamas massacre in southern Israel on 7 October".
It is, in short, an automatic targeting system that sucks up all kinds of intel data, including "drone footage, intercepted communications, surveillance data and information drawn from monitoring the movements and behaviour patterns of individuals and large groups". With this, the IDF was able to ramp up its target generation from 50 per year to 100 per day, where targets are "individuals authorised to be assassinated". The system has also been called a "'mass assassination factory' in which the 'emphasis is on quantity and not on quality'. A human eye, they said, 'will go over the targets before each attack, but it need not spend a lot of time on them'." To me, this sounds very much like the "strong propensity to overtrust unreliable AI in life-or-death decisions" from the preprint mentioned above.
Palantir, Peter Thiel's AI-warfare company, just announced a "strategic partnership" with Israel to "harness" the company's "advanced technology in support of war-related missions", and it's safe to say that "The Gospel" is about to improve a lot (if you want to see these developments as "improvements"). In a demo Palantir presented in 2023, they show a ChatGPT-like bot that guides a human operator from being alerted to a situation, to identifying the target, to strategic decision-making, to "send(ing) these options up the chain of command" for the final decision to pull the trigger. In that scenario, two humans remain in the loop: the operator and the decision maker, formally staying within the boundaries set by the UN resolution (for now).
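Sketched out, that workflow reduces the two human roles to two approval callbacks around a model that does the analytical work. Again, every name here is invented for illustration; this is not Palantir's actual interface.

```python
def assistant_propose_options(alert):
    """Chat-style model drafts identification and engagement options from an alert."""
    return [f"Option A: strike the asset reported in '{alert}'",
            f"Option B: task a drone to re-observe '{alert}'"]

def demo_workflow(alert, operator_picks, commander_approves):
    options = assistant_propose_options(alert)   # the model does the analysis
    choice = operator_picks(options)             # human 1: the operator selects
    if choice and commander_approves(choice):    # human 2: final sign-off
        return choice                            # handed off for execution
    return None

# Example run: an operator who takes the first suggestion, a commander who signs off.
decision = demo_workflow("enemy armor near the reported grid",
                         operator_picks=lambda opts: opts[0],
                         commander_approves=lambda choice: True)
print(decision)
```

Two humans are formally in the loop, but both are gatekeepers over options the model has already framed; how much independent judgment either applies is entirely up to them, which is exactly where the overtrust findings above come in.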
Minotaurs and Centaurs
In the spring 2023 issue of The US Army War College Quarterly, Robert J. Sparrow and Adam Henschke replaced the framing of future war operations known as "Centaur Warfighting", in which the future of war lies in "human-machine hybrids" (the Centaur being the mythological creature that is half human and half horse), with a "Minotaur" framing, after the mythical creature with the head of a bull on a human body. In that framing, AI systems effectively become the head of war operations, leading and guiding all decisions at all stages of combat, which, at this point, seems more likely than human-robot hybrids. Data analytics beat the Terminator.
Two weeks ago, OpenAI deleted its ban on using ChatGPT for "Military and Warfare" and revealed that it's working with the military on "cybersecurity tools". It's clear to me that the darlings of generative AI want in on the wargames, and I'm very confident they are not the only ones. With ever more international conflicts turning hot, from Israel's war on Hamas after the massacre on 7 October, to Russia's invasion of Ukraine, to local conflicts like the Houthis attacking US trade ships with drones and the US retaliating, plus the competitive pressure from China, which surely has its own versions of AI-powered automated weapon systems in place, I absolutely think that automatic war pipelines are in high demand from many, many international players with very, very deep pockets, and Silicon Valley seems more than eager to exploit that demand.
This AI arms race, then, is a competition to become the leading corporate component of the Minotaur's machine head.
If you have any doubts about how the AI industry in Silicon Valley will make the money needed for its funny and exploitative office and marketing toys for white-collar workers, and for its exploding energy and compute costs:
It's right here, in the AI-Military Complex that is taking shape. And I have a very bad feeling about this.