  • Adrien Book

5 Ways Artificial Intelligence Will Forever Change War

Anyone with a wet finger in the air has by now heard that Google is facing an identity crisis because of its links to the American military. To crudely summarise, Google chose not to renew its “Project Maven” contract to provide artificial intelligence (A.I) capabilities to the U.S. Department of Defense after employee dissent reached a boiling point.

This is an issue for Google, as the “Don’t Be Evil” company is currently in an arm-wrestling match with Amazon and Microsoft for some juicy Cloud and A.I government contracts worth around $10B. Rejecting such work would deprive Google of a potentially huge business; in fact, Amazon recently advertised its image recognition software “Rekognition for defense”, and Microsoft has touted the fact that its cloud technology is currently used to handle classified information within every branch of the American military. Nevertheless, the nature of the company’s culture means that proceeding with big defense contracts could drive A.I experts away from Google.

Though Project Maven is reportedly a “harmless”, non-offensive, non-weaponised, non-lethal tool used to identify vehicles in order to improve the targeting of drone strikes, its implementation raises serious questions about the battlefield moving to data centers, and about the difficulty of separating civilian technologies from the actual business of war. Try as they may to forget it, the staff at Google know that the images analysed by their A.I would be used for counter-insurgency and counter-terrorism strikes across the world, operations that are seldom victim-free.

Those worries echo those of the founders of about 100 of the world’s top artificial intelligence and robotics companies (including potential James Bond villain Elon Musk), who last summer wrote an open letter to the UN warning that autonomous weapon systems able to identify targets and fire without a human operator could be wildly misused, even with the best intentions. And though Terminators are far from reality, as is full lethal autonomy, some worrying trends are emerging in the field.

This is, of course, no wonder, as the creation of self-directing weapons constitutes the third weaponry revolution after gunpowder and the atomic bomb. For this reason, the U.S is in the midst of an A.I arms race: military experts say China and Europe are already investing heavily in A.I for defense. Russia, meanwhile, is also headed towards an offensively-minded, weaponised A.I. Putin was quoted last year saying “artificial intelligence is the future not only of Russia but of all of mankind … Whoever becomes the leader in this sphere will become the ruler of the world”. Beyond the inflammatory nature of this statement, it is (partly) true. In fact, the comparative advantages that the first state to develop A.I would have over others — militarily, economically, and technologically — are almost too vast to predict.

The stakes are as high as they can get. In order to understand why, we need to go back to the basics of military strategy.

1. The End Of MAD

Since the Cold War era, a foundation of the military complex has been the principle of Mutually Assured Destruction (MAD): any attacker that fails to fully destroy its target in a first strike will be retaliated against and destroyed in turn. Countries have nevertheless continued to seek first-strike capability to gain an edge, and may well choose to use it if they see the balance of Mutually Assured Destruction begin to erode, on one side or the other.

This is where A.I comes in: with mass surveillance and intelligent identification of patterns and potential targets, the first mover could make the first move virtually free of consequence. Such a belief in a winner-takes-all scenario is what leads to arms races, which destabilise the world order in major ways: disadvantaged states with their backs against the wall are less likely to act rationally once MAD no longer holds, and more likely to engage in what is called a pre-emptive war — one that would aim to stop an adversary from gaining an edge such as a powerful military A.I.

Autonomous weapons don’t need to kill people to undermine stability and make catastrophic war more likely: an itchier trigger finger might be enough.

2. Use It Or Lose It

Another real danger with any shiny new toy is a use-it-or-lose-it mentality. Once an advantage has been gained, how does a nation convey that MAD is back on the menu? We don’t have to look far back into our history to know what most nations would do if a weaponised A.I were created: once the U.S developed the nuclear bomb, it eagerly used it to A) test it, B) make its billion-dollar investment seem worth it (no point spending $$$ for nothing) and C) show the Russians that it had a new weapon anyone ought to be afraid of, changing the balance of power for the following decade. Any country creating lethal A.Is will want to show them off for political and strategic reasons. It’s our nature.

3. The Dehumanisation Of War

A common (read: stupid) argument in favour of automated weapons claims that deploying robotic systems might be more attractive than “traditional warfare” because there would be fewer body bags coming home, since bots would be less prone to error, fatigue, or emotion than human combatants. That last part is a major issue. Emotions such as sadness and anger are what make us human, in times of war as in times of peace. Mercy is an inherent and vital part of us, and the arrival of lethal autonomous weapon systems threatens to dehumanise the victims of war. To quote Aleksandr Solzhenitsyn: “the line separating good and evil passes not through states, nor between classes, nor between political parties either — but right through every human heart — and through all human hearts.”

Furthermore, developed countries may suffer fewer casualties, but what about the nations where superpowers wage their proxy wars? Most do not have the luxury of home-grown robotics companies, and as such would bear the human cost of war, as they have for the past century. Those countries, unable to retaliate on the battlefield, would be more likely to turn to international terrorism as the only way left to hurt their adversaries.

4. Losing Control

Assuming the new technology we speak about is a mix of artificial intelligence, machine learning, robotics and big-data analytics, it would produce systems and weapons with varying degrees of autonomy, from being able to work under constant human supervision to “thinking” for themselves. The most decisive factor on the battlefield of the future may then be the quality of each side’s algorithms, and combat may speed up so much that humans can no longer keep up (the premise of my favourite Asimov short story).

Another risk in this respect is the potential for an over-eager, under-experienced player losing control of its military capacities (looking at you, Russia). The fuel for A.I is data. Feed a military bot the wrong data and it might identify all humans within range as potential targets.
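To make that point concrete, here is a deliberately toy sketch in Python (unrelated to any real military system; every name, label and number below is invented): a “classifier” that learns a hostile-speed threshold from labelled observations. Train it on data in which slow-moving civilians were mislabelled as targets, and it will happily flag a pedestrian.

```python
# A toy, purely illustrative "threat classifier" (invented for this article,
# not any real system). It learns a single speed threshold from labelled
# observations: anything at or above the slowest speed ever labelled
# "target" in training gets flagged.

def learn_threshold(samples):
    """samples: list of (speed_m_s, label) pairs, label 'target' or 'civilian'."""
    target_speeds = [speed for speed, label in samples if label == "target"]
    if not target_speeds:
        return float("inf")  # nothing labelled hostile -> never flag anything
    return min(target_speeds)

def classify(speed, threshold):
    return "target" if speed >= threshold else "civilian"

# Sensible training data: only fast vehicles were labelled hostile.
good_data = [(250, "target"), (300, "target"), (5, "civilian"), (8, "civilian")]

# "Wrong" training data: slow-moving people were mislabelled as hostile too.
bad_data = [(250, "target"), (300, "target"), (1.5, "target"), (2.0, "target")]

for name, data in [("good data", good_data), ("bad data", bad_data)]:
    threshold = learn_threshold(data)
    # How does each model treat a pedestrian walking at ~1.5 m/s?
    print(f"{name}: pedestrian classified as {classify(1.5, threshold)}")
```

The rule itself never changes; only the data does, and that alone is enough to turn “civilian” into “target”.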

5. Dumb Bots

The strong potential for losing control is why experts don’t fear a smart A.I. They fear a dumb one. Research into complex systems shows how behaviour can emerge that is far more unpredictable than the sum of individual actions. As such, it is unlikely we will ever fully understand or control certain types of robots. Knowing that some may have guns is a very good reason to ask for more oversight on the matter.
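For a loose feel of that unpredictability, consider the logistic map, a one-line deterministic rule beloved of chaos theory (a maths toy, not a model of any weapon): two starting points a millionth apart end up in completely different places within a few dozen steps. Systems built from many interacting components, sensors and adversaries behave even less predictably.

```python
# The logistic map x -> r * x * (1 - x) with r = 4: a single, fully
# deterministic rule whose behaviour quickly becomes unpredictable in practice.

def trajectory(x0, steps, r=4.0):
    """Iterate the logistic map from x0 and return every visited value."""
    values = [x0]
    for _ in range(steps):
        x0 = r * x0 * (1.0 - x0)
        values.append(x0)
    return values

a = trajectory(0.400000, 50)  # one starting point...
b = trajectory(0.400001, 50)  # ...and one just a millionth away

for step in (0, 10, 25, 50):
    print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f} (gap {abs(a[step] - b[step]):.6f})")
```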

The main problem we face today is not the above, but the fact that the uninformed masses are framing the issue as Terminator vs Humanity, which fuels a science-fiction narrative instead of addressing the very real, very tough challenges currently facing law, policy and business. As Google showed with the publication of its vapid set of principles on how it will ethically implement artificial intelligence, companies cannot be trusted to always make the right choice. We therefore need a practical, ethical, and responsible framework, organised internationally by the relevant organisations and governments, in order to create a set of accepted norms.

Automated defense systems can already make decisions based on an analysis of a threat — the shape, size, speed and trajectory of an incoming missile, for example — and choose an appropriate response much faster than humans can. Yet, at the end of the day, the “off switch” to mitigate the harm that killer robots may cause is in each one of us, at every moment and in every decision, big and small.

This has always been the case.
