Sep 09, 2017

Cry havoc and let slip the AIs of war

By Gautham Shenoy

It’s a race that the USA is seemingly leading right now. Meanwhile, China has already embarked on a plan with the stated goal of becoming the world leader in artificial intelligence. Not to be left behind in this new version of the arms race, Russia threw its hat into the ring with president Vladimir Putin stating that whoever leads in this sphere will ‘rule the world’. This is the future of war.
What a coincidence then, that in Harlan Ellison’s 1967 short story, I Have No Mouth, and I Must Scream, it is these exact same countries that each build a supercomputer to fight their wars more efficiently. The result? One of them – incidentally sci-fi’s most sadistic, evil AI – crosses over into sentience and ultimately wipes out almost all of humanity. Little wonder that many think of strong AI, or Artificial General Intelligence, as an existential risk to human civilisation.
It would perhaps do us all some good – and possibly save our species from extinction, elimination or enslavement – if we didn’t dismiss the superintelligence takeover scenarios of science fiction as far-fetched, and instead gave them some thought, as many have done and are doing. Starting with autonomous weapons, against which people like Stephen Hawking, Elon Musk, Steve Wozniak, Noam Chomsky and 3,105 AI/robotics researchers – including Stuart Russell, co-author of the standard textbook “Artificial Intelligence: A Modern Approach”, and Demis Hassabis – have signed an open letter calling for a ban on offensive autonomous weapons beyond meaningful human control.
Take another – perhaps the most famous – example of AI employed in the service of war: Skynet from the Terminator movies. Skynet wasn’t created for the US military to kill people, no sir!
Skynet was developed merely to eliminate human error and counter-attack an enemy swiftly and efficiently, with a programmed mandate to ‘safeguard the world’. Once activated, it learnt and grew in intelligence till it achieved sentience. Once it realised that the people trying to deactivate it were getting in the way of its stated goal, it began seeing humans as a threat and began defending itself, starting with Judgement Day. Skynet isn’t evil, for AIs don’t think in human terms or share our limited emotions. Skynet is merely trying to ensure its end goal of safeguarding the world, which it can only do if it remains operational (alive?). Self-preservation is just a means to an end for a superintelligent entity.
For sure, superintelligent AI may be beneficial when it arises – estimates put its arrival anywhere between 2022 and 2040 (50% likelihood) and 2075 (90% likelihood), with a small minority saying never at all. But a sure-fire way of ensuring that – after an intelligence explosion – the superintelligence that emerges is malevolent to humans is to program an AI from the beginning to be an instrument of war: militaries developing cyber-weapons to achieve dominance, or governments using AI to try and ‘rule the world’. This is one genie you cannot push back into the bottle.

To the supercomputer WOPR (War Operation Plan Response) in 1983’s WarGames, war is just another game, a simulation – until a student hacker accesses it to play, erasing the line between simulation and reality.

Another scenario in which AI becomes our machine overlord comes to us from the 1966 science fiction novel Colossus by D.F. Jones, later adapted into the 1970 movie Colossus: The Forbin Project. It starts with the titular Colossus, an American supercomputer built to control the nation’s nuclear defences and end all warfare, being activated. It soon discovers a Soviet counterpart, and before long these two AIs are learning from each other until they achieve true sentience and become one entity. Colossus then embarks on ending warfare in its own way: assuming total control of the world – for the greater good of mankind, of course. So almost invariably in science fiction, when AI has been weaponised, the loser has been our species as a whole.
Cherrypicking? Mayhap. There are more perspectives to this, other scenarios which are stories for another day, but for now suffice to say Elon Musk isn’t quite crying wolf when he says that the competition among governments for AI supremacy could trigger World War 3 – quite possibly with an AI making a pre-emptive strike if that is the best path to victory. And if it’s not governments, there’s the fear of non-state players hacking into the system. As happens in the classic 1983 science fiction movie WarGames, where a hacker inadvertently accesses a US military supercomputer that controls the nuclear arsenal and is programmed to predict the outcomes of war, learning over time by running simulations. Thinking he’s playing a game called ‘Global Thermonuclear War’, he sets in motion an AI that can no longer tell the difference between simulation and reality, and which attempts to start World War 3. Unlike the others, this story ends well. Almost.
But we are only on the cusp of autonomous killing machines, not full AI. And aren’t there more pressing issues facing us in the here and now – challenges in the present – so why bother with something that lies in the future and may never even happen? Well, prevention is better than cure, better safe than sorry (read: dead), and what’s the point of closing the stable door after the AI has bolted? After all, one does not start digging a well after the house catches fire. It may be too little, a little too late.


Gautham Shenoy is a writer at FactorDaily.