An overwhelming majority of participants at the World Economic Forum (“WEF”) have indicated their support for sending autonomous weapons to war in place of human soldiers.
The moderator posed the hypothetical question to WEF participants gathered in Dalian, China, on Wednesday during a session entitled “What if: Machines outsmart us all”:
“If a war broke out today, would you rather send an army of AI [Artificial Intelligence] robots, or the sons and daughters of your community?”
Not a single participant voted to send the sons and daughters of their community, with the majority raising their hands in favour of AI robots. Similar results ensued when WEF organisers posed the question in a public poll.
In response to the clear mandate participants had given AI robots, a panelist at the session, Professor Stuart Russell of the University of California, Berkeley, pointed out that participants were thereby also choosing to be attacked by lethal robots in a war, given the likelihood of an arms race if even one country pursues this route.
Professor Russell further cautioned about the deadliness and reach of autonomous weapons, citing the example of a large cloud of insect-like robots that could annihilate millions of people with ease:
“The capacity to kill millions of human beings would be in the hands of anybody who could afford to spend 20 million dollars to buy a million microscopic robots – each carrying a one-gram explosive charge that could blow [up] a human head.
“A large military [is not needed] to run [these robots]. The balance of powers will be tipped away from nations towards non-state actors – a direction we don’t need to go.”
Professor Russell’s views echo those of an illustrious list of nearly 20,000 AI and robotics experts from top universities such as Harvard and MIT, as well as senior executives from technology companies such as Apple and Google, who have signed an open letter against the development of autonomous weapons.
Notwithstanding global awareness within the intellectual community of the perils of autonomous weapons, governments are investing in AI weapons programs and – as one anonymous source revealed – some countries are on the brink of creating these lethal AI systems.
The United Nations has held three meetings on lethal autonomous weapons systems since 2013 as calls for a ban or an international treaty to limit the use of autonomous weapons in warfare resound within the intellectual community.
A supporter of a ban on autonomous weapons and a signatory of the open letter, Dr Daniel Murfet from the University of Melbourne, criticises the underlying premise of what he describes as an emotive question that forces a choice between sending human soldiers or AI robots to war.
Dr Murfet explains, “[The question] is a fallacy. Because war isn’t starting [today], unless [countries] make it happen. Building autonomous weapons increases the chances of war because such weapons decrease the costs of going to war. And if there is a war, eventually many people will die too.”
It remains to be seen whether public sentiment or the views of the intellectual community will exert greater influence on this contentious and pressing issue.