Can we control killer robots?

  Last updated July 19, 2019 at 1:40 pm

Autonomous weapons are cheap and fast, but there is rising concern about whether they can make decisions that value human life.


Autonomous robots need human control to ensure they obey humanitarian law.


Recently, soldiers in Sudan were ordered to fire on thousands of protestors outside military headquarters in central Khartoum as riot police and secret service personnel unleashed tear gas. Instead of shooting at the crowd, the soldiers fired their weapons into the air while demonstrators chanted: “The army is protecting us” and “One people, one army”.


But what if, instead of encountering regular Sudanese soldiers, these protestors faced killer robots?


That terrifying future could be on the horizon, says Jessica Whyte from UNSW.


The adoption of lethal autonomous weapons, she warns, would have social implications around the world.


What are killer robots?


Lethal autonomous weapons – or killer robots – are intelligent machines that can detect, select and kill targets without human control.


Many countries are racing to find ways to fight faster and more efficiently, and to gain an edge over their adversaries. But can these weapons be regulated, are there moral justifications for their use, and who would be held accountable for a death at the hands of a killer robot?


The ethics behind killer robots


According to Whyte, lethal autonomous weapons are incapable of fulfilling the requirements of international humanitarian law. Autonomous weaponry violates the Martens Clause – a provision of that law which requires emerging technologies to be judged by the “principles of humanity” and the “dictates of public conscience”.


That means the function of selecting and engaging a human target must retain an element of human control; otherwise, she argues, it dishonours human life and dignity.


Many pro-development experts argue that lethal autonomous weapons would obey humanitarian law far more consistently than humans, since they would be neither clouded by emotional responses nor subject to human error.


Whyte is a political theorist who uses philosophy, history and political economy to analyse sovereignty, human rights, humanitarianism and militarism. She suggests that such pro-development arguments rest on the assumption that laws are fixed and not open to change.


Instead, she argues that we actually need human emotion to help us make moral decisions.


Ethics cannot be slotted into algorithms


Whyte also says that the military’s strategic decisions are founded on a series of situational judgements: does a strike need to be carried out to achieve the overall aim of the battle? And, if so, how many lives are at risk?


International humanitarian law requires that “the harm to civilians that results from an attack must not be excessive in relation to the anticipated military advantage of the attack”, she explains.


Those ethical principles can’t simply be slotted into a machine’s algorithm.


“This ‘proportionality’ standard requires human judgement and an understanding of the value of human lives.


“It isn’t an objective rule that can simply be programmed into an autonomous weapon system,” she says.


Autonomous weapons could make war cheaper, but could also pose a threat to international humanitarian law.


Robots make war cheaper


Robots offer numerous potential operational benefits to the military. They can reduce long-term medical expenditure, such as the burden of war-related injuries on the healthcare system. They can also stand in for humans in extremely hazardous scenarios, such as exposure to nuclear material or clearing minefields.


Supporters say much of the human suffering of war, both psychological and physical, would be alleviated by deploying these machines on the battlefield.


However, Whyte says these arguments should also make us very worried.


“By making war cheaper, by reducing the number of soldiers, it makes it far easier to wage a war.


“States that don’t risk their own soldiers in warfare have fewer barriers to launching wars.”


UNSW artificial intelligence expert Toby Walsh agrees, saying that with the rise of 3D printing it is becoming easier to build this type of weaponry without a full evaluation of the consequences.


“Killer robots will lower the barrier to war. If one side can launch an attack without fear of bodies coming home, then it is much easier to slip into battle.”


But who is responsible?


One of the major concerns, however, is accountability for harm caused by these robots. Who will ultimately be responsible for their actions?


Whyte says there is no evidence that accountability will be possible once lethal weapons become fully autonomous.


“If a machine is programmed to select its own targets, there are real questions about who will be responsible if it kills civilians or violates international humanitarian law.”


A robot cannot stand in for a human in legal proceedings, and a range of legal obstacles means that operators, military commanders, programmers and manufacturers could all escape liability.


“Autonomous weapons systems will make targeting decisions more quickly than humans are able to follow. Accountability in such circumstances will be particularly difficult.


“All the arguments for the development of lethal autonomous weapons are also arguments against them.


“It is argued they will be faster, more efficient and will not have any [human] barriers to killing. Yet, all of this will also make war even more deadly, and potentially create further risks for civilians.”


And much like what we witnessed in Sudan, Whyte notes, “there is also a real risk that authoritarian regimes will use autonomous lethal weapons to repress their own populations – which is something human soldiers are often unwilling to do”.

