Pentagon Fully On Board With Autonomous Weapon Systems That Kill

Screengrab / Future Of Life Institute / YouTube

As the power of artificial intelligence grows, the possibility of a future war filled with killer robots grows as well.

Last month’s [UN meeting on ‘killer robots’](https://www.unog.ch/80256EE600585943/(httpPages%29/7C335E71DFCB29D1C1258243003E8724?OpenDocument) in Geneva ended with victory for the machines, as a small number of countries blocked progress towards an international ban. Some opponents of such a ban, like Russia and Israel, were to be expected since both nations already have advanced military AI programs. Surprisingly, the U.S. also agreed with them.

According to Popular Mechanics, proponents suggest that lethal autonomous weapon systems (LAWs) might cause less “collateral damage,” while critics warn that giving machines the power of life and death would be a terrible mistake.

In July, 2,400 researchers, including Elon Musk, signed a pledge not to work on robots that can attack without human oversight.

The Pentagon’s current policy is that there should always be a ‘man in the loop’ controlling any lethal system, but the [submission from Washington](https://www.unog.ch/80256EDD006B8954/(httpAssets%29/D1A2BA4B7B71D29FC12582F6004386EF/$file/2018_GGE+LAWS_August_Working+Paper_US.pdf) to the recent UN meeting argued otherwise:

> “Weapons that do what commanders and operators intend can effectuate their intentions to conduct operations in compliance with the law of war and to minimize harm to civilians.”

The argument, then, is that autonomous weapons could carry out more selective strikes that faulty human judgement would have botched.

Professor Ron Arkin, a roboticist at the Georgia Institute of Technology, said:

> “Most people don’t understand that these systems offer the opportunity to decide when not to fire, even when commanded by a human if it is deemed unethical.”

Arkin says humans have a tendency toward “scenario fulfillment,” or seeing what we expect to see and ignoring contradictory data in stressful situations. This effect contributed to the accidental shooting down of an Iranian airliner by the USS Vincennes in 1988.

> “Robots can be developed so that they are not vulnerable to such patterns of behavior,” says Arkin.

Today’s artificial intelligence cannot make battlefield judgements better than humans can, but AI is getting smarter, and one day it could theoretically help limit the loss of innocent lives caught in the crossfire.

> “We cannot simply accept the current status quo with respect to noncombatant deaths,” says Arkin. “We should aim to do better.”

The U.S. decision to back the development of ‘killer robots’ is a controversial one, and the argument is far from over.
