March 6, 2018

‘Killer’ Robots and the Move to Ban Them

Written by Darla Palmer-Ellingson in Home Technology for Your Home

If you were frightened by the killer robots in movies such as The Matrix or I, Robot, the fantasy is becoming a scary reality.

2017 marked the fourth consecutive year of U.N. discussions on “killer robots,” known scientifically as lethal autonomous weapons, or LAWs. Discussions are likely to heat up as the technology improves.

What are LAWs? They’re machines that can destroy people or property without human intervention. Some of these machines are defensive in nature, such as autonomous sentry guns or missile defense systems, but the ones that have policymakers worried are offensive machines, such as unmanned drones, that could find, monitor and attack targets by land, air, or sea.

Artificial intelligence has progressed rapidly in the past few years, and many of the same principles that make driverless cars feasible also make killer robots possible. Cost is their main appeal: Compared to recruiting and training soldiers, deploying them in combat, and managing their stress afterward, robots that go to war are highly cost-effective in the long term. Robots can also be more precise than humans, cutting down on collateral damage, and they keep soldiers out of harm’s way.

Although Russia, the U.S., Israel, and other countries are eager to snatch up these benefits, the trade-offs have given AI researchers serious pause.

For one, lethal autonomous weapons have no legal or moral accountability. As Peter Asaro, a science philosopher, put it, “Robots aren’t capable of intention in the legal sense.” This means a court cannot convict a robot of a crime. If a robot kills a civilian by accident, does blame fall on the robot’s designers, builders, those who deploy it, or someone else? This ambiguity in the legal system creates a veil of plausible deniability at every stage of use, potentially eroding moral and legal barriers to waging war.

Further risks include the potential for populations to become emotionally detached from war, since they won’t experience it firsthand, and for politicians to gain more power to wage war in secret. Because of their affordability, LAWs may also proliferate.

Rounding them up after a conflict and keeping them out of the hands of terrorists may prove difficult. Defunct robots could end up like land mines, which continue to kill and to create environmental hazards long after their purpose in war has expired. According to UNICEF, land mines kill or maim 15,000 to 20,000 people per year, and 80 percent of the victims are civilians. A world littered with insect-sized killing machines is possible, and it would be hard to clean up.

There is hope

Hope is far from lost, however. Science and technology luminaries such as Stephen Hawking, Elon Musk, Max Tegmark, and Noam Chomsky have pushed for the U.N. to ban LAWs, and people have begun to pay attention.

As of August 2017, 116 experts from 26 countries had signed a petition from the Future of Life Institute calling on the U.N. to institute the ban. Although the petition describes autonomous weapons as “the third revolution in warfare, after gunpowder and nuclear arms,” it also points to past successes in containing dangerous technologies, such as the international laws against biological weapons, chemical weapons, and space-based nuclear weapons.

Most of the world’s major powers aren’t on board with the scientists, though. South Korean arms manufacturer DoDAAM Systems has already deployed autonomous border turrets, while Israel Aerospace Industries has developed the Harpy, a drone that automatically finds and destroys sources of radar signals. The U.S. Navy’s Aegis missile defense system can already engage planes and missiles without human input.

Russia is particularly uncooperative on the subject of bans. At a U.N. discussion in Geneva on military AI, Russia vowed to continue developing autonomous weapons regardless of any worldwide moratorium. The Russian government argues that such weapons are too broad a category to define and still too rudimentary to pose a threat to world order.

Russia also contends that banning the technology might hamper development in other areas of AI. Its frosty response to concerns about LAWs may cause diplomatic tensions down the road, especially since Russian defense manufacturer Kalashnikov has already begun selling weapons with automatic targeting and firing. And the argument that a ban on LAWs could stifle AI development elsewhere is not without merit: many useful civilian technologies, including the internet, jet engines, and GPS, began as military projects.

Whether LAWs prove to be a nightmare on the scale of nukes or a minor nuisance compared to other global challenges, only time will tell.

One thing is certain, though: Many of the brightest minds are worried. While weaponized AI might make for more precise warfare with less collateral damage, it might also pave the way to robot-bolstered tyrannies, large-scale terrorism, and litter far deadlier than the land mines of yesterday.
