ALL THE FUSS ABOUT KILLER ROBOTS

This week in Geneva, Switzerland, the United Nations’ Convention on Certain Conventional Weapons (CCW) is once again hearing from technical and legal experts on the subject of “Lethal Autonomous Weapons Systems” (LAWS), which are weapons that can make lethal decisions without human involvement—i.e., killer robots.

Parties to the CCW will consider policy questions about LAWS and whether a protocol should be added to the CCW to regulate or ban them. Experts will debate what level of “meaningful human control” robots, or any weapon, should be required to have. The conclusions of this meeting could have far-reaching ramifications for the future of war.

The following questions were raised by UN experts in a 2013 report:

  • …is it morally acceptable to delegate decisions about the use of lethal force to such [autonomous] systems?
  • If their use results in a war crime or serious human rights violation, who would be legally responsible?
  • If responsibility cannot be determined as required by international law, is it legal or ethical to deploy such systems?

What are killer robots?

That depends on whom you ask. Manufacturers of this technology would define a killer robot as a robot that can decide to use lethal force without human intervention. Human Rights Watch, however, broadens the definition to include any robot that can choose to use any type of force against a human, even if that force is not lethal. What is agreed is that all LAWS are already regulated by existing International Humanitarian Law (IHL). LAWS that cannot comply with IHL principles such as distinction (between combatants and civilians) and proportionality (an attack must not be excessive in relation to the concrete and direct military advantage anticipated) are already illegal.
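
To make concrete what “complying with IHL principles” would demand of a machine, here is a minimal, purely illustrative Python sketch. The names (TargetAssessment, ihl_permits) and the numeric comparison are hypothetical stand-ins: real distinction and proportionality judgments are contextual legal determinations, not simple computations.

```python
from dataclasses import dataclass

@dataclass
class TargetAssessment:
    """Hypothetical output of an autonomous weapon's targeting analysis."""
    is_combatant: bool             # distinction: combatant vs. civilian
    expected_civilian_harm: float  # anticipated incidental harm (arbitrary units)
    military_advantage: float      # concrete and direct advantage anticipated

def ihl_permits(t: TargetAssessment) -> bool:
    """Toy check of the two IHL principles named above.

    Distinction: only combatants may be attacked. Proportionality:
    expected incidental harm must not be excessive relative to the
    anticipated military advantage. The comparison below is a
    placeholder, not a legal standard.
    """
    if not t.is_combatant:
        return False  # attacking a civilian fails distinction outright
    return t.expected_civilian_harm <= t.military_advantage  # placeholder rule
```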

The phrase “meaningful human control” has prompted debate among diplomats. Much of the LAWS discussion turns on where the human sits relative to the decision “loop,” which can be explained as follows (a short code sketch follows the list):

- Human “in the loop”: the robot makes decisions according to human-programmed rules, a human hits a confirm button, and the robot strikes.

- Human “on the loop”: the robot decides according to human-programmed rules, a human has time to hit an abort button, and if the abort button is not hit, the robot strikes.

- Human “off the loop”: the robot makes decisions according to human-programmed rules, the robot strikes, and a human reads a report a few seconds or minutes later.

- Robot “beyond the loop”: the robot decides according to rules it learns or creates itself, the robot strikes, and it may or may not let humans know. This last case raises the greatest concern.
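
To make the four modes concrete, here is a minimal Python sketch of the oversight taxonomy above. The enum, function, and flag names (HumanRole, strike_proceeds, confirmed, aborted) are hypothetical illustrations, not drawn from any real system.

```python
from enum import Enum, auto

class HumanRole(Enum):
    """The four oversight modes described above (hypothetical labels)."""
    IN_THE_LOOP = auto()      # a human must confirm before the strike
    ON_THE_LOOP = auto()      # a human may abort within a time window
    OFF_THE_LOOP = auto()     # a human is only informed after the strike
    BEYOND_THE_LOOP = auto()  # robot acts on self-learned rules, may not report

def strike_proceeds(role: HumanRole, confirmed: bool = False,
                    aborted: bool = False) -> bool:
    """Whether a strike goes ahead under each oversight mode.

    `confirmed` and `aborted` stand in for the human action (or inaction)
    available before the strike is finalized.
    """
    if role is HumanRole.IN_THE_LOOP:
        return confirmed       # no confirmation, no strike
    if role is HumanRole.ON_THE_LOOP:
        return not aborted     # strikes unless a human intervenes in time
    # Off the loop and beyond the loop: no prior human action can stop it.
    return True

# An "on the loop" system strikes when no abort arrives in time;
# an "in the loop" system never strikes without explicit confirmation.
assert strike_proceeds(HumanRole.ON_THE_LOOP) is True
assert strike_proceeds(HumanRole.IN_THE_LOOP) is False
```

In these terms, the “meaningful human control” debate is a question of which branches of such a function states should be allowed to deploy.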

What is the Convention on Conventional Weapons (CCW)?

Also known as the “Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects,” the CCW has 120 high contracting or state parties, including all five permanent members of the UN Security Council.

The CCW was adopted in 1980 and contains five protocols covering non-detectable fragments, mines and booby traps, incendiary weapons, blinding laser weapons, and explosive remnants of war. The CCW also provided a useful incubator for efforts to address the humanitarian consequences of antipersonnel landmines in the 1990s and cluster munitions in the 2000s. A Protocol VI banning “off the loop” LAWS might be one option.

The Case for and Against Killer Robots

There are already plenty of examples of how technology has changed warfare. For the military, war robots can have many advantages: they do not need food or pay, they do not get tired or need sleep, they follow orders automatically, and they do not feel fear, anger, pain, or remorse. Furthermore, no one would mourn if robot soldiers were destroyed on the battlefield. The most recent and controversial example of how new technologies have changed war is the rise of drone warfare. But even these aircraft have pilots who fly them by remote control, and it is humans who decide which targets to pick and when to fire a missile.

Arguing against a ban, some contend that LAWS should be regarded as the next generation of “smart” bombs. They are potentially more accurate, more precise, and completely bound by the strictures of IHL, and thus, in theory, preferable even to human war fighters, who may panic, seek revenge, or simply make mistakes.

Meanwhile, Human Rights Watch, in a report released before the CCW meeting, has argued that fully autonomous weapons would make it difficult to attribute legal responsibility for deaths caused by such systems. As the report notes: “[a] variety of legal obstacles make it likely that humans associated with the use or production of these weapons – notably operators and commanders, programmers and manufacturers – would escape liability for the suffering caused by fully autonomous weapons.”

The Campaign to Stop Killer Robots (CSKR), an international coalition of non-governmental organizations (NGOs) formed in 2012 to work for a preemptive ban on fully autonomous weapons, advocates a ban on LAWS similar to the ban on blinding lasers in Protocol IV of the CCW and the ban on antipersonnel landmines in the Ottawa Treaty. The coalition argues that killer robots must be stopped before they proliferate, and that tasking robots with human destruction is fundamentally immoral. The biggest concern is the potential next generation of robotic weapons: “robots beyond the loop,” which make their own decisions about whom to target and whom to kill without human control, a scary thought indeed.
