Meaning-less human control: Lessons from air defence systems for lethal autonomous weapons

Posted: 22nd February 2021

A new report co-published today by Drone Wars UK and the Centre for War Studies, University of Southern Denmark, examines the lessons to be learned from the diminishing human control of air defence systems for the debate about lethal autonomous weapons systems (LAWS) – ‘Killer Robots’ as they are colloquially called.

In an autonomous weapons system, autonomous capabilities are integrated into critical functions that relate to the selection and engagement of targets without direct human intervention. Subject expert Professor Noel Sharkey suggests that a Lethal Autonomous Weapon System can be defined as “systems that, once activated, can track, identify and attack targets with violent force without further human intervention”. Examples of such systems include BAE Systems’ Taranis drone, stationary sentries such as the Samsung Techwin SGR-A1, and ground vehicles such as the Kalashnikov Concern Uran-9.

Air defence systems are an important area of study in relation to the development of LAWS because they are already in operation and, while not completely autonomous due to having a human operator in control, they have automated and increasingly autonomous features. Vincent Boulanin and Maaike Verbruggen’s study for the Stockholm International Peace Research Institute (SIPRI) estimates that 89 states operate air defence systems. These include global military powers such as the US, the UK, France, Russia, and China, but also regional powers such as Brazil, India, and Japan.

The ‘Meaning-less human control’ report draws on a new data catalogue constructed by the report’s authors, Ingvild Bode and Tom Watts, to examine automation and autonomy in 28 air defence systems used across the world, and analyses high-profile failures of such systems, including the downing of Iran Air Flight 655 (1988), Malaysia Airlines Flight MH17 (2014), Ukrainian Airlines Flight PS752 (2020), and two instances of fratricide involving the Patriot air defence system in the Second Gulf War (2003). Its central argument is that the integration of autonomy and automation into the critical functions of air defence systems has, under some conditions, made human control over specific use of force decisions increasingly meaningless.

The report argues this is happening for three reasons: (1) the speed at which these systems operate, (2) the complexity of the tasks they perform, and (3) the demands their use places on human operators. As more and more tasks have been delegated to machines, the human operators of air defence systems have changed from active controllers to more passive supervisors. In practical terms, this means that human operators have come to fulfil minimal but at the same time impossibly complex roles, lacking a sufficient understanding of the decision-making process, sufficient situational understanding, and the time to properly think about decisions. Taken together, the quality of control that human operators can exercise in specific use of force situations has incrementally become more meaningless than meaningful.

Commemorations of Iran Air Flight 655 and Ukrainian Airlines Flight PS752 – civilian airliners mistakenly shot down by air defence systems

The findings of the ‘Meaning-less human control’ report are important because the role that humans should play in use of force decisions has been an area of increased focus in many recent United Nations meetings. In principle, states have agreed that it is unacceptable on ethical and legal grounds to delegate use-of-force decisions to machines “without any human control whatsoever”. But there is no agreement on what precisely makes human control meaningful, a concept originally coined by the non-governmental organization Article 36 in 2013 and since credited with pushing the regulatory agenda on LAWS beyond the gridlocked debate on how to define an autonomous system.

The findings of this report raise questions that challenge the current international debate on LAWS, which focuses on future technological developments and the dangers which “killer robots” may come to present. Whilst this focus is in many ways appropriate given the potentially transformative effects which LAWS may come to have, in some ways it is also problematic because autonomous features are already being integrated into the critical functions of widely used weapon systems. In this way, the ‘Meaning-less human control’ report meets the call for closer studies of existing weapons systems and what they tell us about challenges to meaningful human control over the use of force.

Air defence systems can typically be operated in either manual or automatic mode. In manual mode, the human operator authorizes the launch of weapons and manages the targeting process. In automatic mode, however, the system can automatically sense targets and fire on them. Here, humans merely supervise the system and, if necessary, can abort the attack.
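To make the difference between these two modes concrete, the minimal Python sketch below contrasts them. It is purely illustrative: the Track class, the function names, and the fixed decision window are hypothetical and do not correspond to the interface of any real air defence system.

    from dataclasses import dataclass

    @dataclass
    class Track:
        """A detected object that the system proposes to engage (illustrative only)."""
        track_id: int
        classification: str  # e.g. "hostile", "unknown", "friendly"

    def engage_manual(track: Track, operator_authorises) -> bool:
        """Manual mode: the weapon fires only if the human operator
        actively authorises this specific engagement."""
        return operator_authorises(track)

    def engage_automatic(track: Track, operator_aborts, decision_window_s: float) -> bool:
        """Automatic mode: the system fires by default; the human supervisor
        can only veto the engagement within a short decision window."""
        return not operator_aborts(track, decision_window_s)

    # The same ambiguous track produces different outcomes in the two modes:
    # in manual mode nothing happens unless the operator says yes, while in
    # automatic mode the system fires unless the supervisor intervenes in time.
    track = Track(track_id=42, classification="unknown")
    print(engage_manual(track, operator_authorises=lambda t: False))  # False: no authorisation, no launch
    print(engage_automatic(track, operator_aborts=lambda t, w: False, decision_window_s=10.0))  # True: no abort, launch proceeds

The point of the sketch is simply that, in automatic mode, human inaction defaults to the use of force: this is the shift from active controller to passive supervisor that the report describes.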

This process has shaped an emerging norm, understood here as a standard of appropriateness that guides state behaviour. Norms do not necessarily point to what is universally appropriate, but often to what a particular group of actors deems suitable in a particular context. The emerging norm traced throughout this report attributes to humans a diminished role in specific use of force decisions. But the international debate on LAWS is yet to acknowledge or scrutinize this norm. Ingvild Bode and Tom Watts, the authors of the report, argue that this undercuts potential international efforts to regulate LAWS through codifying meaningful human control.

The ‘Meaning-less human control’ report concludes that the further integration of autonomous features into weapons systems is neither as desirable nor as inevitable as is generally assumed. It makes the following recommendations for stakeholders involved in the international debate on LAWS:

  • The ways in which states currently operate weapons systems with automated and autonomous features in specific use of force situations should be brought into the open and scrutinized.
  • More in-depth studies of the emerging standards for meaningful human control set by the use of other existing weapons systems with automated and autonomous features are required.
  • The report identifies three prerequisite conditions for human agents to be able to exercise meaningful human control: (1) a functional understanding of how the targeting system operates and makes decisions, as well as its known weaknesses; (2) sufficient situational understanding; and (3) the capacity to scrutinize machine targeting decision-making rather than over-trusting the system.
  • These three prerequisite conditions set hard boundaries for the development of LAWS that should be codified in international law. They represent a technological Rubicon that should not be crossed, as going beyond these limits makes human control meaningless.

Find out more – call Caroline on 01722 321865 or email us.