This article is part of our Autonomous Weapons Challenges series. The IEEE Standards Association is looking for your feedback on this topic and invites you to answer these questions.
Two Boeing 737 Max planes crashed, in 2018 and 2019, after faulty sensor data triggered an automated flight-control system that the flight crews were unable to override. Also in 2018, an Uber autonomous vehicle struck and killed a pedestrian in Arizona, even though a safety driver in the car was supposed to be overseeing the system. These examples highlight many of the issues that arise when considering what “human control” over an autonomous system really means.
The development of these autonomous technologies occurred within enormously complex bureaucratic frameworks. A huge number of people were involved: in engineering multiple autonomous capabilities to function within a single system, in determining how the systems would respond to unknown or emergency situations, and in training the people who would oversee them. A breakdown at any of these steps could, and did, allow a system to cause unintended harm that the people overseeing it were unable to prevent.
These examples underscore the basic human psychology that developers need to understand in order to design and test autonomous systems. Humans are prone to over-trusting machines, growing more complacent the longer they use a system without anything going wrong. Humans are also notoriously bad at maintaining the level of focus necessary to catch an error in such situations, typically losing focus after about 20 minutes. And the human response to an emergency can be unpredictable.
Ultimately, “human control” is hard to define, and it has become a controversial issue in discussions about autonomous weapons systems (AWS), with many similar phrases circulating in international debates, including “meaningful human control,” “human responsibility,” and “appropriate human judgment.” But regardless of the phrase that’s used, the problem remains the same: Simply assigning a human to oversee an AWS may not prevent the system from doing something it shouldn’t, and it’s not clear who would be at fault when it does.
Responsibility and Accountability
Autonomous weapons systems can process data at speeds that far exceed human cognitive capabilities, which means any human in the loop will need to know when to trust the data and when to question it.
In the examples above, people were directly overseeing a single commercial system. In the very near future, a single soldier might be expected to monitor an entire swarm of hundreds of weaponized drones, a scenario militaries are already testing. Each drone may be detecting and processing data in real time. If a human can’t keep up with a single autonomous system, they certainly won’t be able to keep up with the data streaming in from a swarm. Additional autonomous systems may therefore be added to filter and package the data, introducing even more potential points of failure. Among other issues, this raises legal concerns, given that responsibility and accountability could quickly become unclear if the system behaves unexpectedly only after it has been deployed.
Artificial intelligence often relies on machine learning, which can turn AI-based systems into black boxes: the AI takes unexpected actions and leaves its designers and users uncertain as to why it did what it did. It remains unclear how humans working with AWS will respond to their machine partners, or what training will be necessary to ensure that the human understands the capabilities and limitations of the system. Human-machine teaming thus presents challenges both in training people to use the system and in developing a better understanding of the trust dynamic between humans and AWS. And while the human-robot handoff may be a purely technical challenge in many fields, for a weapons system a handoff that doesn’t go smoothly quickly becomes a question of international humanitarian law.
Ensuring responsibility and accountability for AWS is a general point of agreement among those involved in the international debate. But without sufficient understanding of human psychology or how human-machine teams should work, is it reasonable to expect the human to be responsible and accountable for any unintended consequences of the system’s deployment?