Scott Farrell Comments:
“Chivalry” and “warfare” are two concepts that sometimes seem to be at odds. After all, we are regularly told that “all’s fair in love and war,” and yet we know chivalry is all about being fair and respectful to others.
The truth is, soldiers today are governed by an ethical and moral standard that traces its lineage directly to the Code of Chivalry of the medieval knights. This code mandates things like respect for combatants who surrender, reasonable measures to avoid harming non-combatants, and avoidance of weapons (from crossbows to poison gas) that are considered inhumane.
Applying the ethical rules and restraints of chivalry in the heat of battle is one of the greatest challenges faced by soldiers today — and that challenge gets even greater when the soldiers in question aren’t human. As military reliance on drones, guided missiles and other high-tech, unmanned agents increases in the coming years, scientists are exploring ways to incorporate ethical constraints and “rules of engagement” into the programming software that drives these robotic weapons.
Can a military robot, in essence, learn to behave by the Code of Chivalry? And, perhaps more relevantly, what does trying to program a machine with battlefield ethics teach us about our own sense of chivalry in combat? This article, reprinted from the popular science magazine New Scientist, gives some intriguing insights into chivalry’s place in warfare in the 21st century.
Robot Warriors and Programmed Ethics
Technology has always distanced the soldiers who use weapons from the people who get hit. But robotics engineer Ron Arkin at the Georgia Institute of Technology, Atlanta, is looking ahead to wars in which weapons make their own decisions about wielding lethal force.
He is particularly interested in how such machines might be programmed to act ethically, obeying the rules of engagement.
Arkin has developed an “ethical governor,” which aims to ensure that robot attack aircraft such as the Predator drone behave ethically in combat. He is demonstrating the system in simulations based on recent campaigns by U.S. troops, using real maps from the Middle East.
In one scenario, modeled on a situation encountered by U.S. forces in Afghanistan in 2006, the drone identifies a group of Taliban soldiers inside a defined “kill zone.” But the drone doesn’t fire. Its maps indicate that the group is inside a cemetery, so opening fire would breach international law.
In another scenario, the drone identifies an enemy vehicle convoy close to a hospital. Here the ethical governor only allows fire that will damage the vehicles without harming the hospital. Arkin has also built in a “guilt” system, which, if a serious error is made, forces a drone to start behaving more cautiously.
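To make the constraint checking concrete, here is a minimal sketch of how a governor of this general kind might be organized. It is purely illustrative: the class names, the specific rules, and the simple numeric “guilt” level are assumptions made for the example, not a description of Arkin’s actual software.

    from dataclasses import dataclass

    # Illustrative sketch only: the classes, rule checks and numeric "guilt"
    # level below are invented for this example; they are not Arkin's architecture.

    @dataclass
    class Target:
        in_kill_zone: bool              # inside the defined engagement area?
        near_protected_site: bool       # cemetery, hospital, etc. nearby?
        distance_to_site_m: float = 0.0

    @dataclass
    class EthicalGovernor:
        guilt: float = 0.0              # rises after serious errors
        guilt_limit: float = 1.0        # above this, no fire is authorized

        def authorize_fire(self, target: Target, blast_radius_m: float) -> bool:
            """Permit engagement only if no ethical constraint is violated."""
            if not target.in_kill_zone:
                return False            # outside the defined kill zone
            if target.near_protected_site and blast_radius_m >= target.distance_to_site_m:
                return False            # the strike would damage the protected site
            if self.guilt >= self.guilt_limit:
                return False            # past mistakes force maximum caution
            return True

        def record_error(self, severity: float) -> None:
            """A serious error raises guilt, making future engagement less likely."""
            self.guilt += severity

    # Convoy near a hospital: only a strike small enough to spare it is allowed.
    governor = EthicalGovernor()
    convoy = Target(in_kill_zone=True, near_protected_site=True, distance_to_site_m=150.0)
    print(governor.authorize_fire(convoy, blast_radius_m=20.0))    # True
    print(governor.authorize_fire(convoy, blast_radius_m=200.0))   # False

The order of the checks mirrors the scenarios described above: location and collateral-damage constraints are evaluated before any fire is permitted, and the guilt level can only ever make the system more restrictive.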
In developing the software, he drew on studies of military ethics, as well as discussions with military personnel, and says his aim is to reduce non-combatant casualties. One Vietnam veteran told him of situations in which soldiers shot at anything that moved. “I can easily make a robot do that today, but instead we should be thinking about how to make them perform better than that,” Arkin says.
Complex Scenarios
Simulations are a powerful way to imagine one possible version of the future of combat, says Illah Nourbakhsh, a roboticist at Carnegie Mellon University, Pittsburgh, U.S. But they gloss over the complexity of getting robots to understand the world well enough to make such judgments, he says, an ability that is unlikely to be achieved for decades.
Arkin stresses that his research, funded by the U.S. army, is not designed to develop prototypes for future battlefield use. “The most important outcome of my research is not the architecture, but the discussion that it stimulates.”
However, he maintains that the development of machines that decide how to use lethal force is inevitable, making it important that when such robots do arrive they can be trusted. “These ideas will not be used tomorrow, but in the war after next, and in very constrained situations.”
Public Debate
Roboticist Noel Sharkey at Sheffield University, U.K., campaigns for greater public discussion about the use of automation in war. “I agree with Ron that autonomous robot fighting machines look like an inevitability in the near future,” he told New Scientist.
Arkin’s work shows the inadequacy of our existing technology at dealing with the complex moral environment of a battlefield, says Sharkey. “Robots don’t get angry or seek revenge but they don’t have sympathy or empathy either,” he says. “Strict rules require an absolutist view of ethics, rather than a human understanding of different circumstances and their consequences.”
Yet in some circumstances, a strict rule-based approach is valuable. The Georgia Tech group has also made a system that advises a soldier of the ethical constraints on a mission as they program it into an autonomous drone. That kind of tool could see practical use much sooner, says Nourbakhsh: “Similar systems exist to help doctors understand the medical ethics of treatments.”
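As a rough illustration of such an advisory tool, the sketch below simply surfaces the constraint reminders relevant to a planned mission before it is loaded onto a drone. The rule texts and mission-plan fields are hypothetical, chosen only for the example rather than taken from the Georgia Tech system.

    # Hypothetical pre-mission advisory sketch: the rules and mission-plan fields
    # are invented for illustration, not drawn from the Georgia Tech system.

    RULES = {
        "kill_zone": "Weapons release is permitted only inside the defined kill zone.",
        "protected_sites": "No fire that could damage hospitals, cemeteries or other protected sites.",
        "surrender": "Engagement must cease against combatants who signal surrender.",
    }

    def advise(mission_plan: dict) -> list:
        """Return the constraint reminders that apply to this mission plan."""
        advisories = [RULES["kill_zone"]]              # always shown
        if mission_plan.get("protected_sites_in_area"):
            advisories.append(RULES["protected_sites"])
        if mission_plan.get("expects_ground_contact"):
            advisories.append(RULES["surrender"])
        return advisories

    for reminder in advise({"protected_sites_in_area": True, "expects_ground_contact": False}):
        print("-", reminder)

Like the checklists that help doctors weigh the ethics of a treatment, the value of such a tool lies in prompting a human to confirm the constraints, not in making the decision for them.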
Reprinted from New Scientist
© 2009 Tom Simonite and New Scientist