Next week, the Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS) will gather under the aegis of the Convention on Conventional Weapons at the United Nations in Geneva. Various non-governmental organizations behind the Campaign to Stop Killer Robots are urging that a new international treaty banning warbots be promulgated and adopted. As part of the effort to outlaw killer robots, Human Rights Watch has issued a report, Mind the Gap: The Lack of Accountability for Killer Robots, offering arguments for a ban.
According to Human Rights Watch (HRW), the chief problem is not only that soulless machines would be unaccountable for what they do, but that the people who deploy or make them would be unaccountable as well. Consequently, one of HRW's chief objections is the claim:
The autonomous nature of killer robots would make them legally analogous to human soldiers in some ways, and thus it could trigger the doctrine of indirect responsibility, or command responsibility. A commander would nevertheless still escape liability in most cases. Command responsibility holds superiors accountable only if they knew or should have known of a subordinate's criminal act and failed to prevent or punish it. These criteria set a high bar for accountability for the actions of a fully autonomous weapon. … given that the weapons are designed to operate independently, a commander would not always have sufficient reason or technological knowledge to anticipate the robot would commit a specific unlawful act. Even if he or she knew of a possible unlawful act, the commander would often be unable to prevent the act, for example, if communications had broken down, the robot acted too fast to be stopped, or reprogramming was too difficult for all but specialists.
In my column, Let Slip the Robots of War, I countered:
Individual soldiers can be held responsible for war crimes they commit, but who would be accountable for the similar acts executed by robots? University of Virginia ethicist Deborah Johnson and Royal Netherlands Academy of Arts and Sciences philosopher Merel Noorman make the salient point that "it is far from clear that pressures of competitive warfare will lead humans to put robots they cannot control into the battlefield without human oversight. And, if there is human oversight, there is human control and responsibility." The robots' designers would set constraints on what they could do, instill norms and rules to guide their actions, and verify that they exhibit predictable and reliable behavior. "Delegation of responsibility to human and non-human components is a sociotechnical design choice, not an inevitable outcome of technological development," Johnson and Noorman note. "Robots for which no human actor can be held responsible are poorly designed sociotechnical systems." Rather than focus on individual responsibility for the robots' activities, Anderson and Waxman point out that traditionally each side in a conflict has been held collectively responsible for observing the laws of war. Ultimately, robots don't kill people; people kill people.
In fact, warbots might be better at discriminating between targets and initiating proportional force. As I noted:
The Georgia Tech roboticist Ronald Arkin turns this issue on its head, arguing that lethal autonomous weapon systems "will potentially be capable of performing more ethically on the battlefield than are human soldiers." While human soldiers are moral agents possessed of consciences, they are also flawed people engaged in the most intense and unforgiving forms of aggression. Under the pressure of battle, fear, panic, rage, and vengeance can overwhelm the moral sensibilities of soldiers; the result, all too often, is an atrocity. Now consider warbots. Since self-preservation would not be their foremost drive, they would refrain from firing in uncertain situations. Not burdened with emotions, autonomous weapons would avoid the moral snares of anger and frustration. They could objectively weigh information and avoid confirmation bias when making targeting and firing decisions. They could also evaluate information much faster and from more sources than human soldiers before responding with lethal force. And battlefield robots could impartially monitor and report the ethical behavior of all parties on the battlefield.
The concerns expressed by Human Rights Watch are well-taken, but a ban would outlaw weapons systems that might prove far more discriminating and precise in their target selection and engagement than human soldiers. A preemptive ban risks being a tragic moral failure rather than an ethical triumph.
Disclosure: I have made small contributions to Human Rights Watch from time to time.