The Pentagon wants to make perfectly clear that every time one of its flying robots releases its lethal payload, it's the result of a decision made by an accountable human being in a lawful chain of command. Human rights groups and nervous citizens fear that technological advances in autonomy will slowly lead to the day when robots make that critical decision for themselves. But according to a new policy directive issued by a top Pentagon official, there shall be no SkyNet, thank you very much.
Here's what happened while you were preparing for Thanksgiving: Deputy Defense Secretary Ashton Carter signed, on November 21, a series of instructions to "minimize the probability and consequences of failures" in autonomous or semi-autonomous armed robots "that could lead to unintended engagements," starting at the design stage. (.pdf, thanks to Cryptome.org.) Translated from the bureaucratese: the Pentagon wants to make sure that there isn't a circumstance in which one of the military's many Predators, Reapers, drone-like missiles or other deadly robots effectively automates the decision to harm a human being.
The hardware and software controlling a deadly robot need to come equipped with "safeties, anti-tamper mechanisms, and information assurance." The design has to have proper "human-machine interfaces and controls." And, above all, it has to operate "consistent with commander and operator intentions and, if unable to do so, terminate engagements or seek additional human operator input before continuing the engagement." If not, the Pentagon isn't going to buy it or use it.
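If you think of that requirement as software, it amounts to a fail-safe gate wrapped around the fire decision. The Python sketch below is purely illustrative — the directive mandates behavior, not code, and every name in it (`check_engagement`, `EngagementDecision`, and so on) is hypothetical:

```python
from enum import Enum, auto


class EngagementDecision(Enum):
    """Possible outcomes of the pre-engagement check."""
    PROCEED = auto()      # action matches verified operator intent
    TERMINATE = auto()    # abort the engagement entirely
    AWAIT_HUMAN = auto()  # pause and request additional operator input


def check_engagement(proposed_target: str,
                     authorized_targets: set[str],
                     operator_confirmed: bool) -> EngagementDecision:
    """Gate a proposed engagement behind human intent.

    Mirrors the directive's rule: act only when consistent with
    commander and operator intentions; otherwise terminate the
    engagement or seek additional human input before continuing.
    """
    if proposed_target in authorized_targets and operator_confirmed:
        return EngagementDecision.PROCEED
    if proposed_target not in authorized_targets:
        # The target was never authorized: fail safe by aborting.
        return EngagementDecision.TERMINATE
    # The target is authorized but confirmation is missing or stale:
    # hand the decision back to a human before continuing.
    return EngagementDecision.AWAIT_HUMAN
```

The design point the directive forces is the default path: when operator intent can't be verified, the system falls back to a human, never to the trigger.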