
Roombas in the Big House?

Greg Beato

In 1979, a robot killed a human for the first time. It happened at a Ford facility in Flat Rock, Michigan, in an elaborate five-level structure called a core stacker where 10 robots continuously stored and retrieved large metal castings. Litton Industries, which built the core stacker and the robots that toiled there, described it as an "unattended system." But according to a 1984 Omni feature about the incident, the machines actually required a great deal of intervention in practice—people had to tweak alignments and pick up dropped objects on a regular basis.

But the robots, which glided along rail-like tracks in near silence, continued operating even when fragile, fleshy human beings were nearby. And one day in 1979, one of those machines, which was equipped with sensors that allowed it to "see" some components of the system but apparently not people, rolled up behind Robert Williams and struck his head, killing him. A jury ordered Litton Industries to pay $10 million in damages to Williams' family. Presumably, the robot got off scot-free.

No account of the incident suggests the robot acted with deliberate malice, or even recklessness, but it set the stage for future dystopias nonetheless. We had begun to create a new category of machines capable of killing us—and unlike, say, cars, guns, or roller coasters, these new machines were deliberately imbued with a degree of autonomy that could make their behavior unpredictable. That autonomy would only increase over time.

Thirty-six years later, the worldwide robot population has exploded, and the bots are increasingly sophisticated. Their designers have gotten more sophisticated too, and that helps mitigate some of their potential danger. The Litton Industries robots weighed 2,500 pounds and issued no warning noises when they moved. Today's robots boast sensors that help them avoid collisions with humans, they're often built out of lightweight and forgiving materials, and they're often designed to be easy to shut off.

(Illustration: TerryColon.com)

But as artificial intelligence (A.I.) systems—including bots that exist as nothing more than lines of code—become increasingly pervasive and autonomous, it's only natural to assume that their potential for unexpected and unwanted behavior is going to increase too. In short, some robots are going to commit crimes.

Take a recent project by a couple of Swiss artists. They created an automated shopping bot, gave it a budget of $100 in bitcoin per week, and instructed it to go on a buying spree at a darknet market that offered thousands of items for sale—some legal, others not.
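
How little human judgment such a bot requires is easy to see in code. Below is a minimal, hypothetical Python sketch of a bot with a fixed weekly budget that picks a random affordable listing and "buys" it. The catalog, prices, and purchase() stub are invented for illustration; the artists' actual bot dealt with a live darknet marketplace and paid in bitcoin.

```python
# Hypothetical sketch of a budget-limited random shopping bot.
# The CATALOG entries and purchase() stub are invented placeholders;
# the real bot browsed a live marketplace and paid in bitcoin.
import random

WEEKLY_BUDGET_USD = 100  # the bot's fixed weekly allowance, per the article

# Stand-in for a marketplace listing: (item name, price in US dollars)
CATALOG = [
    ("placeholder item A", 35),
    ("placeholder item B", 60),
    ("placeholder item C", 90),
    ("placeholder item D", 120),  # over budget; never selected
]


def purchase(item: str, price: int) -> None:
    """Placeholder for the actual ordering/payment step."""
    print(f"Ordered {item!r} for ${price}")


def weekly_purchase(budget_usd: int = WEEKLY_BUDGET_USD) -> None:
    """Pick one random item the weekly budget can cover and 'buy' it."""
    affordable = [(name, price) for name, price in CATALOG if price <= budget_usd]
    if not affordable:
        print("Nothing affordable this week.")
        return
    item, price = random.choice(affordable)
    purchase(item, price)


if __name__ == "__main__":
    weekly_purchase()
```

The relevant feature is the random choice: once a script like this is running, no human decides what gets ordered in any given week.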

The bot bought a variety of items, including 10 ecstasy pills. In the wake of its buying spree, various observers debated whether the artists might be criminally liable for the bot's actions. But while the potential liability of the artists was indeed interesting, another possibility emerged that was even stranger than arresting human beings for something a bot did without the explicit instruction or knowledge of its creators or operators. The authorities could arrest the bot.

In this particular instance, we know a crime was committed: Ecstasy pills were purchased. And if whatever local laws are in play suggest the artists aren't criminally liable for that purchase, then who is, except the bot that committed the act?

Charging robots and other A.I. systems with crimes may seem absurd. And locking up, say, an incorrigibly destructive Roomba in solitary confinement sounds even more preposterous. How exactly do we punish entities whose behavior arises from computer code?

These are the kinds of questions the law professor Gabriel Hallevy addresses in his 2013 book When Robots Kill: Artificial Intelligence Under Criminal Law.

Hallevy, who teaches law at Israel's Ono Academic College, argues that there are both social benefits and legal precedent for applying criminal liability to A.I. systems when they misbehave.

There's certainly a rationale for this perspective. The coming proliferation of robots is creating a fair amount of anxiety, at least among the human punditocracy. Many of those concerns are economic in nature: commentators worry that robots are on the verge of putting everyone out of work. But robot anxiety is broader than that. There are concerns about drones and privacy, concerns about how self-driving cars will make snap decisions when lives are at stake, concerns about what happens when we unleash millions of intelligent entities that have the capacity to make autonomous decisions instead of just following predictable preprogrammed routines. Decades of sci-fi stories have primed us to imagine the worst.

Perhaps our legal system can assuage these fears somewhat. "Criminal law plays an important role in giving people a sense of personal confidence," Hallevy writes. "If any individual or group is not subject to the criminal law, the personal confidence of the other individuals is severely harmed because those who are not subject to the criminal law have no incentive to obey the law." But if we understand that drug-buying bots and self-driving cars must abide by the same rules we all follow, and face similar punishments when they transgress, perhaps some of our anxieties about their potential behavior will dissipate.

Is this perspective fair to robots, though? Essentially, it puts them on the same level as people, even though they're clearly not human. The robot that killed Robert Williams in 1979 had no conception of morality. Neither did the ecstasy-buying bot.

In Hallevy's estimation, such concerns are unfounded. "Criminal liability does not require that offenders possess all human capabilities, only some," he writes. "If an AI entity possesses these capabilities, then logically and rationally, criminal liability can be imposed whenever an offense is committed."

What matters, Hallevy suggests, is not moral accountability or an A.I. system's ability to grasp concepts like good and evil, but rather culpability. If any entity—human or robot—intentionally engages in actions that are prohibited by law, then criminal liability may be imposed. (Sometimes, of course, negligence or a failure to act is also grounds for criminal liability.)

Conversely, robots that are sophisticated enough to be held criminally liable for their actions may also obtain protections under the law that go beyond those your lawnmower may enjoy. "This situation is similar to corporations, which are non-human legal entities," Hallevy explained in an email. "Corporations are subject to criminal liability, and part of that 'deal' is that they have certain basic rights. Consequently, corporations have the right to sue humans, corporations and even their 'owners' (the stock-holders). If we think of AI entities similarly as corporations, we would not see a significant difference."

In his book, Hallevy elaborates on the notion of corporations as a precedent regarding our potential treatment of robots. They're not individuals, and they have no moral sentiments or thoughts or feelings of any kind; yet we often find them guilty of crimes and impose punishments on them, independently of specific corporate employees who may also be involved in a crime's commission.

While A.I. systems may indeed be criminally liable for acts they commit in certain situations, that doesn't mean they're easily or effectively punishable. As satisfying as it might be to deliver 50 lashes to a robot butler who cuts in line in front of you at Walgreens, that form of justice would be meaningless to the unfeeling machine.

But as Hallevy writes in his book, some traditional functions of punishment, like rehabilitation and incapacitation, are applicable to A.I. entities. A robot that commits some criminal act and doesn't learn on its own that such acts are prohibited could potentially be "rehabilitated" through reprogramming. And if reprogramming is ineffective, incapacitation for A.I. systems is largely analogous to incarceration for human beings: A killer robot that's locked up or disabled simply won't be able to kill again, regardless of its rehabilitative capacity.

In one light, the notion of heavily manacled Roombas suggests a police state run amok, a totalitarian future where the government's appetite for discipline and punishment extends to whole new classes of beings. What's compelling about Hallevy's perspective is that it involves neither pre-emption of new technologies nor expansion of the law. Instead of banning advances in robotics before they're even implemented or insisting we need to draft a wide range of new regulations, he argues that "the current criminal law is adequate to cope with AI technology." Whatever brave new worlds are coming, perhaps we're already equipped to handle them.
