The Avengers, Rogue Roombas, and Robot Accountability

Ultron, looking not quite worthy of humanity's abiding trust. From Avengers: Age of Ultron, by Marvel Studios.

Artificial intelligence is taking over—at least in Hollywood. In 2013, there was Spike Jonze’s acclaimed Her. This year, there’s been Chappie and Ex Machina, both centered on the trials and triumphs of AI machines.

The latest arrival in this robo-extravaganza is Avengers: Age of Ultron, out last week from Marvel. The narrative is fairly simple: Tech genius Tony Stark (a.k.a. Iron Man) builds an AI program (Ultron) designed to protect humanity from evil aliens and other threats; Ultron goes rogue; our superheroes step up to the rescue.

Amid the explosions and flirty banter, The Cubit detected serious questions about human agency, creative power, and the future of technology. Cubit co-editor Michael Schulson and Yale Law School student Hilary Ledwell got together online to discuss killer Roombas, golems, and whether Tony Stark would be legally liable for the depredations of his fancy robot.

Michael Schulson: You’ve been studying AI and the law, so I’m wondering: is the legal system ready for a robo-killer like Ultron? 

Tony Stark (Robert Downey Jr.) using J.A.R.V.I.S., the AI computer that, according to the Marvel Cinematic Universe Wiki, “runs [his] Mansion and Stark Tower and serves as a user interface in all his Iron Man Armors, giving him valuable information during combat.”

Hilary Ledwell: Definitely not. If Skynet or Ultron goes operational, we’re in trouble. Self-driving cars? Friendly house bots? Even something J.A.R.V.I.S.-like? That, we could totally handle.

MHS: Wait, what’s the difference between Ultron and a friendly house-bot, really? Besides the global rampaging stuff, obviously.

HL: Well, I suppose one big question is what we mean by “ready for.” Ultron seems bent on causing global destruction. I think the law is bad at stopping those kinds of actors. Gradual change in the form of robot butlers and self-driving cars? I think we’re more ready to design tort and criminal law regimes to handle those.

MHS: That makes sense. You can’t deal with Ultron, because he will steamroll you. But if you want to sue your Roomba, you can sue your Roomba.

HL: Exactly. If my Roomba eats my cat’s tail, I can totally sue the Roomba, or the Roomba’s hardware maker, or its program designer. Or all three. Depending on the decisions we make and how we want incentives, innovation, and legal remedies to work. I don’t think Ultron would go well in a courtroom.

MHS: Actually, I was kind of kidding about suing the Roomba itself. I figured it would just be the manufacturer or the hardware maker on the hook. But are courts ready to give legal standing to a robot?

HL: I think that would be totally weird and unwise, but there are actually a range of legal positions that suggest courts might do just that. One more reasonable position is to place respondeat superior liability on working bots. That’s master-servant liability, an old rule that makes the master (or employer) liable for the injuries caused by the servant (or employee).

Wackier proposals include criminal liability for bots themselves. If, for example, a weapons system went rogue and committed murder, it could undergo incarceration (being forbidden from performing its designated functions) or capital punishment (being shut down).

There are obviously lots of problems with this, and it strikes us as absurd, but there are in fact academic papers that advocate for that structure. After all, there are situations in which it seems intuitively unfair to hold designers liable. Also, wouldn’t you love to be able to shut Ultron down? Should we really just sue Tony Stark for Ultron’s murderous rampages?

MHS: That’s what I was wondering! Does Stark have to pay damages for his rampaging killer bot, or can he just say, “Hey, that was Ultron’s fault, sorry”? At some point, after all, technology does escape its original intentions. That’s the whole point of AI, isn’t it—to be so intellectually agile that it can do things its makers didn’t specifically intend for it to do?

HL: Right. Scholars refer to that property as “emergence,” and, indeed, it’s often identified as the thing that will distinguish true AI from what we have now. That’s one of the central challenges of holding designers or sellers liable.

MHS: Yikes. I feel like it’s much likelier that a supersmart vacuum cleaner will kill my cat than that my security software will take over the world. (Of course, I don’t have a cat.)

HL: Your cat or your toes. You have toes (I assume). So, anyway, I agree—the bad news is that we’re totally not ready for Ultron. The good news is that we may be ready for Roombas. But the middling news is that even that may not be all that easy.

MHS: Because Roombas aren’t people?

HL: Well, yes, and because it might still be hard to figure out who to sue to compensate you for your rogue Roomba. Or your rogue oven that burns your smarthouse down while you’re gone, or your rogue Jarvis-butler that gets hacked and opens your house for a burglar.

MHS: Theologians are starting to worry about all this but-are-they-people? stuff, too. There’s a pastor in Florida who’s been looking at how to win over AI souls when the time comes.

HL: Ah, interesting. I wonder how you win over an AI soul? And I suppose you might worry both about how we treat AI souls for their own sakes and about the most humane way to treat things humans will almost certainly feel empathy for, while remaining agnostic about the nature of AI souls.

MHS: Should I be nice to Siri? Can I kill my Roomba? Stuff like that?

HL: I think so. Studies suggest that, especially once AI becomes embodied, we’ll probably start to feel empathy for our bots. We might think it dehumanizing to abuse things we have empathy for, whatever the nature of the thing. We could analogize to the justifications for laws against animal abuse: we might have those laws because we’re sure animals feel pain, or we might have them even while agnostic about what dogs feel, because we think it’s dehumanizing to abuse the kind of thing that is a dog. And studies at MIT have shown that people with less empathy have less of a problem kicking cute little robot butlers, even if the bots themselves are fine regardless.

That still seems to be a different issue than saving AI souls though, which is also totally fascinating, both for what it says about AI and what it says about evangelicals. 

MHS: Or what it says about the capacity of certain pastors to grab media attention and run with it.

I still feel like most of these accounts of AI in the general culture—whether breathless science journalism or movies like Avengers: Age of Ultron—are mostly about your stance toward certain big topics. Souls: convert them all! The future: it’s scary! (Or, the future: it’s going to be AMAZING.) And so on.

HL: The anticipation is giving us quite the collective adrenaline rush. I suspect, though, that we’ll have very boring Roombas, self-driving cars, and smart refrigerators before we have the metaphysically reflective Ultron.

MHS: Besides the movie Her, I can’t really think of a film that portrays AI in a serious, mundane way. It’s more Ultron-smashes-the-world type of stuff.

HL: I think Her is probably one of the most sophisticated portrayals of AI in contemporary pop culture. Partly because it’s so mundane, and partly because it focuses less on the intense, amazing, wowing technology and more on what I at least think are much more important questions about the way technological evolution will shape human relationships—both to AI and to each other.

MHS: But we keep getting Ultron. Or surreal-murder-mystery-show AI, as in Ex Machina.

HL: I’ve heard Ex Machina is fantastic, as in fantastically entertaining, like Ultron. Hey, you can’t fault Hollywood.

MHS: Oh, sure, creepy AI sells, and it gets at these Grand Themes—does technology make us like gods? Do we make the tools, or do the tools start remaking us? In case the significance isn’t obvious enough, the climactic scenes in Ultron are set in a church.

HL: We’re fascinated with agencies outside ourselves, and that’s what emergent AI would be. Depending on your metaphysical commitments, we could make a God of sorts, a false one, or nothing of the kind. Ultron is definitely playing with those questions. Maybe not the most sophisticated treatment we’ll get, but at least it asks.

MHS: It’s funny, the God question cuts both ways. When you make something that can go far beyond yourself, are you playing god, or are you making a god? Ultron seems to lean toward the latter.

HL: Yeah, sort of leans to the latter. Plus the trope of getting punished for human hubris and the worship of false idols, i.e. Ultron. But then they also make Red Dude [Ed. – “Red Dude” is a benevolent AI character called Vision, who ends up fighting against Ultron]. Or maybe Ultron makes Red Dude? The movie is not super clear.

MHS: Red Dude gets made. He’s basically a golem. New technology, but some things never change.

HL: Wait, what about the Red Dude evokes the golem for you?

MHS: He’s weirdly personality-less, for one thing. He has an animating symbol on his forehead, for another. (Okay, it’s a rock, not the Hebrew word for truth, as in some golem legends, but it’s close). And he’s created to protect the people from a terrifying outside force—very much in line with golem legends.

Actually, the whole movie is weirdly similar to H. Leivick’s play The Golem, from the 1920s.

HL: What happens in Leivick’s play? Who does the golem protect, and who is Red Dude protecting? Is there a similar sort of moral going on?

MHS: In Leivick’s version, the golem goes out of control and ends up leading to the deaths of many Jews, even though he’s supposed to protect them. So it’s closer to Ultron than the friendly Red Dude. And it ends up becoming this reflection on security, and on the danger of autonomous creations.

HL: Right. I seem to remember there are also alternate iterations of the golem story, where the protection fantasy goes off without a hitch? Maybe more like Red Dude?

MHS: Exactly. In some golem tales, the golem is more like benevolent Red Dude. But in Leivick’s play, the golem freaks out and kills Jews.

Yiddish theater classic! And Marvel inspiration?

HL: A stunning combination. It sounds like we’ve been fascinated with the idea of AI as security for a long time. But then there are pretty divergent ideas as to how that will go for us? Red Dude v. Skynet?

MHS: Wait, what’s Skynet?

HL: Oh, Skynet is the thing that is supposed to secure humanity but goes rogue and sics the Terminator on humanity in the Terminator movies. Sort of like a more clever Ultron.

MHS: Oh, right! Yes, Red Dude v. Skynet.

HL: Well, if we’ve been wondering since the days of Jewish folklore, we may keep wondering. Who knows. Hope we don’t get any unpleasant surprises, especially since we still seem focused on making golems/AI our security.

Do you think there’s a distinction in the Golem stories between the kinds of human attitudes that bring about “successful” golems vs. rogue ones?

MHS: Oh, that’s a great question. I don’t know. For the Leivick play, the timing is interesting, in that he’s writing during the early years of Zionism—in other words, when certain Jews are trying to exercise power in a very new way.

In Ultron, it’s straightforward hubris, mixed with ends-justify-the-means thinking, that seems to make things go astray, yes?

HL: Yeah, Ultron as punishment for hubris, with Red Dude as reward for pure intention? One way to rationalize the difference? I’m not sure it’s the right one, but interesting food for thought.

MHS: Exactly. Only after Tony Stark learns his lesson and comes up humbled is he able to create Red Dude.

HL: Right. Though of course one wonders whether the real world will empirically reward good intention, and whether that’s a good way to predict if we get Red Dude or Ultron.

MHS: That’s certainly a template we’ve used in other moral conversations about technology: the idea that, by examining and coming to terms with hubris, we’ll build something better. You see that in the environmental movement. You see that applied, with scattershot results, to GMOs today.

HL: That’s a great point. The results are scattershot, I think, in part because those who are predisposed to such self-reflection are often not predisposed to the kind of ambition one normally sees with grand, holistic (and of course at times disastrous) innovation. The two may be ships passing in the night. But one hopes not!

MHS: Now I’m thinking of all the mushy emotional parts of the Avengers. “We need to all pull together as a team, guys, to solve the apocalypse.” But I think that’s an important point: we tend to parcel off the self-reflection (nervous East Coast intellectuals) from the ambition (Silicon Valley pioneers).

HL: Don’t think it has to be true, and hope it’s not! Oy, emotion and superhero movies go together like waffle iron and face. I’m not sure how successful those moments were.

MHS: Avengers: Age of Ultron is somehow not a vehicle for the subtleties of human relationships.

HL: Yeah. Hopefully our desires for those commentaries can be sated elsewhere. But the gesture was valiant.