Robot Kills a Worker in Germany… Who’s to Blame?

Still from 2001: A Space Odyssey (MGM, 1968)

Since the onset of the Industrial Revolution, Western nations have harbored a collective anxiety about the replacement of human jobs by mechanized labor. Economists have long tried to predict which jobs will be lost to machines, but they offer no guarantees. People fear being replaced.

But robots aren’t just taking jobs. Last Monday, at a Volkswagen plant in Germany, one robot killed an employee.

The machine in question—a sophisticated automotive assembly line robot—normally operates within a circumscribed area, grabbing parts and manipulating them according to a predetermined process. As a team of workers was setting it up, however, the robot grabbed one of them—a 22-year-old man—and crushed him against a metal plate.

Volkswagen spokesman Heiko Hillwig has indicated that human error was likely to blame, not the bot. Critics are not so sure. Speculation that the robot is, or should be, to blame has already run rampant online. This isn’t just idle speculation on Twitter. It’s a reflection of much deeper fears about the world to come.

It was the cyborg!

If you’ve ever kicked a TV or mumbled threats at your computer, you’re familiar with how easy it is to assign human agency to unconscious machines. We seem especially prone to anthropomorphizing them when they frustrate us. My toaster is invisible to me when it’s working, but if it misbehaves I find myself angrily unplugging it, chastising it, and otherwise making my will known.

Today, our tendency to blame inanimate objects has begun to merge with that long-standing human fear about the rise of robotics. Even during the Great Depression, workers were fretting about robots taking jobs. Newspapers ran sensational stories about bots and other machines “turning” on their masters and killing humans.

According to one 1932 story that ran in Louisiana, an inventor was demonstrating his cyborg’s precision with a firearm when the cyborg grabbed the gun, turned on the master, and critically wounded him. Of course, this never happened. As nearly as historians can tell, the inventor accidentally discharged the firearm while nestling it in his invention’s “hand.” Still, the market for such stories speaks to our simultaneous feelings of fascination and fear about robots.

However we might feel about the Volkswagen robot, German prosecutors will make the ultimate determination of who bears legal blame for the death of the young worker. German news agencies report that prosecutors are currently considering charges and debating whom to blame. As factories become further roboticized, more of these accidents will surely occur. We are in uncharted waters. The way we assign blame in these early cases will set a course, influencing how we define the agency and blameworthiness of machines.

The case for suing robots

There are many parties against whom German prosecutors could press charges, and their decision will have significant moral and economic consequences. Prosecutors could sue the business owner, the robot’s hardware designer, the robot’s software designer, or—most interestingly—the robot itself. Each option would have major consequences for future incentives and innovations in robotic labor.

Suing a robot might sound crazy, but the other options aren’t very good, either. It would be unreasonable to hold a factory owner accountable for every machine that she owns; as long as owners have taken certain precautions, we usually absolve them of blame for what happens in the course of work. It’s even less intuitive to consider designers responsible. Few people blame Toyota when they crash their cars. But while in car crashes our moral sensibility and intuition cause us to blame users (that is, drivers), that seems less likely in the robotics context.

Surely no one would want to blame the workers using the robot for the incident in the VW factory, so we aren’t left with many appealing options. Our intuition to blame the bot may not be so crazy after all. We blame drivers who crash their cars because we tend to view drivers as the agent responsible for operating a machine. If we come to believe that robots themselves are agents whose behavior, like our own, is not entirely predictable, we may want to sue bots themselves, rather than user-workers. There is no guarantee that we will come to feel this way or that we must, but it is certainly one possibility.

Whether or not we are replaced by robots, chances are, in the coming decades, more and more of us will work beside them. Right now our moral intuitions about robots are not well tuned, and we’re facing new problems that stretch our judgments. All of us, not just prosecutors, will need to start coming up with answers soon.


  • Jim Reed says:

    It’s a power tool. This might be getting ahead of things on the question of the dangers of computer intelligence. One article said if a computer got too dangerous you could just unplug it. But it gets much more complicated, if you want to get into the intelligent machine question. The danger might not be from a machine, but from a world of networked intelligence. It will start simply enough, with a network designed to use intelligence to steal tiny amounts of money smarter than the competing network intelligences can. The danger doesn’t originate with the machines, but with the owners behind the scenes who are using these global intelligence agents to win the money game. If you are losing in this game, then your only choice is to let your intelligence agents move in a direction that might be more dangerous, but gives you a better chance to win. Then the other guy has to do the same thing, only make his dangerous intelligence agents faster.

  • Sam says:

    There is no one to blame.

    Legislation should be in place though to ensure the highest reasonable level of safety. With these measures enacted all the relevant parties could be held responsible for “criminal negligence” at that point, much as they would if a building collapsed due to not living up to building standards.

    Also all such deaths and injuries should be investigated thoroughly from every possible perspective.

  • NancyP says:

    +1 . Investigate. A designer should be able to anticipate likely safety issues, but inevitably there can be an unanticipated chain of events. If the designer and manufacturer of the robot made reasonable safety protocols and emergency auto-shut-off capacity before installation, and the unit was properly installed, maintained, and users properly trained, then an unanticipated death or injury MAY indeed be criminally “blameless” – once. “Blameless” may not indicate lack of liability, though, given the higher burden of criminal vs civil law.

  • nightgaunt says:

    A cyborg has living parts mixed with robotic ones.
    Robots can only do what they are programmed to do, no more. They are machines, not beings.
    Accidents will happen with robots as with any other human-made mechanism. Sue the robot? Try suing your car, or a gun, or your sewing machine.

  • nightgaunt says:

    The smartest robot on the planet is less intelligent than a common roach. Fears are from the ignorant.

  • Jim Reed says:

    What if there were a billion roaches, and they could be networked?

  • nightgaunt says:

    Try it. See if they can all actually work as a single unit and be synchronized once you find a way of penetrating their brains in the right places, etc. Maybe in 10 years you can come back with results.

  • Jim Reed says:

    If all that proves too expensive, then the best thing might be to do without and just settle any issues out of court, or have all employees sign a waiver.
