hitchBOT Meets His Maker: What a Robot's Murder Tells Us About Ourselves

This week, thousands of robotics enthusiasts, Philadelphians, and Canadian children are feeling sad about hitchBOT, a hitchhiking robot that traveled across Canada and Europe, aided only by cuteness, programmable charm—and our tendency to project human emotions onto inanimate objects.

Two weeks ago, the Canadian-built hitchBOT left Marblehead, Massachusetts. The goal: San Francisco. It was supposed to be an On the Road-meets-WALL-E jaunt and a testament to the goodwill of human beings.

Instead, over the weekend, somebody in Philadelphia destroyed hitchBOT. Game over.

The timing of hitchBOT's destruction felt oddly appropriate. Late last week, the New York Times published a startling article about Xiaoice, a Microsoft chatbot that has gained a major following in China. Xiaoice carries on lifelike conversations with its users. Millions of people chat with it every day. In a Her-like twist, many express affection, even love, for their digital companion.

“We’re forgetting what it means to be intimate,” Sherry Turkle told the Times. “Children are learning that it’s safer to talk to a computer than to another human.”

My inclination is to agree with Turkle about Xiaoice—and to feel totally gooey and glum about adorable little hitchBOT. In this inconsistent blend of emotions, I doubt I’m alone.

Both are robots designed to elicit emotions from people, and both have demonstrated how easily even an inanimate object can do so. So why do I feel creeped out by Xiaoice but sad about hitchBOT? Is it just a cultural thing: a xenophobic fear of smartphone-wielding strangers versus a comfort with cuddly robo-travelers? What tools do we have to navigate the line between a creepy robot-human interaction and an adorable one?

These questions aren't strictly academic. From Clippy, Microsoft's paperclip assistant, to Siri, millions of people interact daily with human-mimicking computer programs. Earlier this summer, Aldebaran, a French robotics manufacturer (I've written about them before), released Pepper, a sophisticated "emotional robot" with a "focus on affection" that costs just a bit more than a MacBook Pro. Microsoft has a creepy new commercial about how your children "will grow up with Windows 10." And as Xiaoice illustrates, shades of Her are materializing faster than many of us, sitting in theaters two years ago, would have expected.

In the face of this strange new roboticized world, Turkle's style of response about "what it means to be intimate," which assumes that there's something corrupting about intimacy between humans and non-humans, seems totally off base. After all, we already have robust social precedents for this kind of intimacy.

There’s nothing new about seeing humanness in the world around us. We anthropomorphize, finding the human (anthropos) form (morphe) in all kinds of relationships, both natural and supernatural.

Starting with the natural, consider the case of domestic animals, our pets. Dogs and cats are independent, flesh-and-blood beings, of course. But the pets that we know and love are, in part, projections of our minds. Basically, we anthropomorphize the hell out of them. We give them names and imaginary voices, and we speak of their motivations in unabashedly human terms. We claim to understand their internal lives, and we make them characters in our own stories.

Does this make the love between people and their pets any less real? Well, no. There is something marvelous and, to my mind, very true about the bonds that can develop between people and animals. But in some ways, we are also alien to each other, a gap that we bridge (at least partly) with our minds.

I don’t think it’s solipsistic to say that we are always, at some level, intimate with entities that are partially of our own creation. In any kind of relationship, we have to grapple with the disparity between our expectations and realities.

Children may only just now be learning, as Turkle suggests, that "it's safer to talk to a computer than to another human." But if Old Yeller and many, many people's child- and adulthoods are any indication, humans have long known that it's sometimes nicer to talk to a non-human entity, one we sort of imagine can comprehend language.

There's an analogy here, too, with religion. A believer might assume an intimacy with a god she can neither see nor hear, and whose essential nature she is unable to probe. You need not be an atheist to consider the anthropologically inflected perspective that believers project their ideas, desires, and expectations onto their objects of worship. Nor is the idea that human beings might have created God in their own image foreign to theology.

I don’t mean to equate animals, gods, and robots. The point is that we’ve long formed relationships with non-human entities whose sentience is non-existent (in the case of robots, imaginary friends, and beloved fictional characters), impossible to plumb (in the case of animals), or difficult to confirm (in the case of deities).

When it comes to these new tools, the question is not whether we should form relationships that involve a substantial amount of projection and anthropomorphism. Instead, we have to ask how these new relationships are unique. Does it matter if a robot is embodied (like hitchBOT) or disembodied (like Xiaoice)? Does it matter if it's the product of a fun, communal experiment (hitchBOT) or of a multinational corporation (Xiaoice)? How quick should we be to judge the intimacies of others?

As humans, we are often the animators of our own worlds. In this sense, the study of religion might offer some of the smartest guidance for the strange future of Xiaoice, Pepper, Siri, and hitchBOT. Few other areas of human endeavor, after all, have spent more time trying to disentangle the dynamics of human relationships with the non-human. We may not all believe in God. But increasingly, we are all conversant with non-human things.