Not, as in the earlier post, “better than nothing.” Let me say up front that this whole “social robot” thing scares me. It’s creepy, but the path from here to there is lit up like an airstrip with landing lights.
These are some further reflections from my reading of Sherry Turkle’s truly wonderful book, Alone Together. Turkle doesn’t seem to me to be alarmist. She is a professor of the Social Studies of Science and Technology at MIT. I like that. She is also a clinical psychologist. I like that too. And as an experimenter, she has a touch with the children she studies that I find reassuring; she has empathy, but she persists in getting the data she needs.
In her own uneasiness, she begins with what authenticity means: “Authenticity, for me, follows from the ability to put oneself in the place of another, to relate to the other because of a shared store of human experiences: we are born, have families, and know loss and the reality of death.”
Let’s start now with Anne. She confided that she would trade in her boyfriend “for a sophisticated Japanese robot” if the robot would produce what she called “caring behavior.” And: “She was looking for a ‘no-risk relationship’ that would stave off loneliness. A responsive robot, even one just exhibiting scripted behavior, seemed better to her than a demanding boyfriend.”
Can we really outsource our intimacy needs? What is a “no-risk relationship?” I understand the tradeoff that couples sometimes make. They want a high-intimacy relationship and are willing to take all the heartbreak that comes as part of the package. I understand the tradeoff other couples make to have a more formal, more predictable relationship in which each partner agrees to ride herd on the tendency to emotional extravagance. Neither of those is exactly my own choice, but I don’t find either one morally offensive. Anne is choosing a “performance of behaviors we associate with feelings of intimacy,” knowing that they are inauthentic and, using Turkle’s definition, that they don’t elicit anything authentic in her either. That’s not “better than nothing,” is it?
Let’s take a look at ordinary conversation. Try this one: A thirty-year-old man remarks, “I’d rather talk to a robot. Friends can be exhausting. The robot will always be there for me. And whenever I’m done, I can walk away.” I know friends can be exhausting; I don’t always choose to be with them for just that reason. But what does “the robot will always be there for me” mean? “There” is a little bit troublesome because it doesn’t mean “in the room,” but “for me” is much worse. The robot is not there, in the emotional sense of “there,” and it is not “for you”—or against you. You don’t matter to it at all.
Bette says that when you prepare a plot of ground for seeds and then don’t plant and nourish them, you are just laying down a welcome mat for weeds. I know that’s right about weeds and I suspect it is right about robots. What kind of lives are we living that offer such welcome mats for weeds? Here’s a “back of the envelope” list from Turkle. She calls this invitation to weeds “the robotic moment.”
As I listen for what stands behind this moment, I hear a certain fatigue with the difficulties of life with people. We insert robots into every narrative of human frailty. People make too many demands; robot demands would be of a more manageable sort. People disappoint; robots will not. When people talk about relationships with robots, they talk about cheating husbands, wives who fake orgasms, and children who take drugs. They talk about how hard it is to understand family and friends.
I hear such resignation in those settings. People make too many demands—and there is nothing we can do about it. People disappoint us—and we can find no way to manage or to redeem the situation. Why is the husband cheating? Why is the wife faking orgasms? Why are the children taking drugs? Does it matter? Is there really nothing we can do? The “robotic moment” seems to be made up of people who have judged their own ability to understand and deal with these situations, or at the least to endure them with grace, as entirely inadequate. Reducing the need for action by outsourcing our emotional “relationships” to social robots seems, somehow, “better.”
One of the defenses of this tradeoff that Turkle introduces, then discards, is that these “relationships” might not be the very best thing, but there is no harm in using them.
Dependence on a robot presents itself as risk free. But when one becomes accustomed to “companionship” without demands, life with people may seem overwhelming. Dependence on a person is risky—it makes us subject to rejection—but it also opens us to deeply knowing another. Robotic companionship may seem a sweet deal, but it consigns us to a closed world—the loveable as safe and made to measure.
This seems right to me. If we learn “companionship” without demands, how will we accustom ourselves to humans? Can there be a love relationship that is “safe and made to measure?” Is there any way to open yourself to intimacy without risking rejection? I don’t think so. One of my several brothers uses the metaphor of “hostages” to describe how intimacy is built. When I tell you something true about me, something that would hurt me if it were not received and honored, I give you a hostage. Then you give me a hostage and each of us keeps an eye on how the hostages are faring. Eventually we come to know and trust each other. That doesn’t mean that nothing bad will ever happen to the hostages. It means that when something bad does happen, you and I will have the resources to sustain the relationship.
How would I accept a hostage offered to me by a robot? Could I offer a hostage to a robot?
And there is one more thing that is bothering me. It is the way we redefine ourselves to make us fit the world of robots. Robots don’t even need to exist for us to begin doing this. All we really need is to identify and accept “the performance of intimacy” as good enough. When we do that, we redefine what “intimacy” means for us and for any human friends we might have. And it isn’t just intimacy.
The meaning of intelligence changed when the field of artificial intelligence declared it was something computers could have. The meaning of memory changed when it was something computers used. Here the word “trust” is under siege, now that it is something of which robots are worthy.
Turkle’s point here, as I understand it, is that we have changed what “intelligence” means so that it can be a term that is commensurable—robotic and human “intelligence” can be judged, that is to say, by the same standard. And the meaning of memory changes—what it means to remember something—when we adopt a common standard. And now, Turkle says, it is “trust.” There won’t be two kinds of trust: one for humans and one for robots. The meaning of the word for us will change to accommodate the two kinds of users on a level playing field.