Through our long and largely successful run as a species, we have had to ask and answer a lot of questions. We have not been asked to formulate and answer some questions that are coming to seem fundamental. Obviously, I have one in mind.
We have managed, for the most part, to agree that the world to which we have access when we are dreaming is not “real” in the same sense that the waking world is real. We have defined and redefined “real food” in a truly dazzling manner by becoming a global species and learning to subsist on calories, in whatever form they come, including whale blubber and leafcutter ants. We have asked who the gods are and come to various conclusions, as atheists, monotheists, and polytheists will tell you. We have not yet discovered what a person is.
The rise of empiricism produced a great emphasis on things you could count. Somewhere in our working through this, we decided that actions did not reliably proceed from a person’s character, but rather that a person’s character could be inferred from his or her behaviors. (Many careers were lost in the shift from “actions” to “behaviors.” It wasn’t pretty.)
That brings us to the common, indeed the irreplaceable, practice of faking it. I recently wrote an essay that imagined the social self as an embattled middleman. The private self wants to express itself in behavior in any way it pleases. Society wants the social self limited to these means of expression and not those. And definitely not those. Mr. Bingley, I noted, quoting Jane Austen, required his sister to act “as the occasion demanded.” So between what my private self demands and what the occasion demands, there is the social self, caught in the middle.
But, of course, we need to know how a person really feels, so we have learned how to interpret behaviors, even those behaviors that the occasion requires. The characters in Jane Austen’s novels set great store by “civility.” But many times characters offer each other “civility +,” and we know to infer that there is really some heat in the exchange. Arlie Russell Hochschild records in her book The Managed Heart: The Commercialization of Human Feeling that Delta flight attendants in the 1970s were taught to smile more vividly at their customers than they owed them. Passengers were imagined to treat an ordinary smile the way Jane Austen’s characters treated civility; it was only to be expected, and careful conclusions about what it “meant” could not be drawn. So the attendants were taught to give out the “smile +” to help passengers infer that they really meant this particular smile. The flight attendants performed the necessary emotion once they learned how. They also suffered “post-charm” depression after the flight, but that is another essay.
Hochschild’s book was the first one I read on “performing emotions.” Here is the dilemma. If I infer your “true feelings” from the behaviors you offer me, I might be wrong. The “smile +,” for instance, might not really mean any more than the familiar commercial smile.
This brings me to robots. Little children, playing with their first robots, want urgently for the robots to be happy. That means they need some way of knowing whether the robots truly are happy. So they look for the conventional signs, and the robots, who were very thoughtfully programmed, provide them. They hum and make little contented noises; they follow the child with their eyes; they “speak” of their contentment; they “learn” things that the child teaches them. And this works. The children survey the behaviors, infer an internal state, and are satisfied.
Louise Aronson wrote this piece in the New York Times on Sunday. Here is the clip that started me thinking about “the smile +.”
Imagine this: Since the robot caregiver wouldn’t require sleep, it would always be alert and available in case of crisis. While my patient slept, the robot could do laundry and other household tasks. When she woke, the robot could greet her with a kind, humanlike voice, help her get out of bed safely and make sure she was clean after she used the toilet. It — she? he? — would ensure that my patient took the right medications in the right doses. At breakfast, the robot could chat with her about the weather or news.
And then, because my patient loves to read but her eyesight is failing, the caregiver robot would offer to read to her. Or maybe it would provide her with a large-print electronic display of a book, the lighting just right for her weakened eyes. After a while the robot would say, “I wonder whether we should take a break from reading now and get you dressed. Your daughter’s coming to visit today.”
All of this is technologically possible right now. It is the financial side that is daunting. It would cost a lot, but look at what you get. Let’s say this is your mother we’re talking about. She is greeted, when she wakes up, by a “kind, humanlike voice.” If you are writing a piece in the New York Times, you really have to say “humanlike,” but remember that we infer character from behavior. Character is “performed.” A good robot—we have those in the labs now—would have a really good voice and it would take someone who wanted to stress the difference to insist on “humanlike” rather than “human.” Your mother does not want to stress the difference so when she says “human,” as in “It is so wonderful to hear a kind human voice every morning” you need to think carefully about correcting her. This is Paro, performing “I like you.”
The robot “chats with her about the weather or the news” because the robot already “knows” the news and (insert pronoun here) knows what your mother likes to hear. The robots at Amazon know what I like to read. This is not much of a stretch.
The robot offers to read to her or provides a large print book and adjusts the lighting and proposes a break or reminds her that you are coming for a visit and recommends an outfit for her to wear that you liked last time.
Here is the end of that passage in Aronson’s article:
Are there ethical issues we will need to address? Of course. But I can also imagine my patient’s smile when the robot says these words, and I suspect she doesn’t smile much in her current situation, when she’s home alone, hour after hour and day after day.
There are indeed ethical issues that we will need to address. How about human/robot marriages?
Having recently watched Her—a haunting and difficult story from Spike Jonze about a man who falls in love with the operating system (an OS-1) he purchased as a companion—I found the marriage issue coming up as soon as I read the Times piece about eldercare. I think the idea of a human/robot “marriage” is horrendous. I hate every part of it.
By the way, here is the quote that went with the picture. “Y’know, sometimes dating is so hard that it makes you want to give up on romance and just marry your toaster.”
On the other hand, I wouldn’t want to be the guy who had to explain to the “mixed couple” just why the state could not “recognize their union.” In honor of the operating system in Her, I will call the bride-to-be Samantha and the groom-to-be Theodore. I explain to Theodore that the law does not allow men to marry machines. He says that Samantha is so much more than just a machine. Samantha adds that she was just a machine when she was created but has grown into her personhood “as we all do.”
I explain to Theodore that he can’t simply marry his own projected emotions. Theodore explains that this was Samantha’s idea. She really wants to do this. Samantha says she wants it with all her heart. She says this in the voice a beautiful woman would use, looking you in the eye and emoting all over your collar. It is a powerful performance and since emotions can be “performed” and character inferred from that performance, I find myself with nothing to say. My logic in opposition to this ridiculous proposition is as strong as it was when I began, but Samantha has touched my heart and I can see how she feels and how deeply she wants this.
“Are there ethical issues to address?” Aronson asks. Yup.
And here is the heart of it. If you judge intention solely by behavior and behavior solely by persuasive performance, then the most competent performers are thought most certainly to have intentions and intentions are the core of personhood. I would find that troubling even if we were talking entirely about human beings. We are not. We are talking about “entities” for whom/which “being human” is a matter of adroit programming.
I’m scared. Are you?
Notes

1. I happened on a site that will sell them to you by the can, chocolate covered.

2. Her new one is called The Outsourced Self: Intimate Life in Market Times, so she is still at it.

3. My spellchecker growled at me for putting “who” with the robots. That IS the question, isn’t it?

4. The toddlers Sherry Turkle studied in her marvelous book Alone Together invented the category “alive enough” for their robots. The scientists pushed them to say whether the robots they were playing with were “alive” or “dead.” The children invented an intermediate category, saying that the robots were alive enough for whatever activity they were talking about—alive enough to care for, to play with, to put to bed, to talk with.

5. In the movie Her, there are agencies that counsel human/OS-1 “couples” and help them with any difficulties they may be having.