This is (yet another) reflection on a dilemma presented by the Netflix show HUMANS. As the series develops, it is showing more interest in the fundamental robotic dilemma: just how is it that “they” are different from “us”? That is not the part that has interested me so much. What interests me is how to integrate them into ordinary human society.
As I have already described in “Robot Servants and Friends (and Lovers),” Joe Hawkins, overwhelmed by the demands of taking care of the house and the kids during his wife’s absence, goes to the store and buys a robot—a Synth—to help around the house, just as the ads promise. But “Anita,” as Sophie, the younger daughter, names her, is way too good at everything.
This causes some predictable problems. When Laura, the mother, offers to read Sophie a story, Sophie says she would rather have Anita read it. Her mother, she says, “reads too fast,” implying, as I understand it, that she always seems to be in a hurry to finish “the task.” Anita is never in a hurry. “But reading bedtime stories is my job,” Laura protests, and she is right. But she has a lot of other jobs too, and she is not nearly as competent as Anita is.
Some other problems are not so predictable until you see them played out. Then you say, “Well of course.” Mattie, the older daughter, is a computer nerd and objects to Anita’s presence in principle. The first morning Anita is there, she prepares and serves breakfast. It is not something the un-augmented Hawkins family is used to.
Mattie carps at Anita, “Anita, brown sugar. I hate white.” Anita turns away to get the brown sugar, but Laura intervenes, “Anita, stop.” And, to Mattie, “She’s not a slave.”
Mattie responds, “That’s exactly what she is.”
So this isn’t a problem of the “essential humanity” of the Synths, of whether some undefinable “human essence” exists or not. This is a problem of what happens to a small group—it’s a family group in this case, but it wouldn’t have to be—when the norms of ordinary social discourse are flagrantly broken in the presence of everyone. Mattie’s point is that Anita doesn’t deserve to be spoken to politely. Laura’s point is that language like that damages the family. They are both right. [1]
The first point is that Mattie, in using such language in the presence of the rest of the family, damages herself. Let’s leave the factual case aside: “slave” is no more applicable to a functioning family android than “free” is, but the factual question is not the issue here.
There is a way of conceiving this interchange in much more than utilitarian terms. Mattie needs to assess the situation and formulate a response that aligns what she thinks is going on with how she feels about it; she needs to express that in a way that is understandable in both senses of the term, the descriptive and the emotional, to the people who are there and to whom she wants to communicate. Strictly speaking, there doesn’t even need to be an object there, just an occasion. For the purposes of this model of communication, Mattie could be alone in the middle of a desert, addressing a lizard in such terms, and the damage I am talking about would be done to her anyway.
Sherry Turkle wrestles with this issue in her book about robot/human interaction. There is something entirely genuine about the formulation and expression of human feelings “in interaction with” the robot. [2] Warm and loving expressions of emotion addressed to a dog or a cat or a robot do the same good to the human who formulates them. Similarly, abuse directed at a dog or a cat or a robot damages the human actor even if no one else is present.
I have not, in the paragraphs above, established that communication ought to be understood in this way, and you may not find it as persuasive as I do. I am saying only that if you think of communication that way, you will understand why I am arguing that Mattie damages herself in treating Anita as if she were a slave.
The second part of the argument, probably the one Laura had in mind, is that expressions of that kind damage the family. [3] Mattie, in the early episodes of the season, is a snarky, eye-rolling teenager who makes lots of nasty remarks about her siblings. This is different, and Laura knows it. The question that is open before the family is whether Anita is a safe “person” to verbally abuse. Will family exchanges now be marked by this kind of nastiness on the grounds that “it” doesn’t care, or shouldn’t?
So far as its effect on the family is concerned, according to this model, it doesn’t matter whether Anita takes offense or doesn’t even notice. Everyone else will notice, and the gathering, whatever it is, will be taken another step toward everyone’s laziest and meanest instincts, instincts that are always there, ready to be activated like some kind of dormant virus.
Laura is right to use the “slave” metaphor to describe Mattie’s nastiness. Later, she finds herself apologizing to Anita for some things that have been done to her, and the apology helps Laura a good deal.
[1] In the case of this particular show, it turns out that “Anita” is also a self-aware robot named Mia and the Mia-consciousness is something this Synth has intermittent access to.
[2] Since the robot does not “respond,” it is not technically an “interaction,” but the human’s expression is framed as if it were part of an interaction, so “interaction” feels like the proper frame of reference.