
Programmed for Love

In a skeptical turn, the MIT ethnographer Sherry Turkle warns of the dangers of social technology

Sherry Turkle (Photography by Erik Jacobs)

Imagine standing in front of a robot, gazing into its wide, plastic eyes, and falling in love. Your heart revs up, and you hope this Other—this humanoid machine—turns your way again, tilts its head in interest, likes you back.

It happened one summer to Sherry Turkle, at a lab at the Massachusetts Institute of Technology, where she is a professor studying the impact of technology on society. She met a metallic robot named Cog—made to resemble a human, with moving arms and a head—which was programmed to turn toward whoever was speaking, suggesting that it understood what was being said. To Turkle's surprise, she found that she deeply wanted Cog to interact with her rather than with a colleague who was there that day. She realized this human-looking machine was tapping into a deep human desire to see it as alive—as good a companion as any human. She describes it almost like a schoolgirl crush.

The experience unnerved her, she says as she recounts the story one recent morning in the kitchen of her townhouse here. A trim woman with chin-length, dark hair and a ready laugh, she shivers at what she now views as a creepy moment: "I understood what I felt, even though I know that there's nobody home in Cog."

She has spent some 15 years since that day studying this emerging breed of "sociable robots"—including toys like Furbies and new robotic pets for the elderly—and what she considers their seductive and potentially dangerous powers. She argues that the growing trend in robotics toward machines that act as if they were alive could lead people to place machines in roles she thinks only humans should occupy.

Her prediction: Companies will soon sell robots designed to baby-sit children, replace workers in nursing homes, and serve as companions for people with disabilities. All of which to Turkle is demeaning, "transgressive," and damaging to our collective sense of humanity. It's not that she's against robots as helpers—building cars, vacuuming floors, and helping to bathe the sick are one thing. She's concerned about robots that want to be buddies, implicitly promising an emotional connection they can never deliver.

The argument represents a skeptical turn for a researcher who was one of the first humanities scholars to take human interactions with computers seriously as an area of study. Because she began her academic career at MIT in 1976, she had an early look at the personal computer and the Internet, and she now has a front-row seat for robotics. She's a Harvard-trained psychologist and sociologist and refers to herself as an "intimate ethnographer," looking at how people interact with their devices. "I'm fascinated by how technology changes the self," she says.

By the mid-90s, her largely enthusiastic explorations of online chat rooms and video games had landed her on the cover of Wired magazine, making her a celebrity among the geek set. Back then her main interest was how creating alter egos in virtual worlds helped people shape their identities, as captured in her seminal 1995 book, Life on the Screen (Simon & Schuster).

She still believes in those benefits. But in recent years, she has spent more time documenting the drawbacks and hazards of technology in daily life. Warnings fill her new book, Alone Together: Why We Expect More From Technology and Less From Each Other (Basic Books), out this month, which devotes roughly half of its pages to her studies of robots and the rest to information overload and the effects of social networks and other mainstream technologies. At points the prose seems designed to grab readers by the shoulders, shake them as if out of a dream, and shout: "Put down the BlackBerry—you're forgetting how to just be!"

"We talk about 'spending' hours on e-mail, but we, too, are being spent," Alone Together concludes. "We have invented inspiring and enhancing technologies, and yet we have allowed them to diminish us."

In Turkle's view, many of us are already a little too cozy with our machines—the smartphones and laptops we turn to for distraction and comfort so often that we can forget how to sit quietly with our own thoughts. In that way, she argues, science fiction has become reality: We are already cyborgs, reliant on digital devices in ways that many of us could not have imagined just a few years ago.

Perhaps it's not so far-fetched to think that walking, talking machines will soon come a-courting—and that many people might welcome their advances.

Turkle says her shift in attitude about the influence of digital technologies grew not just from personal experiences like those with Cog, but also from her field research—hundreds of interviews with children, teenagers, adults, and the elderly encountering the latest tech gadgets. Again and again, she saw how even a relatively clumsy robot dog or electronic baby doll could spark a deep emotional response.

In her home office, she cues up a videotape from her archive to demonstrate her research method. It's from 2001, showing a 9-year-old girl interacting with a robot named Kismet that was developed by researchers at MIT's artificial-intelligence laboratory (the same group that built Cog). Kismet looks as if it could be a prop in a science-fiction film—a metal face with large eyes, wide eyebrows, and a mouth that switches among expressions of surprise, delight, disgust, and other simulated emotions.

The girl holds up a yellow toy dinosaur and waves it in front of Kismet. She moves the toy to the right, then to the left, and the robot turns its head to follow. Turkle, who can be seen off to the side (with a shorter haircut and larger glasses than now), says she gave almost no guidance to the girl—the goal was to put robot and child together and see what would happen. "It's called a first-encounter study. I say, 'I want you to meet an interesting new thing.'"

As we watch, the girl tries to cover the robot with a cloth to dress it, and then tries to clip a microphone to the robot. Soon Kismet is saying the girl's name and other simple statements, and the girl experiments with other ways to communicate with this mix of steel, gears, and microchips.

Most of the kids in the study loved Kismet and described the robot as a friend that liked them back, despite careful explanations by Turkle's colleagues that it was simply programmed to respond to certain cues and was definitely not alive. The response appears to be a natural one, Turkle says. "We are hard-wired that if something meets extremely primitive standards, either eye contact or recognition or very primitive mutual signaling, to accept it as an Other because as animals that's how we're hard-wired—to recognize other creatures out there."
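To make the mechanism concrete: the "certain cues" her colleagues describe can be imagined as a short lookup table from stimulus to canned gesture. The Python sketch below is purely illustrative—the cue names and gestures are invented here, and Kismet's real control architecture was far more elaborate—but it suggests how little machinery is needed to trigger the primitive social response Turkle describes.

    # A minimal, hypothetical cue-response loop -- not Kismet's actual code.
    # Each detected cue maps to a canned social gesture; anything else
    # falls back to an idle behavior.

    CUES = {
        "face_detected": "make eye contact",
        "motion_left": "turn head left",
        "motion_right": "turn head right",
        "voice_heard": "tilt head and raise eyebrows",
    }

    def respond(cue: str) -> str:
        """Return the scripted gesture for a cue, or an idle default."""
        return CUES.get(cue, "blink idly")

    # A child waving a toy from side to side produces a stream of cues,
    # and the scripted responses read as attention and interest.
    for cue in ["motion_right", "motion_left", "voice_heard", "face_detected"]:
        print(cue, "->", respond(cue))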

Yes, children have long thought of their dolls as alive, or near enough. But Turkle argues that a crucial shift occurs when dolls are programmed so that they seem to have minds of their own. She devotes chapters of her new book to studies she did using popular robot toys in the late 1990s, including Tamagotchi, a hand-held digital pet with a small screen and buttons that asked kids to feed and nurture it at regular intervals, and Furbies, stuffed toys programmed to speak gibberish until they "learn" English from their owners (actually, they automatically spoke preprogrammed phrases after a given amount of time).
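Turkle's parenthetical about Furbies points to how thin that "learning" really was: a clock, not a learner. A hedged sketch of such a mechanism—the intervals and phrases below are invented for illustration, not the actual toy's schedule or vocabulary—might look like this:

    # Hypothetical illustration of timer-based "learning": phrases unlock
    # after fixed amounts of total play time, regardless of what the child
    # says or does. Thresholds and phrases are invented for illustration.

    PHRASES = [
        (0, "wee-tah-kah-loo-loo"),  # gibberish from the first minute
        (60, "me hungry"),           # "learned" after an hour of play
        (180, "me love you"),        # unlocked later still
    ]

    def phrase_for(minutes_played: int) -> str:
        """Return the most advanced phrase unlocked by total play time."""
        unlocked = [phrase for threshold, phrase in PHRASES
                    if minutes_played >= threshold]
        return unlocked[-1]

    for minutes in (5, 90, 200):
        print(minutes, "minutes:", phrase_for(minutes))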

One day during Turkle's study at MIT, Kismet malfunctioned. A 12-year-old subject named Estelle became convinced that the robot had clammed up because it didn't like her, and she became sullen and withdrew to load up on snacks provided by the researchers. The research team held an emergency meeting to discuss "the ethics of exposing a child to a sociable robot whose technical limitations make it seem uninterested in the child," as Turkle describes in Alone Together. "Can a broken robot break a child?" they asked. "We would not consider the ethics of having children play with a damaged copy of Microsoft Word or a torn Raggedy Ann doll. But sociable robots provoke enough emotion to make this ethical question feel very real."

Turkle, to be clear, praises the work of the team that engineered Kismet—and she defends the ethics of her project, which in many cases presented students with commercial toy robots rather than MIT prototypes. A leader of the Kismet project, Cynthia Breazeal, says she hopes the technology can be used to create tutors for distance education that are more engaging than educational software or games, or to create robot assistants that supplement rather than replace humans.

"There are advantages to it not being a person—robots can be seen as not judgmental; people are not at risk of losing face to a robot," Breazeal says. "People may be more honest and willing to disclose information to a robot that they might not want to tell their doctor for fear of sounding like a 'bad' patient. So robots working with other people can help the patient and the care staff."

During her research, Turkle visited several nursing homes where residents had been given robot dolls, including Paro, a seal-shaped stuffed animal programmed to purr and move when it is held or talked to. In many cases, the seniors bonded with the dolls and privately shared their life stories with them.

"There are at least two ways of reading these case studies," she writes. "You can see seniors chatting with robots, telling their stories, and feel positive. Or you can see people speaking to chimeras, showering affection into thin air, and feel that something is amiss."

Some robotics enthusiasts argue that these sociable machines will soon mature, and that new models may one day be judged as better than humans for many tasks. After all, robots don't suffer emotional breakdowns, oversleep, or commit crimes.

In his 2007 book, Love and Sex With Robots (Harper Perennial), David Levy, who is an expert in conversational computer software, argues that by the year 2050, some people will even choose to marry robots. By then, he says, many human couples will bring in robot baby sitters when they want to head to a holographic movie (or whatever the entertainment is by then).

"The concept of robots as baby sitters is, intellectually, one that ought to appeal to parents more than the idea of having a teenager or similarly inexperienced baby sitter responsible for the safety of their infants," he writes. "Their smoke-detection capabilities will be better than ours, and they will never be distracted for the brief moment it can take an infant to do itself some terrible damage or be snatched by a deranged stranger."

Levy says his book was inspired by Turkle's earlier work. It is dedicated to one of Turkle's research subjects, a young man named Anthony, who, in Levy's words, "tried having girlfriends but found that he preferred relationships with computers." Levy sent a copy of the book to Turkle, hoping she would pass it along to Anthony and believing that he would find it encouraging.

Turkle was not pleased. She expresses frustration with the notion that Anthony would be happier with a robot than with gaining the social skills necessary to connect with a human companion.

"David Levy is saying: For someone who is having trouble with the people world, I can build something. Let's give up on him. I have something where he will not need relationships, experiences, and conversations. So let's not worry for him. For a whole class of people, we don't have to worry about relationships, experiences, and conversations. We can just issue them something."

Turkle continues: "Who's going to say which class of people get issued something? Is it going to be the old people, the unattractive? The heavy-set people? Who's going to get the robots?"

Levy's response: "Who is going to get the robots is an ethical question, and I am no ethicist. What I am saying is that it is better for the 'outcasts' to be able to have a relationship with a robot than to have no relationship at all."

For Turkle, though, the most chilling moment in the Kismet study came when the robot was at its best: when a child left the MIT lab feeling that she had had a deep moment of connection. Kismet can't reciprocate friendship, after all, or prepare kids for the more dynamic experience of interacting with people.

"What if we get used to relationships that are made to measure?" Turkle asks. "Is that teaching us that relationships can be just the way we want them?" After all, if a robotic partner were to become annoying, we could just switch it off.

Turkle's new book, Alone Together, is a coming-of-age story, with several protagonists. Robots take the first turn, with a look at how they might develop from early prototypes. The second character is the Internet, which, during the roughly 15 years since Turkle wrote Life on the Screen, has gone from infancy to young adulthood, growing up so fast that those early years are all but forgotten.

Turkle's argument is that the "always on" culture of constant checking of e-mail messages and Facebook updates has appeared so quickly that we haven't yet developed ways to balance our networked and physical lives. "Because we grew up with the Net, we assume that the Net is grown-up," she writes, in what she says is her favorite line from the book.

We've reached a moment, she says, when we should make "corrections"—to develop social norms to help offset the feeling that we must check for messages even when that means ignoring the people around us.

"Today's young people have a special vulnerability: Although always connected, they feel deprived of attention," she writes. "Some, as children, were pushed on swings while their parents spoke on cellphones. Now these same parents do their e-mail at the dinner table." One 17-year-old boy even told her that at least a robot would remember everything he said, contrary to his father, who often tapped at a BlackBerry during conversations.

Several times during our interview, Turkle's computer beeped in the corner of the room, signaling a batch of new e-mails. "I'm getting something like 30 messages a minute," she said. "Trust me, 30 people a minute don't really need to be in touch with me." She laughed. "It's not good for them, it's not good for me."

She constructs her case with story after story from her field research: interviews with teenagers complaining about growing up "tethered" by cellphones and endlessly required to phone home; lawyers lamenting that their clients now want quick answers by BlackBerry rather than longer, more nuanced advice; college students so carefully constructing their Facebook profiles that one worried he might forget what was real and what was posturing.

Turkle says her earliest work on computers and networks may have been too optimistic. "In some ways, I look at this as a book of repentance. I do," she says. She always included "caveats," she emphasizes, even in her first book about technology, The Second Self: Computers and the Human Spirit (1984). But the issues she is concerned about now, such as privacy and how experimenting with identity online could cause people embarrassment later if everything is archived, simply weren't on her mind back then.

Now that they are, she hopes to take a more "activist stance" in provoking debate about technology.

"I'm contemplating a book that's more prescriptive," she tells me. "I'm interested in writing more on how to navigate this. Particu­larly in the areas of technology for children and the elderly."

The final major character of Turkle's new book is her daughter, Rebecca Willard, who just started college down the street, at Harvard. The book is dedicated to her, and it ends with an open letter—which amounts to advice to the members of her generation as well—urging them to be thoughtful about how they use technology.

Does Rebecca buy it? I ask.

"Why don't you text her and ask her yourself?" Turkle says with another laugh.

So I do, and a couple of hours later I'm sitting in a coffee shop near Harvard Yard with Willard. It seems appropriate that the only seats available are at a communal table, where everyone can hear our conversation. Apparently that bothers none of the students there.

"Whenever I bring this up with friends, people always go, 'Oh, I've thought that—it's like you're with people, but you're not completely connected with them,'" Willard says, recalling a time she met a friend for dinner and they both stopped to check their e-mail on their phones.

She kept her own iPhone out of reach during our talk, and she says she hopes her mother's book sparks a conversation about the appropriate uses of technology. "Because," she says flatly, "I don't like it when someone uses their phone while I'm talking to them."

Meanwhile, in years to come, if anyone tries to give her mother a robot caretaker, Willard has received strict instructions: Keep it away. Sherry Turkle would rather have the complete works of Jane Austen played continuously.

Jeffrey R. Young is a senior writer for The Chronicle.

Comments

1. hiteshswayam - January 15, 2011 at 02:22 am

Aiming For Function in the Human Living Space

2. bizdean - January 17, 2011 at 07:37 am

Just one of its paragraphs sums up this overly long article: "There are at least two ways of reading these case studies," she writes. "You can see seniors chatting with robots, telling their stories, and feel positive. Or you can see people speaking to chimeras, showering affection into thin air, and feel that something is amiss."

Many people derive emotional boosts from their live kitty cats. News flash: Your cat doesn't give a damn about you.

Nothing new here.

3. notexactly - January 17, 2011 at 09:40 am

Sometimes I have thought about this while looking at the mechanical toy dogs being sold to children, while my own dog is anxiously nuzzling me for attention. The significance of this, as of human interaction, is that it is not 'unconditional,' contrary to the pop-psychological claims. If I take poor care of the dog, or am cruel, the dog will want nothing to do with me; the very fact that the dog seeks out my company is a testimony to the person I am.

Even more so for human relationships. Other than parent and young child, human relationships are not automatic. The very fact that a relationship develops and persists actually says something about the parties involved: that they are contributing enough to the well-being of the other to maintain this bond. This provides a kind of feedback in that each partner tends to mold themselves to enhance the psychological benefit to the other, and the overall relationship provides significant psychological benefit all around.

That's where the relationship to the robot (at least at current tech levels) breaks down. Unlike a human, or even a dog, the robot gets nothing psychological out of the relationship. It can be infinitely forgiving, truly 'unconditional' in the evil sense of the word. There is no true emotional feedback. You are not benefitting the robot, there is nothing to feel good about, and other than some superficial reflexive reaction, the robot is not benefitting you.

4. flowney - January 17, 2011 at 11:08 am

As I read this I thought of Steven Spielberg's movie "AI." From another perspective, I wondered whether the research on simulation was relevant to the discussion. I think that perhaps it is and, so, would be interesting to pursue in this context.

5. dmoser5 - January 17, 2011 at 12:24 pm

@bizdean:

1. Do you have cats (or any pets)?
2. A living animal is not a robot.
3. You assume much in your statement about animal behavior that is (a) anthropocentric and (b) less than an accurate assessment of what we know about cats or other mammals.

I, for one, am glad to see this skepticism in Turkle's work...it was long overdue.

6. brian_lennon - January 17, 2011 at 12:59 pm

Sherry Turkle's early work never struck me as predominantly affirmative of life on the screen. The ethnographies always suggested a mix of salutary and non-salutary effects, and Turkle faced the latter squarely.

For those of us who lived online in email discussion groups, before the blog era, and then came to feel that it was being “always on,” itself, that prevented us from thinking through that experience as scholars (rather than as journalists), the newfound enthusiasm of literary and cultural studies for the digital humanities (just for example) is straggling behind the sociology of social media. One wonders what future ethnographies of academic labor will conclude about the reactive acceleration and miniaturization of thought in scholarly blogging and the “Twitterization” of scholarly conferences, once these emergents too have faded away.

Brian Lennon
Pennsylvania State University

7. nellcooper - January 17, 2011 at 01:38 pm

I am greatly disturbed by this acceptance that there is a specific way that we *should* interact with one another and that everyone must conform to that rigidly defined behavior. People do not behave that way; communities do not behave that way. A child's feelings were hurt because she felt someone, albeit a robot, didn't like her. All children discover that there are people who don't like them. It is a part of childhood and maturity. Rejection by another child, an animal, even an adult is part of growing up. And when people grow up they have, or at least should be allowed to have, different personalities and social abilities. These are the social skills we bring to our communities. As humans we develop our own societies and communities (or "tribes") based upon how we prefer to interact.

Neuro research is increasingly demonstrating that our brains rewire and reprogram throughout our lives and that there is no "right way" to think. We are all "wired" to a certain extent. Turkle's premise that technology is acceptable so long as it meets her needs and desires and remains within the pigeonhole she has assigned it is more disturbing than our use of non-human substitutes, whether pets or plastic, for emotional connection.

As has been pointed out in the current exhibition at the Science Fiction Museum and Hall of Fame, this very question of defining humanity versus machine was one of the core themes of the 2004 version of Battlestar Galactica.

And these differences in social abilities and skills are not restricted to humans. Perhaps Ms. Turkle should expand beyond her "comfort zone" and accept that others make different choices and that we as a species, and a society, are still evolving.

8. bizdean - January 17, 2011 at 02:26 pm

@dmoser5: You advance my point. If I had meant “cats and other mammals,” I would have said so. Dogs are a social species; cats are not. This is “what we know,” from observation of their behavior. It does not require mindreading or anthropocentric assumptions. Cats learn where to go for their dinner dish, or to get their fur stroked. This however is not social behavior. It does not involve being invested in the welfare of others.

I think the author and some commenters are defining themselves and their own self-worth in terms of others' reactions to them. As Nellcooper noted, we expect this of children. Adults should have grown out of it.

9. eelalien - January 17, 2011 at 02:37 pm

I first became interested in - and somewhat of a "fan" of - Sherry Turkle's work in 2002, when "Life on the Screen" was assigned in my doctoral program. What surprises me about this article, and about Turkle's research, is that there is little indication of current, ongoing studies on the long-term effects of human interaction with robots and computers in places like Japan, where the phenomenon has been prevalent for years now, or South Korea, where addiction to the Internet and/or video games is a known quantity. Surely some very revealing data must be available - or ought to be.

10. eelalien - January 17, 2011 at 02:41 pm

Here is one Japanese study from 2004:
http://www.irc.atr.jp/~kanda/pdf/kanda-interactive-robots-as-social-partner.pdf

11. tolerantly - January 17, 2011 at 06:15 pm

Yes. And while we're at it, Sherry, we should do all in our power to make sure that children understand their imaginary friends are *not real*.

Also, you left out the part where a robot runs for president.

12. sand6432 - January 17, 2011 at 06:21 pm

Some respectable thinkers, like Descartes, have considered animals to be merely complex machines, hence not unlike robots, and lacking consciousness that might give humans reason to treat them as moral agents. Other thinkers, like Peter Singer, accuse us of discrimination against animals because we don't recognize the common ground of morality as residing in the ability to suffer pain and pleasure. Presumably Singer would not argue for "robot rights" because robots lack sentience. It appears that Turkle's studies show, however, that this distinction between robots and animals is not so clear cut, at least for children and the elderly. In what sense are they "wrong"—morally, intellectually, emotionally? —Sandy Thatcher

13. bizdean - January 17, 2011 at 07:14 pm

Thank you, tolerantly, for bringing up the matter of children's imaginary playmates. Kids make up their own imaginary friends, and so must gain some emotional goodies thereby. It's probably healthy, in moderation. What difference if the imaginary playmate is instantiated in a robot body?

14. tolerantly - January 17, 2011 at 09:54 pm

@bizdean: You're welcome. It's all in Asimov anyway. If this article is representative, then I don't see that the arguments have progressed.

15. ldoll - January 18, 2011 at 07:54 am

I am amazed at the number of comments equating cats with robots. Cats are not aloof; they are just not dogs. They feel, think, react, and can be trained. There is, of course, cupboard love. But you cannot tell me that over the course of twenty years, I only imagined a relationship with my cat. To say so only does a grave disservice to the species, and condones abuse, since they don't "feel".
My only solace is that the individuals espousing this claptrap are unlikely to inflict themselves on cats.

16. redharmony - January 18, 2011 at 10:11 am

Several things are obvious here. The first is that bizdean doesn't know a damned thing about cats. The other is that tech encroachment on our lives is fashionable to worry about. Why is no one worried that children develop relationships with cartoon characters, which has been going on now for over seventy years, longer with comic strip characters? If people can develop a feeling for and a mental relationship with an image of a fictional character that can't respond to them, why is it more disturbing that they respond to a robot which can, however artificially? Perceived personality is what is in an imaginary friend, a cartoon or movie character, or a robot. It is a projection. It's creepy, yes, but why do we feel threatened by the robot and not the fictional character? Just a matter of being used to the one and not the other?
Also the creepy behavior of email and text and social networking is off-putting, but still a form of connection and communication with real people, although it may be rudely not about the person you're actually with. But manners and behavior evolve too. A generation from now there may be no one left who finds that distasteful.

17. sanworker - January 18, 2011 at 11:06 am

Turkle finds it...distasteful? disturbing? that Anthony has a hard time with other human beings and so prefers computers. Creating a robotic alternative for him, as David Levy suggests, is to "give up" on him. But who is Turkle to impose judgment on Anthony's affections? She wants him to "gain the necessary social skills" to be comfortable with people. And he'll get those skills how, precisely? from whom? What if he prefers solitude and robotics? Or...could the robot be trained to give him those skills? *That* would be interesting...

18. jsfriedman - January 18, 2011 at 01:59 pm

@sanworker Granted this is all a hypothetical/fictitious scenario, but does Anthony *really* prefer solitude and robotics, or is it just easier for him to interact with artificial intelligence (in the same sense that a cigarette smoker might not *really* enjoy cigarettes but really wants to quit)?

The problem with Turkle's article, I think, is that she sets up a false dichotomy between 'real' interactions and 'virtual' ones (insofar as she suggests that the virtual world is overtaking and replacing the real one, rather than co-existing with or supplementing it. Not that this isn't a possibility for some people, but the kid being babysat by a robot or interacting with virtual playmates is still, in all probable likelihood, interacting with parents and real people).

I think the major problem with Turkle's argument is that it's mostly a hypothetical scenario (a warning or prediction, maybe) supported by (at best) anecdotal evidence. Okay, granted that kids like their cellphones. Is this behavior actually leaking into and eroding their real-life social skills?

In short, where's the data at?

19. drj50 - January 18, 2011 at 02:21 pm

"In many cases, the seniors bonded with the dolls and privately shared their life stories with them." I admit I find this a bit troubling, but (despite the posts above) wonder why it is fundamentally different from telling your troubles to a pet (as many do, from children to seniors) -- dog or cat. Yes, dogs and (some) cats sense human moods and apparently seek to console, but no animal understands the slightest of what we are telling them about our life stories or current challenges. They don't listen because they care what we're going through -- any more than does a robot. (Yes, there is more to having a pet than telling it one's stories. There is a degree of interaction, responsibility, etc. that is good for us. I'm only addressing the question of turning to them for emotional support.)

20. eaortiz - January 18, 2011 at 04:12 pm

I wonder, though, about reacting too stridently here. What if robots can help individuals with shyness issues or other phobias, creating an entry-way for interactions in the real world?

Also, is not the aggregation of skin, bone, muscle, and other viscera akin to the gears, wires, and chips of a robot? In both there is a ghost spirit in the machine. Who is to say one is not like the other? Seems like an evolution to me.

21. ellenhunt - January 18, 2011 at 04:29 pm

Ms. Turkle, we are ALL ethicists.

I think the people who can't see the difference between a virtual relationship and a real one are, well, barmy. That in itself is a sign of serious dysfunction.

22. austinbarry - January 18, 2011 at 10:09 pm

I wonder how many seniors share their life stories with a robot because nobody/nothing else will listen attentively, and a robot seems more animate than (for example) a volleyball.

23. mariadroujkova - January 19, 2011 at 04:07 pm

People anthropomorphize anything, from "dear diary" to pet rocks. More recently, people interact with a lot of entities in computer games where they can form attachments. I loved my Companion Cube in Portal - that was one of the points of the game. I love my hunter's pet in World of Warcraft. It saved my life... erm, my hunter's life, too many times! Good old roleplay.

24. 22118130 - January 19, 2011 at 04:45 pm

This brings up many more questions about human interactions through electronic media. I remember hearing a story about a professor who caught a student texting in class and called her down about it. The student responded to the professor with, "What? Are you antisocial?" Text messaging is considered a social medium. It is also an electronic medium, however, raising the question of whether it is really social interaction. It is certainly not the same as face-to-face interaction with another human being, which is why some of us old fogies probably view it as not the same as social interaction. Some might say it is the same as a telephone conversation, since that is also conveyed by an electronic medium. In the case of texting, however, the communication is disembodied, making it one more step removed from social interaction. One might distinguish texting from interaction with a robot by saying that it involves one human communicating with another. Even a robot, however, is programmed by a human, so interaction with the robot to some extent is communicating with the programmer. I have heard of those who send and receive hundreds of texts a day, to the point where they are worn out by the constant "social interaction". The rest of us, however, would view it as something less than real social interaction. As Sherry Turkle might say, we are so mixed up with machines nowadays, sometimes it is hard to say where the human ends and the machine begins. As we go forward, I guess we'll sort through all of this, even as the technology evolves. Marshall McLuhan may have had the first word on the subject, but it will certainly not be the last.

25. vivamongo - January 19, 2011 at 08:14 pm

@ bizdean "Dogs are a social species; cats are not. This is “what we know,” from observation of their behavior."

Au contraire - that may have been accepted as knowledge a few years back but more recent studies point towards cats being social creatures which can clearly be observed in wild settings. I've seen film of cat 'colonies' where there is plenty of social interaction to observe between cats.

So the 'cats are not social creatures' stance is now 'old school'!

Not exactly strict 'research' but some good observations here:
http://www.messybeast.com/soc_cat.htm


26. dkardos - January 23, 2011 at 04:10 pm

"A 12-year-old subject named Estelle became convinced that the robot had clammed up because it didn't like her... The research team held an emergency meeting to discuss the ethics of exposing a child to a sociable robot whose technical limitations make it seem uninterested in the child."
It is not unethical to allow a robot to hurt the feelings of a child. In fact, ethics demand it.
Children must be taught that robots do not have ANY emotions, let alone the ones they appear to have.
Holding an "emergency meeting" about the boo-hoo feelings of a child - boo-hoo feelings that taught her an important lesson - strikes me as patently ridiculous, and emblematic of a culture that strives to protect children at all costs - including the cost of their growth.

27. jansand - January 24, 2011 at 06:16 am

I have had interpersonal relationships with dogs, cats, seagulls, a sparrow, a swift, a rabbit, a muskrat, rats and mice and have had definite indications of actual responses. My relationships with people are frequently rather doubtful and I have had indications from them at times that they were wholly unresponsive. I had an interaction with a psychologist which indicated he was unresponsive.
I have never encountered a robot but if I meet one designed after Marilyn Monroe there might be a few unresponsive interactions I could consciously ignore.

28. ellenhunt - January 24, 2011 at 01:25 pm

Spoken like a man caught in uncommunicative quicksand, jansand. I hope you at least wonder why you are ah-lone.

29. berkeleydude - January 24, 2011 at 02:16 pm

http://www.milkandcookies.com/link/48647/detail/

Futurama had captured one of the essential messages a while ago: "Don't Date Robots"

30. edugreaser - January 24, 2011 at 04:59 pm

Folks will even bond with non-animate objects like a volleyball, if it helps them survive isolation. Remember the tumultuous "relationship" between "Wilson" the ball and the character played by Tom Hanks in Cast Away? That said, while I don't agree with all the concerns in this article, I think it is healthy to see folks reflect and question themselves, not blindly adopt everything.

31. jansand - January 24, 2011 at 09:51 pm

I do not wonder why I am alone, I revel in it. I am 85 years old and I do not find old ladies attractive and I find animals much more responsive than people and a great deal more basically rational than the average human. The unmitigated and frightfully horrifying and psychotic chaos of human society throughout the world and most obviously in the USA is totally repellent to me. The world is still a marvelous and fascinating place and many intelligent and perceptive people are doing interesting work. I read about them continuously but have never had any personal contact with them.
