
Total Recoil: The Uncanny Valley Is an Uncanny Cliff

This is part two of a series on what uncanny and the uncanny valley mean and how to accurately use them to describe experiences (in VR or otherwise).

TL;DR

  • The uncanny valley is specifically about the relationship between human-likeness and likability.
  • Every act of perception involves an act of categorization

  • Science shows that toy non-humanoid robots (think WALL-E) are preferred to human-like ones (and actual humans!)

Background

1919 – Sigmund Freud published an essay called “The Uncanny” (translated from the German “Das Unheimliche”), defining the term to mean weird/eerie/unfamiliar.

1970 – Valley of Eeriness (translated from the Japanese bukimi no tani) was coined by robotics professor Masahiro Mori to model affinity for androids as they become more humanlike.

1978 – “Uncanny Valley” first appeared in English in a book by Jasia Reichardt called “Robots: Fact, Fiction, and Prediction.”

2007 – “Uncanny Cliff” was introduced by Bartneck and colleagues to more accurately reflect the shape of the curve.


What Is the Uncanny Valley?

The uncanny valley is a specific feeling of weirdness or fear evoked by things that are nearly, but not quite, human. It is elicited when a robot or AI has some human likeness but isn’t human. Longtime art director and VR/AR insider Spencer Lindsay uses “creepy corpse” in his definition of the uncanny valley.

You can see the valley in the graph below:

Image credit here

Affinity for a robot is assumed to increase the more humanlike it becomes, until it crosses into the valley (the grey area). Lindsay’s creepy corpse would sit at the nadir of the valley, alongside zombies!

It is likely that the uncanny valley response is evolutionarily adaptive. Studies with rhesus monkeys show a similar pattern of likability: monkeys look longer at the unrealistic (left) and real (right) faces below than at the realistic one in the center.

Monkey visual behavior falls into the uncanny valley

The Uncanny Valley Confuses Categories.

Categorization is a fundamental psychological process that happens automatically. The very act of perception involves an act of categorization: you haven’t finished perceiving something until you have categorized it and matched it against other things you know about. That is, “this object is the same as these things I know and different from those other things.” We judge new things by their similarity to previous things, and a human-like puppet easily confuses that judgment process.

“It’s the process that grinds away constantly and generates much of our understanding and response to the world,” says Berkeley psychologist Dacher Keltner.  “First of all, it’s how do you categorize things? And that’s everything. Do I sleep with him or not? Is that a boy or a girl? Is that predator or prey? If you solve how this process works, then you solve how you know things. It’s how knowledge about the world is organized. It’s like the thread that is woven through everything in the mind.”

"Do I sleep with him or not? Is that a boy or a girl? Is that predator or prey? If you solve how this process works, then you solve how you know things."

Encountering a creepy android from the trough of the uncanny valley leads to conflicting perceptual cues. Is it human or not? It interferes with the automatic, System 1 processing that we rely on to get through our day.
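
To make this concrete, here is a minimal, purely illustrative sketch (in Python) of categorization as nearest-prototype matching: a new percept is assigned to whichever stored category it most resembles, and when two categories score almost equally, as with an almost-human face, the decision margin collapses. The feature names and numbers below are hypothetical and chosen only for illustration, not drawn from any of the studies discussed here.

# Illustrative sketch (hypothetical features and numbers): categorization
# as nearest-prototype matching. A percept is assigned to the closest stored
# category; when two categories score nearly the same, the margin collapses,
# which is the kind of conflict an almost-human face creates.

def similarity(a, b):
    # Higher is more similar: negative squared distance between feature vectors.
    return -sum((x - y) ** 2 for x, y in zip(a, b))

# Hypothetical category prototypes, e.g. [skin_realism, motion_realism, proportion]
prototypes = {
    "toy robot": [0.1, 0.2, 0.3],
    "human":     [0.9, 0.9, 0.9],
}

def categorize(percept):
    scores = {label: similarity(percept, proto) for label, proto in prototypes.items()}
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    margin = ranked[0][1] - ranked[1][1]  # small margin = conflicting perceptual cues
    return ranked[0][0], margin

print(categorize([0.15, 0.20, 0.25]))  # clearly a toy robot: large margin
print(categorize([0.60, 0.50, 0.55]))  # almost human: tiny margin, ambiguous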

The Uncanny Valley Is an Uncanny Cliff.

Since Mori first theorized it, empirical studies have mapped the uncanny valley. In one such study, researchers showed participants 11 images of an object gradually morphing into a human, and likability dropped midway through the sequence.

It’s actually more like a cliff, because likability doesn’t fully recover:

Image source here

"The uncanny valley appears to be more of a cliff than a valley since even pictures of humans do not reach the level [of likeability] of pictures of toy robots. It has to be acknowledged that there is a small upwards trend again towards highly human-like entities, which results in a small valley. However, the most dominant feature in the graph is not the valley, but the cliff preceding it."  – Bartneck et al.

Takeaways for designers:

  • Avoid the uncanny valley altogether by populating your experience with non-humanoid avatars. People prefer non-humanoids to humanoids.
  • The uncanny valley isn’t really a valley; it’s more like a cliff. Once an avatar gets too humanoid, liking decreases dramatically and never fully recovers.

 

Additional info

If you are interested in the uncanny valley, you’ll like Kimberley Voll’s interview on the fidelity contract. Listen to her full interview with Kent Bye of Voices of VR here.

Bartneck, C., Kanda, T., Ishiguro, H., & Hagita, N. (2007). Is the Uncanny Valley an Uncanny Cliff? Proceedings of the 16th IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN 2007, Jeju, Korea, pp. 368–373.

H/T to @spencerlindsay for letting me quote him here. 

The Dark Side of Empathy

A viewer watches the process of factory farming as part of the 360 VR documentary series iAnimal. Photo Credit: Laika Magazine

Empathy is a real buzzword in the world of VR. VR filmmaker Chris Milk has said he aspires to build the ultimate empathy machine. The purpose of the iAnimal series seems to be to raise people’s empathy for animals in order to activate their moral reasoning and convince them that animals deserve better treatment.

However, while empathy has a role in our lives, I don’t think it should be the end goal of any VR experience, because empathy does not necessarily lead to the fair treatment of others. One study on empathy for blind people woke me up to this.  Adam Waytz summarizes in HBR:

Participants were asked how capable they thought blind people were of working and living independently. But before answering the question, some were asked to complete difficult physical tasks while wearing a blindfold. Those who had done the blindness simulation judged blind people to be much less capable. That’s because the exercise led them to ask "what would it be like if I were blind?" (the answer: very difficult!) rather than "what is it like for a blind person to be blind?" 

People are so egocentric that even empathy tasks get reframed into a first-person perspective. Humans are not terribly good at predicting how others feel. Yale psychology professor Paul Bloom wrote an entire book called Against Empathy that advocates using reason rather than empathy.

Consider learning about a ten-year-old named Sheri Summers who had a fatal disease and was waiting in line for treatment that would relieve her pain. Research participants were told that they could move her to the front of the line. When simply asked what to do, they acknowledged that she had to wait because other needy children were ahead of her.  But if they were asked to imagine what Sheri felt, they tended to choose to move her up, putting her ahead of children who were presumably more deserving.  Here, empathy was more powerful than fairness. – Against Empathy, p. 25

While there is certainly a role for empathy and compassion in life, it can sometimes narrowly focus us on the wrong details. And empathy does not appear to be a reliable way to activate people’s moral reasoning. 

Another example of the underbelly of empathy is its ability to give people a reason or motive to be hurtful toward others. In a study where people were primed to empathize with someone (Person A), the research participants assigned Person A’s competitor (Person B) to eat more hot sauce as a punishment. It appears that an intervention that increased empathy toward Person A also increased aggression toward Person B.

Rather than relying on empathy to guide fairness, ask people to make judgments based on logic and reasoning.  Gather data by actually speaking with people about their experience, rather than just imagining how they feel. 

Lastly, here’s a link to Paul Bloom speaking on how empathy blinds us to the long-term consequences of our decisions.

 
Additional reading:

Does Empathy Guide or Hinder Moral Action?  The New York Times.  December 29, 2016.  http://www.nytimes.com/roomfordebate/2016/12/29/does-empathy-guide-or-hinder-moral-action.

Waytz, Adam. The Limits of Empathy. Harvard Business Review, January–February 2016. https://hbr.org/2016/01/the-limits-of-empathy.

Bloom, P. (2017). Against Empathy. Bodley Head Limited.

Shamoon, Evan. Virtual Immersion into the Reality of Farm Animals. Laika Magazine, February 24, 2016. http://www.laikamagazine.com/reality-animal-suffering/.