How Social Placebos Boost Performance in VR

Stephen Curry, Andre Iguodala, and Kevin Durant doing a trust exercise – Ezra Shaw/Getty Images

Are you interested in boosting player performance? Giving people the right social environment will help them achieve more.

NBA players who touch each other a lot during games (fist bumps, high-fives, head grabs) cooperate more and outperform their prickly counterparts on other teams. Consider the effect of mimicking those high-fives, hugs, and team huddles in your VR experience – people will feel a heightened sense of trust and liking for others.

I’m interested in how small things influence actions and decisions. Why would a small social gesture like a high-five help a professional athlete perform better? The stakes are extremely high for them, so you might imagine they are already maxed out on motivation to win.

Like most things in life, there’s an evolutionary explanation. People who belonged to a strong tribe knew they could take more risks: if an endeavor went badly, there were others who would care for them. Small gestures like fist bumps signal those strong social ties.

“I’m not the guy who’s afraid of failure. I like to take risks, take the big shot and all that.” 
    – Steph Curry

Consider how social placebos would change a VR game like Surgeon Simulator: Meet the Medic by Bossa Studios. You are a surgeon in the game and have to perform tasks like heart transplants to save the patient. This is a gaming experience where having people around could boost a player’s performance. Having another person watching you would make you move faster.*


What are the limitations of the social placebo?

Having an audience when doing a complicated task for the first time could sabotage performance. But if it’s a straightforward action that doesn’t require any particular skill, having supporters would likely help. It can even be a complex activity, as long as the user has already rehearsed.

Also, the encouragement should probably come from the person’s in-group.
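
To make those rules of thumb concrete, here is a minimal sketch (in TypeScript) of how an experience might decide when to fade in AI spectators. The Task shape and the shouldShowAudience function are hypothetical names for illustration, not an API from any engine:

// Hypothetical decision rule for when to show an AI audience, following
// the limitations above: audiences help simple or well-rehearsed tasks
// and can hurt complex, unrehearsed ones. Spectators should also read
// as the player's in-group (teammates, not strangers).
interface Task {
  complexity: "simple" | "complex";
  rehearsals: number; // how many times the user has done this task before
}

function shouldShowAudience(task: Task): boolean {
  if (task.complexity === "simple") return true; // supporters likely help
  return task.rehearsals > 0; // complex tasks: only after rehearsal
}

// A first-time "heart transplant" is attempted alone; once the user has
// rehearsed, spectators are faded in.
console.log(shouldShowAudience({ complexity: "complex", rehearsals: 0 })); // false
console.log(shouldShowAudience({ complexity: "complex", rehearsals: 3 })); // true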

Kobe Bryant congratulating Steph Curry on a trifecta

Does the social placebo work when you are surrounded by AI avatars instead of human avatars? 

Most likely. It might not last as long or be as effective as being around humans you know well and like, but a high five from an AI is likely better than no high five at all.

How many viewers are optimal?

It really depends on your goal. One person might be enough. Building a stadium of AI spectators might be overkill, but athletes do get a buzz from those national anthems, pre-game rituals, and cheering fans.

Takeaway for VR designers:

  • Usage will increase if you build in social placebos. People will perform at a higher level and have more fun.

*Human runners go faster when they are under observation than when they are solo. The same effect appears in cockroaches: the pests run faster when other cockroaches are watching them.

 

The Neuroscience of Gestures

“How can you tell what these people are talking about?”

I’d like to persuade you that gestures are a fundamental building block of human language and thought. This begins a series of blog posts on gestures and how physical movement in VR & AR affects cognition.

Part one of this series will deal with why gestures provide a shortcut to human thought. 

But first, on the tech front:
Devices to capture small hand gestures are already available (like the Microsoft HoloLens) and more are underway. Project Soli at Google uses radar to track micro-motions and twitches: the radar senses how users move their hands and interprets their intent. Google has posted a full Project Soli demo video online.
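
Here is a sketch of what consuming such micro-gesture events might look like in TypeScript. The GestureEvent shape and GestureStream class are assumptions for illustration, not the actual Soli SDK:

// Hypothetical micro-gesture event stream; not the real Project Soli API.
type MicroGesture = "pinch" | "swipe" | "dial-turn";

interface GestureEvent {
  gesture: MicroGesture;
  confidence: number; // 0..1 - radar classifiers are probabilistic
}

class GestureStream {
  private handlers: Array<(e: GestureEvent) => void> = [];
  onGesture(h: (e: GestureEvent) => void) { this.handlers.push(h); }
  emit(e: GestureEvent) {
    // Drop low-confidence detections so stray twitches don't trigger actions.
    if (e.confidence < 0.8) return;
    this.handlers.forEach(h => h(e));
  }
}

const stream = new GestureStream();
stream.onGesture(e => console.log(`recognized: ${e.gesture}`));
stream.emit({ gesture: "dial-turn", confidence: 0.93 }); // recognized: dial-turn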

Why are gestures powerful shortcuts to cognition?

I’m reposting an article from Scientific American that answers the question "Why is talking with gestures so much easier than trying to talk without gesturing?" Psychology professor Michael P. Kaschak responds:

A person in a fit of rage may have trouble verbalizing thoughts and feelings, but his or her tightly clenched fists will get the message across just fine.

Gesturing is a ubiquitous accompaniment to speech. It conveys information that may be difficult to articulate otherwise. Speaking without gesturing is less intuitive and requires more thought. Without the ability to gesture, information that a simple movement could have easily conveyed needs to be translated into a more complex string of words. For instance, pointing to keys on the table and saying, ‘The keys are there,’ is much faster and simpler than uttering, ‘Your keys are right behind you on the countertop, next to the book.’

The link between speech and gesture appears to have a neurological basis. In 2007 Jeremy Skipper, a developmental psychobiologist at Cornell University, used fMRI to show that when comprehending speech, Broca’s area (the part of the cortex associated with both speech production and language and gesture comprehension) appears to ‘talk’ to other brain regions less when the speech is accompanied by gesture. When gesture is present, Broca’s area has an easier time processing the content of speech and therefore may not need to draw on other brain regions to understand what is being expressed. Such observations illustrate the close link between speech and gesture.

Takeaways for VR/AR Designers:

  • People process information more deeply when they are gesturing
  • Broca’s area processes speech more easily when gestures accompany it
  • The tech exists for picking up human micro-gestures

How to Use Gestures to Learn Faster

Gestures make it easier to learn. When people speak and gesture at the same time, they process information better. From New York Magazine:

"University of Chicago psychologist Susan Goldin-Meadow and her colleagues have found that when toddlers point at objects, they’re more likely to learn the names for things; that for adults, gesturing as you try to memorize a string of numbers prompts better recall; and that when grade-schoolers gesture, they’re better at generalizing math principles.

The authors found that the students in both gesture conditions were more likely to succeed on follow-up generalization problems, which required understanding the underlying principle beneath the first problem and applying it in novel situations. It’s a case study in how gesture 'allows you a space for abstraction,' Goldin-Meadow says. 'You’re not as tied to the particulars of an item, of a problem, a word, or an experience.' You’re not just talking with your hands, in other words; you think with them, too.

Researchers haven’t yet pinned down exactly how this connection works, but Goldin-Meadow believes part of it is that gestures reduce what psychologists call 'cognitive load,' or the amount of mental energy you’re expending to keep things in your working memory."


Gestures are a good illustration of how humans think with more than just our brains.  The brain can process more information with gestures than without them, which makes them pretty fundamental to human capabilities. 


Takeaways:

  • Users’ hand movements inside a digital experience have cognitive consequences
  • Giving users alternative, embodied ways to learn information will help them retain concepts
  • Gestures are effective because they allow working memory to offload effort

 

Inuit "Snow" is the Future of VR

In 1978, Roz Chast published her first cartoon in The New Yorker, “Little Things.” It featured imaginary widgets with nonsense labels. Chast’s debut is a satire of the arbitrary, nonsensical nature of the words we assign to novelty.

In the world of VR, people are creating radically new experiences and have the opportunity to name and label them, which has consequences for cognition. What starts out as unfamiliar will become familiar. Newly coined terms have become essential tools for thriving in new contexts. Consider the use of “mouse”, “dongle”, “spam”, “ping”, “meme”, “hashtag”, and “lulz” online.

Smart labeling is one of the most important (and underestimated!) aspects of designing a successful VR/AR experience.  I would argue that if you’re designing VR/AR experiences, failure to effectively label could imperil your whole project.

Access to language and labels changes cognitive processing. In VR there is the opportunity to create whole worlds with novel objects and new labels. Designers should be mindful of the cognitive effort that this creates for people inside their experience. Given access to language and labels, people can go through an experience more quickly and easily. Consider two examples from the world of VR/AR.

The Microsoft HoloLens teaches people the "Bloom" hand gesture. Bloom is a special system gesture used to go back to the Start Menu. It's a common word that most English speakers know, which makes it easy to remember as a navigation tool. In contrast, Aperture Robot Repair (an experience made by Valve for the HTC Vive) gives very generic instructions for placing your controllers in a certain area to charge them, and it can take some users (like me!) a long time to figure out what they're actually supposed to do to get to the next part of the experience.

 

MS HoloLens: To do the bloom gesture, hold out your hand, palm up, with your fingertips together. Then open your hand.

Aperture Robot Repair gives you vague instructions: "Charge your multi-tools at the charging station."

It's not that HoloLens is good or Aperture Robot Repair is bad; it's just a different experience for people. When designing for new mediums, consider how much information and guidance users should get inside the experience. Perhaps developing novel objects such as the “Chent” or the “Spak” is the right choice for your VR experience, but it will slow down your users and cause more effort, especially if you only give them one path to learning (vision) instead of multiple pathways in the brain (language and vision). Let’s consider the right amount of information to help people learn and track information inside a digital experience.
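
To make the bloom example concrete, here is a toy recognizer (in TypeScript) that follows the caption's description: fingertips together, palm up, then open. The joint format and distance thresholds are assumptions for illustration, not the HoloLens API:

// Toy "bloom" recognizer: fingertips start pinched together, palm up,
// then spread apart. Real hand-tracking APIs expose richer joint data.
type Vec3 = [number, number, number];

// Mean distance of each fingertip from the fingertips' centroid, in meters.
function spread(fingertips: Vec3[]): number {
  const n = fingertips.length;
  const sum = fingertips.reduce(
    (acc, p) => [acc[0] + p[0], acc[1] + p[1], acc[2] + p[2]] as Vec3,
    [0, 0, 0] as Vec3
  );
  const centroid: Vec3 = [sum[0] / n, sum[1] / n, sum[2] / n];
  const dist = (p: Vec3) =>
    Math.hypot(p[0] - centroid[0], p[1] - centroid[1], p[2] - centroid[2]);
  return fingertips.reduce((s, p) => s + dist(p), 0) / n;
}

// Bloom = the hand was nearly closed, palm up, and is now clearly open.
function isBloom(before: Vec3[], after: Vec3[], palmUp: boolean): boolean {
  return palmUp && spread(before) < 0.02 && spread(after) > 0.06;
}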

ACCESS TO LABELS SPEEDS PROCESSING

Language speeds cognitive processing and reaction times. That means if you want to introduce new objects, make access to language and labels easy. I’m saying “access” to labels because designers don’t have to specifically label a red, round fruit with the word “apple.” Instead, they can use objects that are easy for people to label with their own mental resources. The following excerpts are from Drunk Tank Pink:

"The notion of that labels change how we see the world predates the blue-matching experiment by almost eighty years.  In the 1930s, Benjamin Whorf argued that words shape how we see objects, people, and places.  According to one apocryphal tale, the Inuit people of the Arctic discern dozens of types of snow because they have a different words for each type.  In contrast, the rest of the world has perhaps several words - like snow, slush, sleet, and ice.  The story isn’t true (the Inuit describe snow with roughly the same number of words as [non-Inuit] do), but it paints a compelling picture: it’s much harder to convey what’s in front of you if you don’t have words to describe it.  Young children illustrate this difficulty vividly as they acquire vocabulary - once they learn to call one four-legged creature with a tail a “dog,” every four-legged creative with a tail is a dog.  Until they learn otherwise, cats and ponies share the same features, so they seem just as doggish as real dogs.”

There was a clever experiment that tested this phenomenon.  Due to linguistic differences between English and Russian, cognitive scientists were able to parse how the ability to label a color with specificity affected people’s reaction time. 

“Colors and their labels are inextricably linked. Without labels, we’re unable to categorize colors - to distinguish between ivory, beige, wheat, and eggshell, and to recognize that broccoli heads and stalks are both green despite differing in tone. To show the importance of color labels, in the mid-2000s a team of psychologists capitalized on a difference between color terms in the English and Russian languages. In English, we use the word blue to describe both dark and light blues, encompassing shades from pale sky blue to deep navy blue. In contrast, Russians use two different words: goluboy (lighter blue) and siniy (darker blue).

The researchers asked English-speaking and Russian-speaking students to decide which of two blue squares matched a third blue target square on a computer screen. The students performed the same task many times. Sometimes both squares were light blue, sometimes both were dark blue, and sometimes one was light blue and the other dark blue. When both fell on the same side of the blue spectrum - either light or dark - the English and Russian students were equally quick to determine which square matched the color of the target square. But the results were quite different when one of the colors was lighter blue (goluboy to the Russian students) and the other was darker blue (siniy). On those trials, the Russian students were much quicker to decide which square matched the color of the target square."

While the English students probably looked at the target blue square and decided that it was “sort of lightish blue” or “sort of darkish blue,” their labels were never more precise than that. They were forced to decide which of the other blue squares matched that vague description. The Russian students were at a distinct advantage: they looked at the square and decided that it was either goluboy or siniy. Then all they had to do was look at the other squares and decide which one shared the label. Imagine how much easier the task would have been for the English students if they had been looking at one blue square and one green square; as soon as they determined whether the target square was blue or green, the task was trivially easy. In fact, an experiment published one year later showed that Russian students perceive dark blue to be just as different from light blue as green is from blue for English students. When Russian students located a dark blue square within an array of light blue squares, the visual areas of their brains lit up to signal that they had perceived the odd square.

The same brain areas were much less active when English students looked at the same array of squares - except when the odd square was green within an array of blue squares. When the colors had different labels for the English students, their brains responded like the brains of the Russian students.

 

In comparison with hard-to-name colors, perceptual discrimination of easy-to-name colors elicited stronger activation in the posterior portion of the left superior temporal gyrus, left inferior parietal lobule, left precuneus, and left postcentral gyrus. No regions showed stronger activity for the discrimination of the hard-to-name colors.

We also know that the Russian students relied on these category names, because their advantage over the English students disappeared altogether when they were asked to remember a string of numbers while performing the color discrimination task. Since their resources for processing language were already occupied with repeating the number string, they weren’t able to rehearse the names of the colors. Without the aid of linguistic labels, they were forced to process the colors just like the English-speaking students. This elegant experiment shows that color labels shape how people see the world of color. The Russian and English students had the same mental architecture - the same ability to perceive and process the colors in front of them - but the Russians had the distinct advantage of two labels where the English students had just one. This example is striking because it shows that even our perception of basic properties of the world, like color, is malleable in the hands of labels.

Interestingly, the researchers didn’t have to actually label the squares with words in order for people to activate the language centers of their brain. And when they put people under cognitive load by asking them to remember a string of numbers, the Russian speakers could not access the linguistic labels, and their performance decreased to the same baseline as the English speakers.
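
As a toy model of that logic (in TypeScript, with made-up hue values), here is how a label acts as a categorical shortcut and how verbal load removes it. Nothing below comes from the actual studies; it only illustrates the reasoning:

// Toy model of the Russian-blues logic: a label turns a slow perceptual
// comparison into a fast categorical one. Hues: 0 = lightest blue, 1 = darkest.
type Hue = number;

// Russian splits blue into two categories; English has a single label.
const russianLabel = (h: Hue) => (h < 0.5 ? "goluboy" : "siniy");
const englishLabel = (_h: Hue) => "blue";

function pickMatch(
  target: Hue, a: Hue, b: Hue,
  label: (h: Hue) => string,
  underVerbalLoad: boolean // e.g., rehearsing a number string
): "a" | "b" {
  if (!underVerbalLoad && label(a) !== label(b)) {
    // Categorical shortcut: just compare labels.
    return label(target) === label(a) ? "a" : "b";
  }
  // Perceptual fallback: compare raw hue differences (slower in practice).
  return Math.abs(target - a) <= Math.abs(target - b) ? "a" : "b";
}

// Cross-category trial: Russian speakers get the label shortcut; English
// speakers, or Russians under verbal load, fall back to perception.
console.log(pickMatch(0.45, 0.4, 0.6, russianLabel, false)); // "a" via the label shortcut
console.log(pickMatch(0.45, 0.4, 0.6, englishLabel, false)); // "a" via perceptual fallback
console.log(pickMatch(0.45, 0.4, 0.6, russianLabel, true));  // "a" - load removes the shortcut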

Failure to use language and labels effectively can sabotage an experience in VR/AR. Try working backwards from the experience that you want your user to have, and consider what their level of knowledge will be when they arrive at your experience.

People use language as part of perception.  Language affects patterns of brain activation.  In my next post, I'm going to discuss language metaphors because they are one of the most important tools of knowledge acquisition that humans possess! All VR/AR experience designers should command metaphors to immerse people in an experience.  

 

Further Reading

Roz Chast has published over 1200 cartoons in the New Yorker since 1978.

Alter, A. (2013). Drunk tank pink: And other unexpected forces that shape how we think, feel, and behave. Penguin. Pages 27-29.  

Winawer, J., Witthoft, N., Frank, M. C., Wu, L., Wade, A. R., & Boroditsky, L. (2007). Russian blues reveal effects of language on color discrimination. Proceedings of the National Academy of Sciences, 104(19), 7780-7785.

Tan, L. H., Chan, A. H., Kay, P., Khong, P. L., Yip, L. K., & Luke, K. K. (2008). Language affects patterns of brain activation associated with perceptual decision. Proceedings of the National Academy of Sciences, 105(10), 4004-4009.