
The Unbearable Automaticity of Ghostbusters VR

Hello! A quick announcement: I'm giving my first talk in VR this Friday. The event will be in High Fidelity at 2pm Pacific.

The Behavioral Science of High Fidelity: I will be speaking on how open source changes people's psychology, how social norms get established, and why physical movement increases shared presence in VR. Bring your questions about the social science of VR to the Zaru Theater June 2nd at 2pm Pacific.

You can connect using VR or your PC. Download High Fidelity here:

The wax statues were over the uncanny cliff.


I was in NYC to give a talk about the behavioral science of immersive technology. While I was there, I went to The Void’s Ghostbusters Dimension Hyper-Reality and a production of Sleep No More. In my next blog post, I’ll write about Sleep No More. First up is Ghostbusters (unrelated to the PSVR game). Spoilers!

High-end VR for the general consumer

The Void’s Ghostbusters Dimension is a VR experience offered inside of Madame Tussaud’s Wax Museum, which is located in the Times Square area of New York City. If you haven’t been to Times Square before, this should give you an idea of what to expect around there.

I should have known given the hyper-touristy nature of Times Square that The Void’s Ghostbusters wasn’t for me. Anything designed for tourists is not going to appeal to someone with specialized knowledge. The Void has made a terrific VR experience for the general consumer, but as someone who has logged hours on a Vive, I was underwhelmed.

It felt like a cinematic experience. The art and animation were outstanding. However, it wasn’t fun, especially when you compare it to something like Epic Games’ Robo Recall.

Here’s The Void’s promotional video. You can get a sense of the art and animation.

Want more evidence that Ghostbusters is built for a general audience? Look at whose testimonials they promote:

Time, BBC, Wired, FastCompany, Forbes, Tech Insider, and Popular Mechanics.

Have you ever looked for an entertainment recommendation from Popular Mechanics?? That’s where people go to compare reliability ratings between Hondas and Toyotas.

No risk and no fun

Ghostbusters felt like a cinematic storytelling experience only. I felt like I had zero autonomy or agency in the experience. There was no potential to fail. Risk is part of what makes games feel fun. Ghostbusters has scrubbed out all threats and automated the experience.

It seemed like there was no interactivity between me and the ghosts that I was shooting.

  • My avatar was never damaged by the projectiles that the ghosts threw at me.

  • It felt like the ghosts were on a timer and were going to be defeated no matter how much or how little I shot them.

I was shadowed by an employee during the experience.

  • He told me before I put the headset on that if I ever needed help I could raise my hand.

  • At one point when I had stopped to look at some animations, he lifted the headphone off of my ear and told me to proceed to my right. He assumed that I was stuck.

I was aware that other consumers were waiting for me to complete the experience so that they could take a turn.

  • I suspect that there’s no risk in Ghostbusters because they need to cycle consumers through to keep it profitable. If people need multiple attempts to defeat the boss, that would take too long. Plus, some consumers won’t enjoy having to make multiple attempts.

  • Contrast this with Schell Games’ I Expect You to Die, a game with no time constraints. The creators expect you to need multiple attempts to complete a level, and each level is progressively more challenging to solve.

It seems like they made a safe bet and decided to provide a terrific viewing experience. I wouldn’t call it a gaming experience.

But they had an elevator

There were a couple of things inside the experience that you wouldn’t get in a normal HTC Vive session.

One, they had an elevator simulation that was cool. You step onto a platform that shakes and rumbles. The HMD animation makes it appear as though you are going up several floors while creepy ghosts approach you. When a ghost touches your face, you are sprayed with water.

There’s also a rickety bridge that you have to cross, which messes with your balance.

Lastly, when the Marshmallow Man is defeated, the scent of roasted marshmallow surrounds you.

I see the potential for VR arcades. There is an opportunity to build in robust multi-sensory experiences in an arcade that you wouldn’t have at home. But, my take is that arcades are unlikely to appeal to someone like me who already has unfettered Vive / Rift / PSVR access.

Who wants vanilla?

Compared to a recent VR release like Robo Recall, Ghostbusters is vanilla.

In Robo Recall, you go at your own pace. I died and respawned. I was confronted with choices constantly. It’s highly immersive — I got surprised by the robots. I punched the desk in my VR room until my hand was numb. I was really into shooting those robots.

Ghostbusters is a pretty flat experience in comparison. It’s over in ten minutes. I made zero choices. I was trailed by employees making sure that I stayed on track. They would have stopped me before I hurt myself.

Takeaways for designers

  • Consider who your experience is for. Is it going to be for a VR savvy audience? Or, readers of Popular Mechanics?
  • Choose the level of automaticity that fits your goals.
  • Kent Bye has created an Elemental Theory of Presence that maps the different aspects of VR (film-making, gaming, emotional resonance, etc.) and how they best fit together.
  • I recommend using it in the conceptual phase of any VR project so that, at minimum, you are clear on your goals.

Want to create something similar to Ghostbusters? Space is not your primary constraint. I believe that the room I was in was only about 20 feet by 30 feet total.

In my next post, I’ll give a rundown of Sleep No More, a theatrical production of Macbeth that takes place in a four-story building in NYC. The audience trails the actors around the set for three hours, so I’ll talk about the implications for immersive storytelling.

Facebook Spaces: The only rule is that it has to work

Facebook released the beta version of Spaces today. After using it for an hour, here are my initial comments. This is written for people who haven't had a chance to use Spaces yet.


You start by logging into your Facebook account and customizing an avatar.  Other people see your name and Facebook profile picture above your avatar. 

My biggest issue was figuring out how to select things. I’ve never had a good handle on the Oculus Touch controllers. Someone in my office had to coach me to extend my finger and actually touch things, or to extend my finger and press the X button. Even after the coaching, it took me multiple tries to get it right.

It was especially difficult at the beginning when I was trying to change my skin tone and couldn’t select my skin. I kept getting a beard instead.

Hmmm, what type of beard do I want?  



The primary tools are a mirror, a selfie-stick and a pencil you can use to write in the air.  

There are also things like stickers and pre-made drawings that you can use to decorate your environment. Our space got a little cluttered with graffiti, so we left rather than figure out how to clear it.

Bigscreen Beta has acclimated me to being able to share videos and monitor a screen when I’m hanging out in VR. Spaces already feels dated because it doesn’t offer that feature.


Freeze and crash after I pushed "video call."


Here’s a short list of the things that didn’t work for me:

1. My friend using a Vive was frozen and I could only see him blink and move his mouth.  I couldn’t see anything else that he did or made.  

2. We could only get 3 people in our space at a time.

3. I had transparent Touch fingers while everyone around me had opaque hands.

4. When I opened up the menu to change my appearance, everything else in the environment was frozen.

5. Changing my t-shirt color and my eyewear accessories required entirely different menus to the side of the mirror.

6. The setting of the park was unappealing to me.  In the park, I felt like a creepy person spying on these couples having a day out.  I’d prefer to be solo in a beautiful environment.  

7. Making a Facebook Messenger call crashed the application…twice.  

8. When Facebook Messenger did connect, only I could see my friend on the “tablet.”  No one else connected via VR could see or hear him.  

9. I couldn’t find a way to change the default environment (park, campsite, etc.).

10. I didn’t know how to change the audio from the Oculus headset to the computer speakers so that everyone else in the room could hear my conversation.  There were four other people in the room with me and I had to repeat what I heard for their benefit.

11. When anyone in the room with me spoke, the lips of my avatar moved.


Despite the technical challenges, I had fun.  But why?

Was it because Eva Hoerth was there?  Eva could probably cheer me up at my grandmother’s wake.  Overall, it validates a belief that private VR spaces where you hang with friends will be appealing, but that’s hardly a new idea.  

Was it fun because everyone I interacted with was acting a bit silly?  Are goofballs the key to social VR’s success? 

The novelty of it made the experience fun. But will people quickly habituate as the novelty decreases? Perhaps, but I typically hang out in the exact same environment every time I see friends. We spend time in my living room - but it’s often different people and we do a variety of things.

Overall, it’s a good use of the Facebook infrastructure, leveraging friend networks, Messenger calls, sharing on your feed, but it still feels very beta.  



The Neuroscience of Gestures


“How can you tell what these people are talking about?”

I’d like to persuade you that gestures are a fundamental building block of human language and thought. This begins a series of blog posts on gestures and how physical movement in VR & AR affects cognition.

Part one of this series will deal with why gestures provide a shortcut to human thought. 

But first, on the tech front:
Devices to capture small hand gestures are already available (like Microsoft HoloLens) and more are underway. Project Soli at Google uses radar to track micro-motions and twitches. The radar in the device senses how the user moves their hands and can interpret the intent. Link to the full Project Soli video here.

Why are gestures powerful shortcuts to cognition?

I’m reposting an article from Scientific American here that answers “Why is talking with gestures so much easier than trying to talk without gesturing?”  Psychology professor Michael P. Kaschak responds:

A person in a fit of rage may have trouble verbalizing thoughts and feelings, but his or her tightly clenched fists will get the message across just fine.

Gesturing is a ubiquitous accompaniment to speech. It conveys information that may be difficult to articulate otherwise. Speaking without gesturing is less intuitive and requires more thought. Without the ability to gesture, information that a simple movement could have easily conveyed needs to be translated into a more complex string of words. For instance, pointing to keys on the table and saying, ‘The keys are there,’ is much faster and simpler than uttering, ‘Your keys are right behind you on the countertop, next to the book.’

The link between speech and gesture appears to have a neurological basis. In 2007 Jeremy Skipper, a developmental psychobiologist at Cornell University, used fMRI to show that when comprehending speech, Broca’s area (the part of the cortex associated with both speech production and language and gesture comprehension) appears to ‘talk’ to other brain regions less when the speech is accompanied by gesture. When gesture is present, Broca’s area has an easier time processing the content of speech and therefore may not need to draw on other brain regions to understand what is being expressed. Such observations illustrate the close link between speech and gesture.

Takeaways for VR/AR Designers:

  • People process information more deeply when they are gesturing
  • Broca’s area has an easier time processing speech when it is accompanied by gestures
  • The tech exists for picking up human micro-gestures