Regular readers will know I’m pretty cynical about VR and I’ve never been much of a fan of the CAVE system. The last time I used one at iCinema I was treated to an interface that looked like it was designed in 1989 and a headache from the glasses.
The above video is from IDEO’s trip to WATG’s labs, where they have an iCube set up. It’s pretty entertaining to see Dave lose his balance as he stands on the edges of virtual walls and it’s clearly working on a fairly immersive level in a way I have never experienced in any VR that I have tried. The reason, usually, is that the equipment and the environment are so imposing that you can’t really ever engage your willing suspension of disbelief and immerse yourself. That’s the irony of immersive VR systems.
I think part of the reason this is working well here is because WATG are hospitality architects, so they know a thing or two about making compelling environments and have some decent 3D chops. The landscape Dave is wandering around in looks at least as good as Unreal Tournament 2003 instead of Manic Miner.
It also helps that the headset is rather smaller these days, though the joystick device that the woman guiding him uses looks like a cordless power drill. It’s hard to tell what this would really be like when the novelty wears off.
I can see its use for architectural projects and, maybe, product design, but I’m still wary that you would get much of a real feel for either of those things from the VR version. VR still feels like a technology waiting for a use rather than a useful technology. (Check out the beginning of this video where she’s standing lost and forlorn inside a Windows desktop – this would be my nightmare.)
One last thing: I wish IDEO wouldn’t tag it “serious play” as if they need to justify using the word play. I know they use it to reference Tim Brown’s talk, but play is play and it’s as legitimate as anything else.
Pattie Maes is a smart woman. She’s behind some research projects that I wish I had been part of. But the above presentation at TED of Pranav Mistry’s ‘Sixth Sense’ system gave me flashbacks to bad VR demos in the 90s and Steve Mann’s sad exploits as a cyborg.
Sometimes the focus on technology for the sake of technology just gets in the way of thinking about how people actually live. Any mobile device I carry around will have a screen and a camera, whether it be an iPhone or a projection onto my retina. There are ample uses and opportunities for augmented reality with these, so why would I want to carry around a tiny projector too?
In the ‘Sixth Sense’ set-up, I would need to keep my body still to stop the projected image from moving all over the place, and I’d need to have some kind of tracking blobs on my fingers too. Let’s assume the devices are combined. Again, why the projector when I already have a screen? So that I can wave my arms about as a gestural interface? In public?
Like VR, the central paradox of ‘augmenting the senses’ is that the technology cuts back the senses. We’re not just heads floating around without bodies, we interpret the world through our entire bodies. Anything that reminds you that you’re using a mediating technology gets in the way of those senses and what you’re trying to do.
The success of multitouch interfaces is that they make the interface invisible. It’s still there of course – someone has to set up the metaphors of ‘pinching’, etc. – but when it works well, you don’t think about it. But they have to work well too – the slightest lag or misinterpretation of a drag as a click soon becomes a frustration.
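That “misinterpretation of a drag as a click” usually comes down to a movement threshold. A minimal sketch of the idea, in Python — the class name and the 8-pixel “slop” value are my own illustration, not any specific toolkit’s API:

```python
import math

TAP_SLOP_PX = 8  # movement tolerance before a touch counts as a drag

class GestureClassifier:
    """Toy model: a touch is a tap unless it travels past a slop radius."""

    def __init__(self, slop_px=TAP_SLOP_PX):
        self.slop_px = slop_px
        self.start = None
        self.dragging = False

    def touch_down(self, x, y):
        self.start = (x, y)
        self.dragging = False

    def touch_move(self, x, y):
        # Only promote to a drag once movement exceeds the slop radius;
        # finger jitter inside the radius still counts as a tap.
        if self.start and not self.dragging:
            dx, dy = x - self.start[0], y - self.start[1]
            if math.hypot(dx, dy) > self.slop_px:
                self.dragging = True

    def touch_up(self):
        return "drag" if self.dragging else "tap"
```

Tune that radius too tight and shaky fingers trigger accidental drags; too loose and short drags are swallowed as taps — exactly the frustration described above.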
Clever(ish) as it is, Sixth Sense doesn’t make much sense. I get a bit sad when I see these kinds of demos get such a big response at TED, because it’s an audience who should know better and should be ahead of the curve, not behind it. This should be especially true of Maes, whose MIT page quotes her as saying “We like to invent new disciplines or look at new problems, and invent bandwagons rather than jump on them.”
(And Pranav should spend some time working on his MIT Web page).
It’s the gloves again.
Part of me wants to believe G-Speak really is a fantastic “spatial operating environment”. The mouse and keyboard are awkward, clunky and outdated, with plenty of problems, and it’s time for a change. G-Speak is about freeing ourselves from those shackles, about working in space across multiple screens.
I wanted to scream when I saw the tired reference to Minority Report, but it turns out that one of the team, John Underkoffler, was the science advisor on Minority Report, so they can get away with it given that he ripped off his own ideas for the film.
The video and some of the interaction looks great.
Except for the gloves.
It’s the gloves (and the headset) that made VR so lame. That and being tethered to a machine, so at least that part is no more.
Yet regardless of how much of a paradigm-shifting breakthrough g-speak is, I can’t see people donning the dorky gloves every time they want to work. I can’t see many people devoting that much space to one person’s screens either, and I can’t see many people having the stamina to stand with their arms outstretched and wave them about all day. A two-hour yoga class is hard enough.
Don’t get me wrong, I’d love to have a go and experience it for myself. I’m sure there is a whole lot of interesting interaction going on there.
I really want to be wrong about this. I really want to know that it’s not just a technical triumph from a group of talented tech guys whose blog has the most heinous URLs. I really do.
I just don’t want to have to smell the gloves.
Here’s an interesting video of inverting the Wiimote and infrared sensors to create a surprisingly realistic optical illusion for a single user:
A lot of interaction and GUI design is about optical illusion and willing suspension of disbelief, something usually talked about in fiction. It’s tempting to try and make things ‘for real’ sometimes, when actually a fake or a bit of smoke and mirrors works better.
Driving games aren’t really using realistic physics, they’re usually souped up to make things more exciting. Those aren’t really files and folders on your desktop there and this isn’t really a page. Of course you know that in the back of your mind, but you willingly ignore it in order to utilise the illusion.
When you try and make a metaphor real, you get all caught up in knots sometimes and lose the benefits of the abstracted version. Bumptop is a classic example of this – by mimicking a physical desktop you end up with all the same hassles, such as too little space for all the junk. I wrote more about this at length before.
What’s interesting about Johnny Lee’s approach above is that it’s so low-tech. Another example of the openness and cheapness of the Wiimote producing innovation. The other aspect is that it doesn’t really require much in the way of a headset, unlike other VR systems whose kit only serves to constantly break the suspension of disbelief.
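The trick itself is geometrically simple: the tracked head position moves the virtual camera, while the view frustum stays pinned to the physical screen rectangle, so the display behaves like a window into the scene. A minimal sketch of that core idea (my own simplification, not Lee’s actual code — screen size, units and function name are illustrative assumptions):

```python
def off_axis_frustum(head_x, head_y, head_z, screen_w=0.4, screen_h=0.3):
    """Return (left, right, bottom, top) frustum extents at the near
    plane for a camera at the viewer's head position, with the screen
    centred at the origin. Units in metres; head_z is the viewer's
    distance from the screen plane."""
    near = 0.01  # near clipping plane distance
    # Similar triangles: project the fixed screen edges, as seen from
    # the head, down onto the near plane.
    scale = near / head_z
    left = (-screen_w / 2 - head_x) * scale
    right = (screen_w / 2 - head_x) * scale
    bottom = (-screen_h / 2 - head_y) * scale
    top = (screen_h / 2 - head_y) * scale
    return left, right, bottom, top
```

Move your head right and the frustum skews left, revealing the scene behind the screen’s edge — that skew is the whole illusion, and all it needs is one cheap point-tracking device.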
Although plenty of research grant applications seem to thrive on making things much more complicated than they need to be, it is generally good to remember the KISS principle.
Can you think of some other good examples of these kinds of simple illusions in interface/interaction design?
[tags]interactivity, VR, Wii, tracking[/tags]