Schematic and Public Multitouch Social Interaction

Touchwall Demo from Joel on Vimeo.

Joel Johnson’s exclusive (on Vimeo?) video and interview with the folks at Schematic about their new touchwall show them dealing with some interesting public multitouch issues. I hate the marketing crap that goes with it and the inevitable Minority Report reference (please, stop making that reference, multitouch people), but the idea that what they’re really interested in is “the social interaction in front of the screen” is spot on.

Apart from the fun of playing with what looks like a giant iPhone screen, the key thing about a large multitouch screen is that more than one person can use it at once. If it just replicates a bank of individual screens, it’s missing the point of having one big one. Connecting people together in social play and interaction can be really engaging, and it will be interesting to see what developers and designers explore in this area.

The other issue that they talk about in the video is how to solve the identity problem on such a device so that you don’t have to walk up to it (or “into it” as one of the interviewees says) and type in a log-in. RFID tags come to the rescue, which means the wall knows who you are as soon as you’re close enough to use it.
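
For the curious, here’s a very rough sketch of how that kind of walk-up identification might hang together: poll the RFID reader, match tag IDs against known profiles, and drop the session when the person wanders off. Everything here (the profile table, the function name, the timings) is my own invention for illustration, not anything from Schematic’s actual system.

```python
# Hypothetical sketch of RFID walk-up identification at a public touchwall.
# None of these names come from Schematic's system; they're illustrative only.
import time

PROFILES = {"04A1B2": "alice", "09F3C7": "bob"}  # tag ID -> known user
SESSION_TIMEOUT = 5.0  # seconds without a read before we assume they've walked away

active_sessions = {}  # tag ID -> last time that tag was seen near the wall

def update_sessions(tag_reads):
    """tag_reads is the list of tag IDs the reader saw in this polling cycle."""
    now = time.time()
    for tag in tag_reads:
        if tag in PROFILES:
            if tag not in active_sessions:
                print(f"Welcome, {PROFILES[tag]} - no need to type a login")
            active_sessions[tag] = now
    # Expire sessions for anyone who has left the reader's range
    for tag in list(active_sessions):
        if now - active_sessions[tag] > SESSION_TIMEOUT:
            print(f"Goodbye, {PROFILES[tag]}")
            del active_sessions[tag]
```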

If we’re going to make comparisons to Minority Report, that screen was an individual experience operated alone by Cruise’s character. By contrast, a multi-user multitouch screen feels to me much more Star Trek or James Bond: collaborative workspaces with an added layer of data feeds.

The MultiTouch Cell

[Image: two users at two MultiTouch Cells]

MultiTouch have just launched the “world’s first modular multi-touch LCD screen that can be used to create large tables and wall screens.” They are LCD screens with multitouch capabilities built in, and they can be stacked and configured into many different formats. So far they have built a 6m long wall, but theoretically it could be as long as you like, providing you have enough machines to power them.

The Cells are also very robust in terms of environment – they can work in bright light (as well as the dark, of course) and track both fingers and hands for some more complex gestural interaction.

The system also uses their Cornerstone SDK, which should be launched soon and which anyone should be able to use outside of the Cells as well.

I would imagine we’ll be seeing quite a few of these around all sorts of public spaces soon. Take a look at the high-quality video of it in action. No word on pricing yet, though.

(Thanks to John from 3Eyes who has been working on it for the info.)

3M Interface – Reverse Multitouch

[Image: the 3M interface]

My brother, Matt, just e-mailed a link to this interface on the 3M website. Given the multitouch hype at the moment, it’s quite a clever little riff on the theme.

Basically it’s as if you are standing to the rear of a multitouch screen. Your mouse controls the finger movements of the person blurred out in the background and a selection does the old two-finger click-and-drag-larger movement that seems to have become a multitouch standard.

Where to now with multitouch?

I’ve been doing a bit of catching up with my blog reading recently and noticed Chris’s post on Pixelsumo about the giant HP Multi-Touch screen with the interface created by Darren David. Now that multitouch has become the dish du jour, it’s time to start working out what to do with it, as Chris points out:

More and more developers are now creating multi-touch screens, without really asking WHY. Now that the technology is open and there are communities available to help, this takes away the initial learning curve. A criticism of all these kinds of projects for me is that the model of interaction doesn’t change. Han, iPhone, Surface and this project all do the two finger drag to stretch a photo, rotate it etc. Who needs to throw a photo around a screen? Unless the interface itself is a toy and a showcase, rather than concentrating on meaningful interaction or function.

Like all new technology, we are just getting to grips with it. It will be interesting to see where it goes next, or if it dies from lack of new creative ideas.

I suspect throwing photos around the screen is probably quite good fun actually, for a while. But he’s right in asking ‘where next?’. I’m really happy to see the cost of these systems falling and people like the NUI Group putting together open-source libraries and research into sensors and multi-touch. The easier it becomes for people to play with these technologies, the more likely it is that some interesting ideas will be generated and/or found.

Until then, there is the danger that it’s all done for the sake of it, as this brilliant Surface parody shows:

(Thanks to Nic and Iain for that video link).

Multitouch City Wall

The CityWall is a new work by the Ubiquitous Interaction (Uix) research group in Helsinki as part of the IPCity project. It gathers tagged images and video from places like Flickr and YouTube and organises them into themes for events.

CityWall in action

One great thing about it is the fact that it is a multitouch interface that is out of the labs and ‘on the streets’. The multitouch system was developed by Uix’s John Evans and Tommi Ilmonen. The challenge, according to Evans, was that unlike Jeff Han’s very well-known work in this area, their screen had to work outside of the lab – “ambient light, dirty hands, dirty screen, day and night,” he says.

The payoff for all that hard work is how easily people negotiate the interface without thinking about it. It’s a really nice combination of technology trends – multitouch with public installations and social content generation.

The YouTube video below shows how it all works, including the computer vision. You can view a nicer version on the CityWall site or download the MPEG4 Version (h.264 26MB).

Archetypes and Metaphors

There is an interesting piece over at Johnny Holland by Rahul Sen titled Archetypes and Their Use in Mobile UX. It’s probably worth reading it and coming back here, but the introduction gives you an idea of where he’s headed:

“Have you ever needed a user manual to sit on a good chair? Probably not. When we see a good chair, we almost always know exactly what to do, how to use it and what not to do with it. And yet, chairs are made by the thousands, and several challenge these base assumptions to become classics in their own right. The chair is one of the most universally recognized archetypes known to us. In light of recent events in the mobile realm, I believe that the stage is set to probe notions of archetypes in the mobile space.”

As does the last pull quote:

“Thinking in archetypes gives us a unique overview of interaction models and their intrinsic behavior patterns, making it possible to ask interesting what if questions and examine consequences.”

There is lots to like and he makes some great observations here, but hanging them onto the term “archetype” is problematic. Rahul gives a brief nod to the differences between metaphors and archetypes, but muddies rather than clarifies. This moment of slippage defeats the whole archetype argument, but if you replace the word archetype with metaphor in the piece, then it all makes great sense.

The reason why metaphors are so important to understand in interaction design is precisely because there are very few, if any, archetypes. It’s easy for us as savvy users and interaction designers to presume there are original ideas or symbols universally recognised by all, but there simply aren’t. It’s the reason why so many people don’t ‘get’ interfaces that should be blindingly obvious. They don’t understand the mental model behind them, and thus they’re not archetypes.

Metaphors are useful because they bridge this gap. One thing to note is that metaphors are not “analogies between two objects or ideas, conveyed by the use of one word instead of another,” as Rahul says. Those are similes. I’m not saying this to be grammatically pedantic, but because there is an important distinction. A metaphor isn’t saying “it is like“, but “it is“. It helps you understand a concept you don’t know by expressing it in the form of a concept you do know, not just saying it’s like the other one. Life is a journey, it’s not that life is like a journey.

An interaction design simile would say, “this file on the desktop is like a real paper document on your desk”. A metaphor is saying, “this file on your desktop (in fact, the icon of it) is a real file”. It makes a difference because it makes a difference to how we interact with those things and to the mental models we form. It makes a difference to how much we can stretch and/or break those metaphors. Delete your most precious file and decide whether it was like a file or really was one.

Lakoff and Johnson’s work on metaphors is essential to bring in here, because they demonstrate that our entire language and understanding of our experience in the world is based on embodied metaphors. When you start to pick apart language, you realise it’s all metaphors (such as “pick apart” – the metaphor being that language is a thing made up of other things that you can pull apart).

They also talk about how metaphors collapse into natural language without us thinking about them anymore, but they’re still metaphors. When we say we’re close to someone, we learn this metaphor from actually being physically close to someone (usually our mothers). Physical and emotional closeness are the same thing at that point. Later, we use the metaphor of being close to someone to express emotional closeness, but it becomes so commonplace and universally understood (in most languages) that we cease to perceive the metaphor anymore.

On the other hand, poetic metaphors, such as “the sun was a fiery eye in the sky”, are designed to make us perceive the metaphor and appreciate its discord or imagery. Most interface design is still on the poetry side of things, screaming out the metaphors, which is why they are far from being archetypes.

The interesting thing about multitouch devices is that the interface seems like it disappears. You feel like you are just interacting with the content in many cases, such as scaling or moving around digital photos that have never had a physical form. The interface is still there, of course. You’re not really stretching or pinching anything, you’re just making those movements with your fingers over a piece of glass, but the direct manipulation feeling that it affords tricks us enough. This still happens to a lesser extent in desktop metaphors – it really does feel like you have lost a file when it gets accidentally deleted, but actually it was never really a file, just a bunch of pixels on the screen pretending to look like a file, a visual reference for a scattered set of magnetic impulses on a drive. Like theatre, we willingly suspend our disbelief in order to believe in the metaphor because it’s easier that way.

The strength of Rahul’s piece is in the various examples of something-centric “archetypes” that he gives and the “what if?” questions he asks about them. They’re insightful, but they’re just not archetypes by the definition he sets out. Ironically, having pointed out in a note right at the start of the article that he is not referring to Jungian archetypes, I think Rahul’s examples are much more closely related to Jung’s understanding of archetypes than the other definitions he refers to.

The Little Man in the Box

Hi from Multitouch Barcelona on Vimeo.

All of us anthropomorphise our machines, perhaps none more so than the car and the computer. Hi, A Real Human Interface from Multitouch Barcelona (an interaction design group that explores natural communication between people and technology) is a charming example of how we think about computers and interfaces from a human perspective.

Whatever we might know about the technology and how it works, we talk about the “server having some trouble” or our computers “having a bad day” or “going crazy”. We’re so biologically programmed for interaction to be with other beings, it’s very hard not to think of the little man in the box.

(Via @LukePittar and all the little people who run messages back and forth in the intertubes.)

Sixth Sense. Only Slightly Lamer than VR.

Pattie Maes is a smart woman. She’s behind some research projects that I wish I had been part of. But the above presentation at TED of Pranav Mistry’s ‘Sixth Sense’ system gave me flashbacks to bad VR demos in the 90s and Steve Mann’s sad exploits as a cyborg.

[Image: the evolution of Steve Mann’s wearable computing rigs]

Sometimes the focus on technology for the sake of technology just gets in the way of thinking about how people actually live. Any mobile device I carry around will have a screen and a camera, whether it be an iPhone or a projection onto my retina. There are ample uses and opportunities for augmented reality with these, so why would I want to carry around a tiny projector too?

In the ‘Sixth Sense’ set-up, I would need to keep my body still to keep the projected image from moving all over the place, and I’d need to have some kind of tracking blobs on my fingers too. Let’s assume the devices are combined. Again, why the projector when I already have a screen? So that I can wave my arms about as a gestural interface? In public?

Like VR, the central paradox of ‘augmenting the senses’ is that the technology cuts back the senses. We’re not just heads floating around without bodies, we interpret the world through our entire bodies. Anything that reminds you that you’re using a mediating technology gets in the way of those senses and what you’re trying to do.

The success of multitouch interfaces is that they make the interface invisible. It’s still there of course – someone has to set up the metaphors of ‘pinching’, etc. – but when it works well, you don’t think about it. But they have to work well too – the slightest lag or misinterpretation of a drag as a click soon becomes a frustration.
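
To give a sense of how fine that line is, here is a toy version of the threshold logic a touch framework has to get right when deciding whether a touch was a tap or a drag. The pixel and timing numbers are guesses of mine for illustration, not anyone’s published defaults.

```python
import math

TAP_MAX_MOVEMENT = 10.0  # pixels: move further than this and it's a drag
TAP_MAX_DURATION = 0.25  # seconds: hold longer than this and it's not a tap

def classify_touch(down_pos, up_pos, down_time, up_time):
    """Classify a completed touch as a 'tap' or a 'drag'."""
    moved = math.hypot(up_pos[0] - down_pos[0], up_pos[1] - down_pos[1])
    duration = up_time - down_time
    if moved <= TAP_MAX_MOVEMENT and duration <= TAP_MAX_DURATION:
        return "tap"
    return "drag"

# Set the movement threshold too low and a deliberate tap on a wobbly wall
# registers as a drag; set it too high and the start of a drag fires a tap.
```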

Clever(ish) as it is, Sixth Sense doesn’t make much sense. I get a bit sad when I see these kinds of demos get such a big response at TED, because it’s an audience who should know better and should be ahead of the curve, not behind it. This should be especially true of Maes, whose MIT page quotes her as saying “We like to invent new disciplines or look at new problems, and invent bandwagons rather than jump on them.”

(And Pranav should spend some time working on his MIT Web page).

Holographic Worlds and Gestural Interfaces

The Holodeck remains a fantasy for Trekkies and we’re still not yet jacked into The Matrix (or are we? Oooh.). Guys going to enormous lengths to build stuff for their girlfriends, on the other hand, has long been part of the human condition.

World Builder by Bruce Branit is about a guy who builds a holographic world for the woman he loves. There’s a reason it is holographic, which you find out when you get to the ending, so I won’t spoil it here. The film was shot in a day, but then took two years of post-production to finish off. Who says computers make things quicker?

The main reason for blogging it is because of some of the gestural interface elements in it. The overlay buttons and keypads are the usual fare, and I remain unconvinced that jabbing at a floating holographic keypad button would be a useful UI approach, although it always looks good on screen. There are also some controls already in use in gestural interfaces, like spreading the fingers to enlarge an object and using the fingertips to rotate a virtual control knob.
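
The maths behind those two gestures is simple enough to sketch: the scale factor is the ratio of the distances between the two fingertips, and the rotation is the change in the angle of the line joining them. Something like this (a rough illustration of the idea, not lifted from any particular toolkit):

```python
import math

def pinch_transform(p1_start, p2_start, p1_now, p2_now):
    """Return (scale, rotation_in_radians) implied by two moving touch points."""
    def dist(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])
    def angle(a, b):
        return math.atan2(b[1] - a[1], b[0] - a[0])

    scale = dist(p1_now, p2_now) / dist(p1_start, p2_start)
    rotation = angle(p1_now, p2_now) - angle(p1_start, p2_start)
    return scale, rotation

# Fingers that start 100px apart and end 150px apart give scale = 1.5;
# swing the same pair through a quarter turn and rotation is about pi/2.
```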

I’m not sure I have yet seen the idea of being able to pick up things like colours and textures on your fingertips and apply them to objects in an existing multitouch interface, though. A few desktop applications use that kind of sticky mouse idea, and 3D and 2D applications kind of use it with tools and colour/texture chips, but I still haven’t seen it all that smoothly done. Adobe seem to screw this up further and further with every release rather than making it easier. (Does CS really stand for ‘crappy shit’ rather than ‘creative suite’?)
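
One way those sticky fingertips might be modelled is as a bit of tool state carried by each tracked touch: touch a colour chip and it loads onto that finger, touch an object and it gets applied. This is pure speculation on my part rather than a description of any shipping interface, and it obviously depends on solving the fingertip-identity problem in the next paragraph.

```python
class Swatch:
    """A colour/texture chip a fingertip can pick up from."""
    def __init__(self, fill):
        self.fill = fill

class Shape:
    """An object a loaded fingertip can drop its fill onto."""
    def __init__(self):
        self.fill = None

finger_payload = {}  # touch ID -> fill currently 'stuck' to that fingertip

def on_touch(finger_id, target):
    if finger_id in finger_payload and isinstance(target, Shape):
        target.fill = finger_payload.pop(finger_id)  # apply and release
    elif isinstance(target, Swatch):
        finger_payload[finger_id] = target.fill      # pick the fill up

# Touch a red swatch with finger 3, then touch a shape with the same finger:
red, square = Swatch("red"), Shape()
on_touch(3, red)
on_touch(3, square)
assert square.fill == "red"
```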

It seems to me that the main issue with a gestural or multitouch interface would be keeping track of the identity of a particular fingertip once it has left the touch panel. But maybe someone has already solved this and it is in use – let me know if you know more.
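
One plausible approach (entirely hypothetical on my part) would be to re-attach a new touch to a recently lifted finger if it lands close enough, soon enough: nearest-neighbour matching inside a small distance and time window.

```python
import math
import time

REATTACH_RADIUS = 40.0  # pixels: how far a returning finger may land from its lift point
REATTACH_WINDOW = 1.0   # seconds: how long a lifted finger's identity survives

recently_lifted = {}  # finger ID -> (lift position, lift time)

def on_finger_up(finger_id, position):
    recently_lifted[finger_id] = (position, time.time())

def on_finger_down(position):
    """Return the ID of a recently lifted finger this touch probably belongs to,
    or None so the caller allocates a fresh ID."""
    now = time.time()
    best_id, best_dist = None, REATTACH_RADIUS
    for fid, (pos, lifted_at) in list(recently_lifted.items()):
        if now - lifted_at > REATTACH_WINDOW:
            del recently_lifted[fid]  # too old, forget that identity
            continue
        d = math.hypot(position[0] - pos[0], position[1] - pos[1])
        if d <= best_dist:
            best_id, best_dist = fid, d
    if best_id is not None:
        del recently_lifted[best_id]
    return best_id
```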

(Thanks to one of my ex-students, Nico Marzian for mailing me the link).