Leap Motion is a USB device, now available for pre-order, that “creates a 3D interaction space of 8 cubic feet to precisely interact with and control software on your laptop or desktop computer.” According to the website:
“The Leap senses your individual hand and finger movements independently, as well as items like a pen. In fact, it’s 200x more sensitive than existing touch-free products and technologies. It’s the difference between sensing an arm swiping through the air and being able to create a precise digital signature with a fingertip or pen.”
The video embedded above shows it off pretty nicely. The device itself is about the size of the power brick that comes with the Mac Mini or the Apple TV (or used to). It’s, not coincidentally, similarly designed, so it’s not going to look like some ugly chunk of plastic and LEDs on your desk. This is, I think, not to be underestimated if you are asking people to invest in a new kind of interface that will, indeed, sit on their desk to be stared at all day. People are pretty pernickety about what goes on their desks.
When I say ‘invest in’, I’m really talking about time. The device itself is pretty cheap at $69.99. I can see this being a bonanza for people making interactive installations and performative interfaces (which is why I came across it, thanks to Joel Gethin Lewis).
It looks like Leap Motion is responsive and accurate, but there is still the question of holding your hands in front of you all day. With a desktop version, I foresee an elbows-resting-on-the-table-while-wiggling-the-hands mode of usage. Perhaps it’s time to invest in an elbow-rest Kickstarter project.
PhotoSketch: Internet Image Montage provides a simple way to make image composites by doodling a picture, adding labels and then letting the engine scour the Internet for suitable photos. Once it has found the most appropriate matches, it composites them together.
I can see lots of awful e-cards and PowerPoint presentations coming out of this, but it would be very useful for putting together prototype sketches for installations and services, and it is a pretty remarkable bit of technology.
Great to see magneticNorth’s new website live. Brendan gave me a sneak peek of it yesterday and I love it.
The navigation is very playful and intuitive. Actually it is intuitive because it is playful. You basically scribble a doodle and this makes a mask into which a piece from their portfolio opens. You can then click on that item to view more info about the work or simply make another scribble to look at a new piece. The navigation across the top is a history that you can move back and forth through or reset.
What is nice about the whole thing is that you just don’t have to worry about doing anything ‘right’. You can scribble any shape and you can scribble over the top of other scribbles, and everything automagically sorts itself out.
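To make the scribble-to-mask idea concrete, here is a rough Python sketch of one way it could work: treat the scribbled stroke as a closed polygon and test which pixels fall inside it. All the names here are my own illustration – this is not magneticNorth’s actual implementation.

```python
# Sketch of the scribble-as-mask idea: treat the scribbled stroke as a
# closed polygon and test which pixels fall inside it.  Illustrative only.

def point_in_polygon(x, y, polygon):
    """Even-odd ray-casting test: is (x, y) inside the closed polygon?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does a horizontal ray from (x, y) cross the edge (x1,y1)-(x2,y2)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def scribble_mask(width, height, scribble):
    """Build a boolean mask (rows of columns) from (x, y) scribble points."""
    return [[point_in_polygon(x + 0.5, y + 0.5, scribble)
             for x in range(width)] for y in range(height)]

# A rough square "scribble" covering the centre of a 6x6 canvas.
mask = scribble_mask(6, 6, [(1, 1), (5, 1), (5, 5), (1, 5)])
```

In the real site the mask would then clip the portfolio image; the nice part is that any closed-ish scribble works, which is exactly the ‘nothing to get wrong’ quality described above.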
Sometimes the focus on technology for the sake of technology just gets in the way of thinking about how people actually live. Any mobile device I carry around will have a screen and a camera, whether it be an iPhone or a projection onto my retina. There are ample uses and opportunities for augmented reality with these, so why would I want to carry around a tiny projector too?
In the ‘Sixth Sense’ set-up, I would need to keep my body still to keep the projected image from moving all over the place, and I’d need to have some kind of tracking blobs on my fingers too. Let’s assume the devices are combined. Again, why the projector when I already have a screen? So that I can wave my arms about as a gestural interface? In public?
Like VR, the central paradox of ‘augmenting the senses’ is that the technology cuts back the senses. We’re not just heads floating around without bodies, we interpret the world through our entire bodies. Anything that reminds you that you’re using a mediating technology gets in the way of those senses and what you’re trying to do.
The success of multitouch interfaces is that they make the interface invisible. It’s still there of course – someone has to set up the metaphors of ‘pinching’, etc. – but when it works well, you don’t think about it. But they have to work well too – the slightest lag or misinterpretation of a drag as a click soon becomes a frustration.
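That drag-versus-click problem is concrete enough to sketch: the usual trick is to classify a touch by how far the finger travelled between touch-down and touch-up. A minimal, illustrative Python version – the 10-pixel ‘slop’ threshold is an assumption of mine, not taken from any particular toolkit:

```python
import math

def classify_touch(points, slop=10.0):
    """points: list of (x, y) samples from touch-down to touch-up.
    If the finger strays more than `slop` pixels from its start, it's a
    drag; otherwise the jitter is forgiven and it counts as a tap."""
    x0, y0 = points[0]
    for x, y in points[1:]:
        if math.hypot(x - x0, y - y0) > slop:
            return "drag"
    return "tap"

print(classify_touch([(100, 100), (102, 101), (101, 99)]))   # small jitter: tap
print(classify_touch([(100, 100), (130, 100), (180, 100)]))  # real movement: drag
```

Get the threshold wrong in either direction and you get exactly the frustration described: taps that turn into accidental drags, or drags that register as clicks.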
Clever(ish) as it is, Sixth Sense doesn’t make much sense. I get a bit sad when I see these kinds of demos get such a big response at TED, because it’s an audience who should know better and should be ahead of the curve, not behind it. This should be especially true of Maes, whose MIT page quotes her as saying “We like to invent new disciplines or look at new problems, and invent bandwagons rather than jump on them.”
(And Pranav should spend some time working on his MIT Web page).
The Holodeck remains a fantasy for Trekkies and we’re still not yet jacked into The Matrix (or are we? Oooh.). Guys going to enormous lengths to build stuff for their girlfriends, on the other hand, has long been part of the human condition.
World Builder by Bruce Branit is about a guy who builds a holographic world for the woman he loves. There’s a reason it is holographic, which you find out when you get to the ending, so I won’t spoil it here. The film was shot in a day, but then took two years of post-production to finish off. Who says computers make things quicker?
The main reason for blogging it is because of some of the gestural interface elements in it. The overlay buttons and keypads are the usual fare and I remain unconvinced that jabbing at a floating holographic keypad button would be a useful UI approach, although it always looks good on screen. There are also some controls, like spreading the fingers to enlarge an object and using the fingertips to rotate a virtual control knob, that are already in use in gestural interfaces.
I’m not sure I have yet seen the idea of picking up things like colours and textures on your fingertips and applying them to objects in an existing multitouch interface, though. A few desktop applications use that kind of sticky-mouse idea, and 3D and 2D applications kind of use it with tools and colour/texture chips, but I still haven’t seen it done all that smoothly. Adobe seem to screw this up further and further with every release rather than making it easier. (Does CS really stand for ‘crappy shit’ rather than ‘creative suite’?)
It seems to me that the main issue with a gestural or multitouch interface would be keeping track of the identity of a particular fingertip once it has left the touch panel. But maybe someone has already solved this and it is in use – let me know if you know more.
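For what it’s worth, here is a sketch of the simplest answer I can imagine: greedily match each touch point in a new frame to the nearest point from the previous frame, and only reuse an identity if the jump is plausibly small. Purely illustrative – not how any shipping multitouch stack necessarily does it:

```python
import math

def match_touches(prev, current, max_jump=40.0):
    """prev: {touch_id: (x, y)} from the last frame; current: list of (x, y).
    Returns {touch_id: (x, y)}, giving fresh ids to unmatched points."""
    assigned = {}
    free_ids = dict(prev)                 # previous ids still unclaimed
    next_id = max(prev, default=-1) + 1
    for point in current:
        # Find the closest unclaimed previous touch within range.
        best = None
        for tid, (px, py) in free_ids.items():
            d = math.hypot(point[0] - px, point[1] - py)
            if d <= max_jump and (best is None or d < best[1]):
                best = (tid, d)
        if best is not None:
            tid = best[0]
            del free_ids[tid]
        else:
            tid, next_id = next_id, next_id + 1   # finger (re)appeared
        assigned[tid] = point
    return assigned

frame1 = {0: (10, 10), 1: (200, 200)}
print(match_touches(frame1, [(12, 11), (205, 198), (400, 400)]))
```

The weakness is obvious from the code: once a fingertip lifts off and comes back down, it just gets a new id – nothing here can tell it was the *same* finger, which is exactly the unsolved bit.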
(Thanks to one of my ex-students, Nico Marzian for mailing me the link).
Core77 have just posted an interview and profile I wrote on Dan Saffer and his new book, Designing Gestural Interfaces. Dan talks about his vision for future devices and the way design agencies need to shift to a much more multi-disciplinary way of working if they are to survive.
I have noticed I have been posting a lot of videos recently – I’m not sure if that’s me being lazy or that some things are simply a lot easier to explain when you see them in action (or interact with them).
One interface area that has not really changed a great deal over the years is in video editing and compositing. The two choices are timeline (such as you see in Final Cut, After Effects, etc.) or the kind of patch module used in Shake and other compositing tools. Both of these borrow heavily from their analogue roots (A-Roll/B-Roll film and video editing and optical printers).
If you have ever had to motion track a piece of video in order to glue a layer to a moving object in the video, you’ll know it’s pretty time consuming, even with the best of tools. This demonstration by Dan B Goldman from Adobe Systems shows how much easier this could be with a much more direct interface. I expect we can hope for it to be integrated into Adobe products at some point.
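The core step behind this kind of tracking can be sketched in a few lines: find where a small template patch from one frame reappears in the next by minimising the sum of squared differences (SSD). A toy pure-Python version on tiny greyscale arrays – real trackers, Adobe’s included, are far more sophisticated:

```python
# Toy motion-tracking step: exhaustive SSD template matching.

def ssd(frame, template, top, left):
    """Sum of squared differences between template and frame at (top, left)."""
    total = 0
    for r, row in enumerate(template):
        for c, value in enumerate(row):
            diff = frame[top + r][left + c] - value
            total += diff * diff
    return total

def track_patch(frame, template):
    """Return the (row, col) where the template matches the frame best."""
    th, tw = len(template), len(template[0])
    candidates = [(ssd(frame, template, r, c), (r, c))
                  for r in range(len(frame) - th + 1)
                  for c in range(len(frame[0]) - tw + 1)]
    return min(candidates)[1]

template = [[9, 9],
            [9, 9]]
frame = [[0, 0, 0, 0],
         [0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 0, 0]]
print(track_patch(frame, template))  # the bright patch sits at row 1, col 2
```

Run per frame, that gives you the motion path to glue the layer to; the tedium in current tools is mostly in correcting the matches when they drift, which is where a more direct interface like Goldman’s demo earns its keep.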
If anyone knows what this is all about, please leave a comment and let me know. In the meantime, enjoy the surreal interface.
[UPDATE: Apparently it’s the design portfolio of Dutch Flash designer Coen Grift – nothing like the ‘coffee’ in Holland to inspire some weirdness. The Dutch inspired us to make Antirom because of this.]
In keeping with the seemingly American obsession that the more data one has the better (especially on TV), Sprint have launched a viral campaign called the Now Machine Widget.
Kottke says, “I don’t know what this is or how it works or why Sprint is involved, but man is it fun to just let the data just wash over you.” It’s kind of fascinating, but also a totally overblown data overload and the kind of thing that would be unusable in any practical sense. (I often wonder how traders manage to spread their attention across so many screens. My guess is it is an illusion and that they can’t – it just stops them having to bring different windows to the front.)
This developer from Infusion is showing off some of his modifications to Microsoft’s Surface at I Live To Code. The table has several cameras underneath instead of just one, so that he can affect the ripples and other interactions on the surface without touching it.
Perhaps the most interesting thing about the demo is the “new gesture” for tilting, where he places the palm of his hand on one side of the screen and uses his forefinger and thumb to change the tilt angle. I’ve been trying to think what this is equivalent to, and it feels a bit like adjusting anything on a pedestal or tripod, where you have to hold one part still to move the other. I’m not convinced it’s a gesture that is going to catch on, because the palm-down hand blocks half the screen.
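As a quick illustration of the mechanics, the anchor-plus-pinch tilt could be reduced to an angle computation: the palm fixes a pivot, and the angle from the pivot to the thumb/forefinger midpoint drives the tilt. Entirely my own sketch of how such a gesture might be computed, not Infusion’s code:

```python
import math

def tilt_angle(palm, thumb, forefinger):
    """Angle in degrees from the palm anchor to the pinch midpoint.
    palm, thumb, forefinger are (x, y) touch positions on the table."""
    mx = (thumb[0] + forefinger[0]) / 2
    my = (thumb[1] + forefinger[1]) / 2
    return math.degrees(math.atan2(my - palm[1], mx - palm[0]))

print(round(tilt_angle((0, 0), (10, 10), (10, 10))))  # 45
```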
(Regarding the “Sponsored by Microsoft” link – this is an experiment for Playpen too. It’s a sponsored clip by Unruly Media, who have a pretty good ethics code. They encourage honest opinions and don’t try to be stealth marketers. I’m not entirely sure I want to have a great deal of sponsorship on Playpen, but the clip interested me anyway, so we’ll see. If you absolutely don’t want to give me an 18-cent kickback, you can watch it on YouTube.)