Interaction with robots is the out-there end of interaction design’s spectrum. Far beyond just designing an interface on a screen, you need to design a whole set of facial expressions. That is, if you are trying to make your robot look human.

The video above (sorry about the Reuters ad in front) shows just how difficult – and perhaps pointless – that approach is.

The project is led by Peter Jaeckel from the Bristol Robotics Laboratory in an attempt to directly tackle Masahiro Mori’s theory of the Uncanny Valley. The theory states that the more humanlike the appearance of a robot, the more we empathise with it, up to a point. Past that point, the closer it comes to being fully human, the more we notice the imperfections, and the robot becomes repellent.

Apple’s touches to OS X, such as the way the log-in box shakes like someone shaking their head when you enter the wrong password, or the way the MacBook and iMac power lights ‘breathe’ in sleep mode, are examples of how those human touches can create a sense of empathy. But I can’t help feeling that body movements sometimes matter more than the face, and that the better way to go is more cartoonish and exaggerated, such as Domo from Rodney Brooks’s robotics lab at MIT.
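As an aside on that ‘breathing’ effect: it isn’t a simple on-off blink but a smooth swell and fade, which you can approximate with a raised cosine. A minimal Python sketch of the idea, assuming a five-second cycle (roughly a resting breathing rate; Apple’s actual hardware timing is its own):

```python
import math
import time

def breathing_brightness(t, period=5.0):
    """Brightness in [0, 1] for a sleep-light style 'breathing' pulse.

    A raised cosine gives the smooth swell and fade of a resting breath.
    The 5-second period is an assumption, roughly 12 breaths per minute.
    """
    return 0.5 * (1 - math.cos(2 * math.pi * t / period))

# Crude terminal visualisation of one breath cycle.
if __name__ == "__main__":
    for step in range(50):
        level = breathing_brightness(step * 0.1)
        print("#" * int(level * 40))
        time.sleep(0.1)
```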

Jules, the robot from the Bristol lab, was created by Hanson Robotics, and there are several clips of him (I want to write ‘it’, but that feels wrong somehow - there must be some empathy there) on YouTube, including quite an unsettling one where he ponders sexuality.

Jaeckel is trying to teach Jules to mimic human expressions better, but to me Jules still looks like either a serial killer, someone who desperately needs to take a shit, or both. Either way, the empathy is lacking on my side. It doesn’t help that the electronic guts at the back of his head are hanging out.

[Image: domo_banana.gif]

Yet Domo’s big bug eyes already have me thinking he’s cute, and the tender way he handles objects makes him seem much more real, or at least much more empathy-inducing. (Or maybe it’s just the old banana-as-telephone comedy routine.)

When he arrived at MIT, Brooks shook up the field of AI by showing that simple rules embodied in a robot that could learn appeared far more intelligent than a computer programmed to think through every logical step. I’m with him and Lakoff and Johnson on this one: you can’t understand the world as if the mind were a separate entity. Cognition is about embodied experience, and interaction designers need to remember that, even if it is just typing and mouse movements.
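To make that concrete, Brooks’s approach is often illustrated with his subsumption architecture, where simple reactive layers override one another rather than a central planner reasoning through every step. A toy sketch of the idea in Python, with hypothetical sensor readings and behaviours, not Brooks’s actual code:

```python
# A toy subsumption-style controller: higher layers subsume lower ones.
# Sensor names and behaviours are hypothetical, for illustration only.

def avoid(sensors):
    """Highest priority: back away from anything too close."""
    if sensors["distance"] < 0.3:
        return "reverse"
    return None

def follow_light(sensors):
    """Middle priority: steer towards the brighter side."""
    if sensors["light_left"] > sensors["light_right"]:
        return "turn_left"
    if sensors["light_right"] > sensors["light_left"]:
        return "turn_right"
    return None

def wander(sensors):
    """Lowest priority: default behaviour when nothing else fires."""
    return "forward"

LAYERS = [avoid, follow_light, wander]  # ordered highest to lowest

def act(sensors):
    """The first layer that produces an action wins; no central planner."""
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action

print(act({"distance": 0.2, "light_left": 0.5, "light_right": 0.7}))
# -> 'reverse': the avoid layer subsumes everything else
```

No layer models the world or plans ahead, yet the combined behaviour looks purposeful, which is exactly the point about embodiment.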

What seems to be going on with this concentration on the robot’s face is, once again, a focus on the head as the centre of experience. But anyone who has drawn a stickman fight knows it’s the body that counts.
