Jonathan Harris on the Creative Review Blog


I seem to have been writing about Jonathan Harris rather a lot recently. Following the piece on Flash on the Beach I wrote in Creative Review in November, an interview I did with Harris has just been published on the Creative Review blog.

He had some interesting things to say about the nature of software and blogging in terms of human experience – surprising, perhaps, given his use of both of those technologies in We Feel Fine. We were discussing the nature of blogging and its lack of emotional context on the micro level and I felt that the snippets of blog posts in We Feel Fine reminded me of the beauty of found objects and notes that are usually removed from their context. Harris replied:

“The reason why that touches you is because micro is beautifully done. A found object is powerful because you found it in the gutter. If you saw a digital representation of the picture with the text in 12pt Times New Roman it wouldn’t have the same nostalgia, it would be like a blog post.”

Whilst I was at my parents’ over Christmas, I dug through all my old photos, and it was a very different feeling from browsing my Lightroom archive. I wonder what kind of experience it will be for my grandchildren, or whether I will have generated so much digital data that they won’t even bother.

It is an issue that really hasn’t been dealt with much, but is going to be a future headache and/or interaction and user experience challenge. It is an issue much like wondering what will happen to my online presences in the event of my death. For some reason I have been thinking about this quite a bit recently – I have some ideas for potential solutions, but they would need funding and security expertise that I don’t have, should anyone out there be interested in taking this further.

Interaction Design for Behavioural Change

Interaction design is all about changing people’s behaviour. Without the action > reaction part, there is no interaction. Whether you click one button instead of another or stop to play with an interactive shop window, the art of interaction design is about understanding that transaction. (And it’s the subject of my, hopefully soon finished, PhD. Sigh).

Taken to a broader context, these principles have been successfully applied in areas such as service design and sustainable design. It is something we tried to look at in the Visualising Issues in Pharmacy project too.

But what about economics? Robert Fabricant from Frog Design has written an insightful piece on Frog’s Design Mind blog called Design For Impulse. He makes a good point about interaction design education too:

“If I was starting an Interaction Design program (like Liz Danzico at SVA) or taking one over (like David Malouf at SCAD) the one academic subject I would be sure to cover is Behavioral Economics.”

He then goes on to quote David Leonhardt’s New York Times article about behavioural economics and the Obama administration’s interest in it:

“Behavioral economics sprang up about three decades ago as a radical critique of the standard assumption that human beings behaved in economically rational ways. The behavioralists, as they’re known, pointed out that this assumption was ridiculous.”

To explain behavioural economics more simply, I’ll quote the next paragraph in the article:

“Would-be weight losers pay $100 a month to belong to a gym they rarely visit. Borrowers get fooled into taking out a loan with an appealing teaser rate. Patients fail to follow even a basic regimen of prescribed drugs — a failure that can leave them with serious medical complications and Medicare with big hospital bills.”

Essentially, we all do things that make no rational or logical sense, even if we say we wouldn’t. And we’re especially irrational with money – who hasn’t shopped around for a tiny saving on groceries and then stopped to drink an over-priced coffee afterwards, negating the savings? (Dan Ariely’s book, Predictably Irrational is a good starting point, apparently. I haven’t read it yet.)

As the world we interact with becomes ever more interconnected, and as we increasingly need to understand everything from the economics of what we are designing through to the life-cycles of everything we use, understanding this psychology becomes essential. For interaction designers (and, I would add, some product designers and architects), this kind of thinking is, or should be, built into what we do. As Fabricant says:

“Outputs, Outcomes and Impacts are VERY different things and clients often confuse the two. As an Interaction Designer you better know the difference.”

It seems to me that Obama’s administration understands the psychology of interconnectedness very well. It will be interesting to see whether they can put it to work on such a large, messy problem.

Out with the economists, in with the interaction designers I say!

(Once again, thanks to the ever-excellent IxDA discussion list for the heads up).

Interactive Video Object Manipulation

Interactive Video Object Manipulation from Dan Goldman on Vimeo.

I have noticed I have been posting a lot of videos recently – I’m not sure if that’s me being lazy or that some things are simply a lot easier to explain when you see them in action (or interact with them).

One interface area that has not really changed a great deal over the years is in video editing and compositing. The two choices are timeline (such as you see in Final Cut, After Effects, etc.) or the kind of patch module used in Shake and other compositing tools. Both of these borrow heavily from their analogue roots (A-Roll/B-Roll film and video editing and optical printers).

If you have ever had to motion track a piece of video in order to glue a layer to a moving object in the video, you’ll know it’s pretty time consuming, even with the best of tools. This demonstration by Dan B Goldman from Adobe Systems shows how much easier this could be with a much more direct interface. I expect we can hope for it to be integrated into Adobe products at some point.
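To get a feel for why traditional tracking is so fiddly, here is a minimal sketch of the brute-force template matching that conventional trackers are built on. This is emphatically not Goldman’s method (his paper describes a far more sophisticated system); the `track_template` function, the synthetic frames and the fixed search radius are all illustrative assumptions:

```python
import numpy as np

def track_template(frames, template, start_xy, search=5):
    """Follow a small image patch from frame to frame by exhaustive
    sum-of-squared-differences search in a window around its last
    known position. Returns one (x, y) top-left position per frame."""
    th, tw = template.shape
    positions = [start_xy]
    for frame in frames[1:]:
        px, py = positions[-1]
        best_err, best_xy = np.inf, (px, py)
        # Try every offset within the search radius and keep the
        # position where the patch looks most like the template.
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                x, y = px + dx, py + dy
                if x < 0 or y < 0 or y + th > frame.shape[0] or x + tw > frame.shape[1]:
                    continue
                patch = frame[y:y + th, x:x + tw]
                err = np.sum((patch.astype(float) - template.astype(float)) ** 2)
                if err < best_err:
                    best_err, best_xy = err, (x, y)
        positions.append(best_xy)
    return positions

# A toy example: a bright 4x4 square drifting across three blank frames.
frames = [np.zeros((20, 20)) for _ in range(3)]
for f, (x, y) in zip(frames, [(3, 3), (4, 3), (5, 4)]):
    f[y:y + 4, x:x + 4] = 1.0
template = frames[0][3:7, 3:7].copy()
print(track_template(frames, template, (3, 3)))  # → [(3, 3), (4, 3), (5, 4)]
```

Even this toy version hints at the problems real footage brings: lighting changes, occlusion and motion blur all break the “patch looks the same” assumption, which is why a direct-manipulation interface on top of robust tracking is such a step forward.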

If you want to get technical, a PDF of Dan’s research paper is available on his site.

(Via Designing For

Line-Drawings, Cameras and New Videogames

Karl reminded me of two new games for the Playstation that depart from the normal 3D extravaganza. The first is another EyeToy game called EyePet. Basically you draw with a special pen and your doodles become 3D and part of the mixed-reality world of the game and your virtual ‘pet’.

EyePet Hd from Nkio on Vimeo.

The second is Echochrome, which seems to be a bit like Portal, except that it is in a plain, wireframe and stickman style:

Echochrome HD Gameplay from Banzaiaap on Vimeo.

It’s very encouraging to see this trend towards games that are designed from a point of view of ingenuity rather than pure 3D rendering power. There’s nothing wrong with full-on 3D games rendered in luscious detail, but I don’t feel games as a medium progress much when that’s the only focus.

There is little difference between the basic gameplay of Wolfenstein 3D:


and Call of Duty:


Apart from the number of pixels you are shooting at, of course.

Children Playing Video Games


The NY Times web site has a great video of children playing videogames from photographer and video artist Robbie Cooper (you can watch a higher quality original plus stills on his site).

In 2009 he will be teaming up with the Media Centre at Bournemouth University as part of their ‘War and Leisure’ project. They will then analyse the footage using Paul Ekman’s Facial Action Coding System (FACS). (I didn’t realise that Ekman had published so many books with all the images from his research).

I don’t get the feeling that Cooper is judging gamers or videogames either way, more that he is fascinated by children as they play them, particularly war games, because war is outside (most) children’s daily experience.

His blog is also worth a look through; there are some great finds there, including his responses to the comments about Immersion.

Should you feel the need for the antidote, I can recommend Steven Johnson’s Everything Bad Is Good for You: How Today’s Popular Culture Is Actually Making Us Smarter.

Humans Aren’t So Bad After All

Fifty People, One Question: New York from Crush & Lovely on Vimeo.

I twittered about this the other day and I know it’s been doing the rounds of the interweb, but wanted to post about it properly.

The film has nothing to do with interactivity in the sense that I normally write about it here, but has everything to do with interacting with people. It’s a project by Crush & Lovely and Deltree called Fifty People, One Question.

When I first watched this, the attacks in Mumbai had just happened and I was in a terrible mood, thinking about how awful people can be to each other. Then I watched this video and remembered just how wonderful people can be too. It made my day. I just watched it again, and it made my day again.

(It’s also a great example of indie filmmaking: 80 hours of editing and depth-of-field work with an HV20 video camera. More on the set-up and process over at Deltree’s blog).

Robots Ain’t Got No Body

Interaction with robots is the out-there end of interaction design’s spectrum. Far beyond just designing an interface on a screen, you need to design a whole set of facial expressions. That is, if you are trying to make your robot look human.

The video above (sorry about the Reuters ad in front) shows just how difficult – and perhaps pointless – that approach is.

The project is led by Peter Jaeckel from the Bristol Robotics Laboratory in an attempt to directly tackle Masahiro Mori’s theory of the Uncanny Valley. The Uncanny Valley theory states that the more humanlike a robot appears, the more we empathise with it, but only up to a point: as it comes very close to being humanlike, we start to notice the imperfections, and these end up making it repellent again.

Apple’s touches to OS X, such as the way the log-in box shakes, like someone shaking their head, when you enter the wrong password, or the way the Macbook and iMac power lights ‘breathe’ when in sleep mode, are examples of how those human touches can create a sense of empathy. But I can’t help feeling the body movements are sometimes more important than the face, and that the better way to go is more cartoonish and exaggerated, such as Domo from Rodney Brooks’ Robotics Lab at MIT.

Jules, the robot from the Bristol lab, was created by Hanson Robotics and there are several clips of him (I want to write ‘it’, but that feels wrong somehow – there must be some empathy there) on YouTube, including quite an unsettling one where he ponders sexuality.

Jaeckel is trying to teach Jules to mimic human expressions better, but to me it still looks like Jules is either a serial killer, desperately needs to take a shit, or both. Either way, the empathy from my side is lacking. It doesn’t help that the electronic guts at the back of his head are hanging out.


Yet Domo’s big bug eyes already have me thinking he’s cute, and the tender way he handles objects makes him seem much more real, or at least much more empathy inducing. (Or maybe it’s just the old comedy banana-as-telephone routine).

When he arrived at MIT, Brooks shook up the field of AI by showing that simple rules embodied in a robot that could learn appeared much more intelligent than a computer programmed to think through every logical step. I’m with him and Lakoff and Johnson on this one: you can’t understand the world as if the mind is a separate entity. Cognition is about embodied experience, and interaction designers need to remember that, even if it is just typing and mouse movements.

This concentration on the robot face is, once again, a focus on the head as the centre of experience. But anyone who has drawn a stickman fight knows it’s the body that counts.