The Apple Watch, skeuomorphism and metaphors

I had a Twitter exchange with John Gruber yesterday in response to his point about the Apple Watch and skeuomorphism:

I don’t think iOS or OS X needed to eschew skeuomorphic textures, but Apple Watch did.

Gruber was referring to Craig Hockenberry’s piece about the Apple Watch’s OLED display, in particular Hockenberry’s argument that the move to flatness was strategic:

I’ve always felt that the flattening of Apple’s user interface that began in iOS 7 was as much a strategic move as an aesthetic one. Our first reaction was to realize that an unadorned interface makes it easier to focus on content.

But with this new display technology, it’s clear that interfaces with fewer pixels have another advantage. A richly detailed button from iOS 6 would need more of that precious juice strapped to our wrists. Never underestimate the long-term benefits of simplification.

My response was that several of the Apple Watch faces are skeuomorphic, especially the Mickey Mouse one, to which Gruber replied “How so? I don’t see any 3D shadows or textures.”

You can read the back and forth that followed at your leisure, but the summary of the arguments is that I believe the dial faces are still screens pretending to be analogue/physical hands and dials (or Mickey Mouse watches) and thus skeuomorphic. Gruber doesn’t believe them to be inherently skeuomorphic.

Clock hands and dials exist because of the clock-making history of cogs, pendulums, springs and dials, the latter of which almost certainly took their form from sundials. Digital versions of them are as skeuomorphic as the fake knobs in screen-based software synthesisers.

Gruber argued that dials are not inherently skeuomorphic since

Analog clock design is useful on screen as any chart or graph. See the definition of ‘analog’

He also pointed out, quite rightly, that mechanical watches can have digital displays, such as the Groundhog Day clock and these (pretty ugly) examples of mechanical digital watches.

My point was not about whether dial faces are useful. They clearly are, since many people are used to reading the time from dial faces and that’s how most of us learn about time as kids.

Dials are useful on digital displays because analogue measurement—continuous rather than stepped, digital units—offers useful visual cues. Phrases like “a quarter of an hour” or “half-past nine” (or even the German “halb zehn”, which means “half of ten”, a.k.a. 9:30) are visual references to quantities in a circle. But it is exactly those references to previous technologies that make dial faces on a screen skeuomorphic, in my view.

Most people don’t use a watch’s analogue nature that much, unless they’re timing something in seconds with a watch that has a sweep hand. In fact, analogue watch faces are not really continuous measuring devices in the strict definition of “analogue”, since the hands move in tiny steps as the ratchets click across the teeth of cogs. Also, you don’t usually stare at your watch for long periods of time, but take glances at it, as Apple makes a point of telling us:

Since wristwatches were invented in the 19th century, people have been glancing at them to check the time. With Apple Watch, this simple, reflexive act allows you to learn so much more. We optimized your favorite apps for the wrist by developing Glances — scannable summaries of the information you seek out most frequently.

The OED lists two definitions of “skeuomorph”:

  1. An ornament or ornamental design on an artefact resulting from the nature of the material used or the method of working it.

  2. An object or feature copying the design of a similar artefact in another material.

Wikipedia’s entry generally sides with the first definition, but the expanded example includes the second. Gruber’s original comment specifically says “textures,” which I have to admit I missed in my response. But the debate led me to think about many of the interesting ideas about interactivity contained within this term.

Skeuomorphism and metaphor are closely related and metaphor is an intrinsic part of interaction design. Arguably, skeuomorphs are just a visual subset of metaphor—plastic that looks like wood, screen-based calendars that look like paper and stitched leather—but sometimes the metaphorical relationships are more complex.

The Digital Crown of the Apple Watch interface is skeuomorphic in a broad sense too. Here I’m not arguing that the material metal of the Digital Crown is different from its forebears, but that “the nature of the material” includes what the interface controls. There are few technical reasons for the Digital Crown being the controlling interface. Apple could have used a non-moving touch sensor on the side, for example. It is a carefully thought-through aesthetic and interaction design decision. It makes sense to our perception and understanding—our mental models—of what a watch is. A crown is part of the watchness of a watch.

I would warrant that a tiny part of our brain has a mental model of the Digital Crown mechanically controlling the Apple Watch display, even though we consciously and intellectually know that is not the case. It’s the same reason we bang the side of our monitor when the computer isn’t working.

This is a subtle interface magic trick that interaction designers pull off over and over again. We think we’re pinching and stretching a picture on a touchscreen, for example, but of course we’re wiping our fingers in a certain pattern across a pane of glass and not actually pinching anything.
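To show how thin that trick really is, here is a minimal UIKit sketch of my own (not anyone’s production code; the class name, the photo asset name and the handler are all hypothetical) of the standard pinch-to-scale interaction. The picture never gets pinched: the gesture recogniser just reports the changing distance between two touch points, and the code turns that ratio into a scale transform.

```swift
import UIKit

// A minimal sketch: the "pinch" is nothing more than two touch points whose
// separation the gesture recogniser tracks. Mapping that ratio onto a scale
// transform is what creates the illusion of stretching the photo itself.
class PhotoViewController: UIViewController {

    // Hypothetical asset name, purely for illustration.
    let photoView = UIImageView(image: UIImage(named: "holiday-photo"))

    override func viewDidLoad() {
        super.viewDidLoad()
        photoView.isUserInteractionEnabled = true
        photoView.frame = view.bounds
        photoView.contentMode = .scaleAspectFit
        view.addSubview(photoView)

        let pinch = UIPinchGestureRecognizer(target: self,
                                             action: #selector(handlePinch(_:)))
        photoView.addGestureRecognizer(pinch)
    }

    @objc private func handlePinch(_ recognizer: UIPinchGestureRecognizer) {
        guard let pinchedView = recognizer.view else { return }
        // Apply the scale reported since the last gesture event...
        pinchedView.transform = pinchedView.transform.scaledBy(x: recognizer.scale,
                                                               y: recognizer.scale)
        // ...then reset, so the next event is relative to the current size.
        recognizer.scale = 1
    }
}
```

Notice how little of that code has anything to do with “pinching”: it is all coordinates and transforms, which is rather the point.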

Interestingly, there are few physical-world equivalents of the pinch and spread actions that I can think of. The two obvious ones are what we do with our bodies and with dough—both things we learn to work with at the youngest of ages, which is probably why the gesture feels so intuitive.

Metaphors tend to become ever more nested and complicated, especially in language, as Lakoff and Johnson argue in detail. Indeed, it is difficult to use language without using metaphors. That last sentence is full of them, for example. Metaphors and language are “tools” that can be “used.” In the next sentence, sentences are “vessels” that can be “filled.” (Once you start thinking this way, you’ll start to go mad trying to use language without them).

When interfaces go digital, albeit with some physical input devices, the boundaries start to collapse. In my PhD, I wrote about this conflation of the metaphorical and actual and used the example of files and folders:

This goes some way to explaining the issues of interface metaphors being half ‘real’ and half metaphorical and why Apple’s Exposé was able to break the desktop metaphor without it jarring. Because operating a computer is both physical and virtual the process gets blurred – at some point in the usage of a system that retains its metaphorical conventions fairly rigorously the ‘desktop’, with its ‘files’ and ‘folders’, ceases to be a metaphor for its users. It is as though the willing suspension of disbelief is not just suspended, but dispensed with. The desktop really is the desktop and our files really are our files and not just metaphorical representations – something that anyone who has experienced a hard drive crash and lost all their data will appreciate. (p. 53-54)

I used Apple’s Exposé back then as an example of what I called an “intentional metaphor.” Exposé breaks the desktop metaphor because I can’t actually make all my papers hover in the air while I choose the one I want and then have them snap back. But it does have a real-world equivalent in the form of spreading everything over a large table or on the floor to make sense of it. The extra magic part of Exposé—the “hovering in the air” part—is what I would really like to be able to do and I understand the metaphorical intention of it.

This is the way that I think Apple’s Digital Crown and the Taptic Engine will also make sense to us. They connect into existing ideas of how we use and interact with things and people, and extend them. Ex-Apple Human Interface Inventor Bret Victor wrote a wonderful rant about this. I see all of this as a form of interactive or intentional skeuomorphism and it will be interesting to see how it expands as designers and developers explore this new realm.

LeapMotion

LeapMotion is a USB device now available for pre-order that “creates a 3D interaction space of 8 cubic feet to precisely interact with and control software on your laptop or desktop computer.” According to the website:

The Leap senses your individual hand and finger movements independently, as well as items like a pen. In fact, it’s 200x more sensitive than existing touch-free products and technologies. It’s the difference between sensing an arm swiping through the air and being able to create a precise digital signature with a fingertip or pen.

The video embedded above shows it off pretty nicely. The device itself is about the size of the power brick that comes (or used to come) with the Mac Minis or the AppleTV. It’s, not coincidentally, similarly designed, so it’s not going to look like some ugly chunk of plastic and LEDs on your desk. This is, I think, not to be underestimated if you are asking people to invest in a new kind of interface that will, indeed, sit on their desk to be stared at all day. People are pretty pernickety about what goes on their desk.

When I say invest in, I’m really talking about time. The device itself is pretty cheap at $69.99. I can see this being a bonanza for people making interactive installations and performative interfaces (which is why I came across it, thanks to Joel Gethin Lewis).

It looks like LeapMotion is responsive and accurate, but there is still the question of holding your hands in front of you all day. With a desktop version, I foresee an elbows-resting-on-the-table-while-wiggling-the-hands mode of usage. Perhaps it’s time to invest in an elbow rest Kickstarter project.

Little Digits – Finger Counting with the iPad

Little Digits (iTunes link) is a new iPad app from Chris O’Shea’s company Cowly Owl described as a “fun educational app that teaches children about numbers by putting a new spin on finger counting.”

Using the iPad multi-touch screen, Little Digits displays number characters by detecting how many fingers you put down. Children can learn to associate the number on the screen with the number of fingers they place down, whilst enjoying the unique characters and animations of the Little Digits world.

There are also games that introduce small addition and subtraction calculations, where you can work out the answer using the same multi-touch finger detection.
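For the technically curious, the finger-detection part of an app like this is pleasantly simple on iOS. Below is a minimal sketch of my own (not Cowly Owl’s code; `FingerCountView` and `onCountChanged` are hypothetical names) showing how a view might count the fingers currently on screen using UIKit’s standard multi-touch handling.

```swift
import UIKit

// A minimal sketch of counting the fingers currently touching a view.
// The containing app could map this count onto an on-screen number character.
class FingerCountView: UIView {

    // Called whenever the number of fingers on screen changes.
    var onCountChanged: ((Int) -> Void)?

    override init(frame: CGRect) {
        super.init(frame: frame)
        isMultipleTouchEnabled = true   // allow more than one finger at a time
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
        isMultipleTouchEnabled = true
    }

    private func reportCount(for event: UIEvent?) {
        // allTouches includes every touch the event knows about;
        // ignore fingers that have already lifted or been cancelled.
        let active = event?.allTouches?.filter {
            $0.phase != .ended && $0.phase != .cancelled
        } ?? []
        onCountChanged?(active.count)
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        reportCount(for: event)
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        reportCount(for: event)
    }

    override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent?) {
        reportCount(for: event)
    }
}
```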

Speaking as someone whose three-year-old regularly grabs my iPad, I can see that she might enjoy this. She loves Nighty Night (as do I).

Interaction 11 Student Competition

All you interaction design students out there, get ready to show us your goods.

This year I’m co-chairing the Interaction 11 Student Competition with Liz Danzico and we want to see you thinking laterally. The competition brings together exceptional and engaged undergraduate and graduate students, showcasing both critical thinking and hands-on experience over the course of the conference. It’s an opportunity to present work in a way that shows rather than tells, and a unique opportunity for students who may be seeking to connect with new colleagues, potential employers, funders, or new networks.

This year’s focus is based on the concept of “Use, not own.” Great interactions can connect people to create opportunities for experiences that outweigh the “joy” of ownership. How can we reduce our environmental footprint by sharing products or services? Students selected by the team of mentors will be invited to the conference, where they’ll compete in the remainder of the competition.

The entry deadline is December 4th, 2010, so head over to the site and get yourself registered.

Oh, and do the sporting thing – reblog and retweet the announcement so that your student colleagues know about it!

Archetypes and Metaphors

There is an interesting piece over at Johnny Holland by Rahul Sen titled Archetypes and Their Use in Mobile UX. It’s probably worth reading it and coming back here, but the introduction gives you an idea of where he’s headed:

“Have you ever needed a user manual to sit on a good chair? Probably not. When we see a good chair, we almost always know exactly what to do, how to use it and what not to do with it. And yet, chairs are made by the thousands, and several challenge these base assumptions to become classics in their own right. The chair is one of the most universally recognized archetypes known to us. In light of recent events in the mobile realm, I believe that the stage is set to probe notions of archetypes in the mobile space.”

As does the last pull quote:

“Thinking in archetypes gives us a unique overview of interaction models and their intrinsic behavior patterns, making it possible to ask interesting what if questions and examine consequences.”

There is a lot to like here and he makes some great observations, but hanging them onto the term “archetype” is problematic. Rahul gives a brief nod to the differences between metaphors and archetypes, but muddies rather than clarifies them. This moment of slippage defeats the whole archetype argument, but if you replace the word archetype with metaphor throughout the piece, then it all makes great sense.

The reason why metaphors are so important to understand in interaction design is precisely because there are very few, if any, archetypes. It’s easy for us as savvy users and interaction designers to presume there are original ideas or symbols universally recognised by all, but there simply aren’t. It’s the reason why so many people don’t ‘get’ interfaces that should be blindingly obvious: they don’t understand the mental model behind them, and thus the interface isn’t an archetype.

Metaphors are useful because they bridge this gap. One thing to note is that metaphors are not “analogies between two objects or ideas, conveyed by the use of one word instead of another,” as Rahul says. Those are similes. I’m not saying this to be grammatically pedantic, but because there is an important distinction. A metaphor isn’t saying “it is like”, but “it is”. It helps you understand a concept you don’t know by expressing it in the form of a concept you do know, not just saying it’s like the other one. Life is a journey; it’s not that life is like a journey.

An interaction design simile would say, “this file on the desktop is like a real paper document on your desk”. A metaphor says, “this file on your desktop (in fact, the icon of it) is a real file”. The distinction matters because it changes how we interact with those things and the mental models we form. It makes a difference to how much we can stretch and/or break those metaphors. Delete your most precious file and decide whether it was like a file or really was one.

Lakoff and Johnson’s work on metaphors is essential to bring in here, because they demonstrate that our entire language and understanding of our experience in the world is based on embodied metaphors. When you start to pick apart language, you realise it’s all metaphors (such as “pick apart” – the metaphor being that language is a thing made up of other things that you can pull apart).

They also talk about how metaphors collapse into natural language without us thinking about them anymore, but they’re still metaphors. When we say we’re close to someone, we learn this metaphor from actually being physically close to someone (usually our mothers). Physical and emotional closeness are the same thing at that point. Later, we use the metaphor of being close to someone to express emotional closeness, but it becomes so commonplace and universally understood (in most languages) that we cease to perceive the metaphor anymore.

On the other hand, poetic metaphors, such as “the sun was a fiery eye in the sky”, are designed to make us perceive the metaphor and appreciate its discord or imagery. Most interface design is still on the poetry side of things, screaming out the metaphors, which is why they are far from being archetypes.

The interesting thing about multitouch devices is that the interface seems to disappear. You feel like you are just interacting with the content in many cases, such as scaling or moving around digital photos that have never had a physical form. The interface is still there, of course. You’re not really stretching or pinching anything, you’re just making those movements with your fingers over a piece of glass, but the direct manipulation feeling that it affords tricks us enough. This still happens to a lesser extent in desktop metaphors – it really does feel like you have lost a file when it gets accidentally deleted, but actually it was never really a file. It was a bunch of pixels on the screen pretending to look like a file, a visual reference for a scattered set of magnetic impulses on a drive. Like theatre, we willingly suspend our disbelief in order to believe in the metaphor because it’s easier that way.

The strength of Rahul’s piece is in the various examples of something-centric “archetypes” that he gives and the “what if?” questions he asks about them. They’re insightful, but they’re just not archetypes by the definition he sets out. Ironically, having pointed out in a note right at the start of the article that he is not referring to Jungian archetypes, I think Rahul’s examples are much more closely related to Jung’s understanding of archetypes than to the other definitions he refers to.

Photosketch

PhotoSketch: Internet Image Montage from Tao Chen on Vimeo.

Photosketch: Internet Image Montage provides a simple way to make image composites by doodling a picture, adding labels and then letting the engine scour the Internet for suitable photos. Once it has found the most appropriate matches, it composites them together.

I can see lots of awful e-cards and PowerPoint presentations coming out of this, but it would be very useful for putting together prototype sketches for installations and services, and it is a pretty remarkable bit of technology.

(Via Richard Banks)

The Little Man in the Box

Hi from Multitouch Barcelona on Vimeo.

All of us anthropomorphise our machines, perhaps none more so than the car and the computer. Hi, A Real Human Interface from Multitouch Barcelona (an interaction design group that explores natural communication between people and technology) is a charming example of how we think about computers and interfaces from a human perspective.

Whatever we might know about the technology and how it works, we talk about the “server having some trouble” or our computers “having a bad day” or “going crazy”. We’re so biologically programmed to interact with other beings that it’s very hard not to think of the little man in the box.

(Via @LukePittar and all the little people who run messages back and forth in the intertubes.)

Schematic and Public Multitouch Social Interaction

Touchwall Demo from Joel on Vimeo.

Joel Johnson’s exclusive (on Vimeo?) video and interview with the folks at Schematic about their new touchwall shows them dealing with some interesting public multitouch issues. I hate the marketing crap that goes with it and the inevitable Minority Report reference (please, stop making that reference, multitouch people), but the idea that what they’re really interested in is “the social interaction in front of the screen” is spot on.

Apart from the fun of playing with what looks like a giant iPhone screen, the key thing about large multitouch screens is that more than one person can use them at once. If such a wall just replicates a bank of individual screens, it’s missing the point of having one big one. Connecting people together in social play and interaction can be really engaging and it will be interesting to see what developers and designers explore in this area.

The other issue that they talk about in the video is how to solve the identity problem on such a device so that you don’t have to walk up to it (or “into it” as one of the interviewees says) and type in a log-in. RFID tags come to the rescue, which means the wall knows who you are as soon as you’re close enough to use it.

If we’re going to make comparisons to Minority Report, that screen was an individual experience operated alone by Cruise’s character. By contrast, a multi-user multitouch screen feels much more Star Trek or James Bond to me: collaborative workspaces with an added layer of data feeds.

Interaction Forum ’09


I’m going to be giving a talk over at Interaction Forum ’09 at the Design School in Hildesheim next week (Tuesday 26th). If anyone is in that neck of the woods, come and say hello – maybe send me a tweet and we can catch up.

I’m going to be talking about play as a guiding principle for interactivity, but I’m much more looking forward to listening to the other two speakers, Jona Piehl from Land Design Studio and Mark Hauenstein from AllOfUs.

New magneticNorth web site


Great to see magneticNorth’s new website live. Brendan gave me a sneak peek of it yesterday and I love it.

The navigation is very playful and intuitive. Actually it is intuitive because it is playful. You basically scribble a doodle and this makes a mask into which a piece from their portfolio opens. You can then click on that item to view more info about the work or simply make another scribble to look at a new piece. The navigation across the top is a history that you can move back and forth through or reset.

What is nice about the whole thing is that you just don’t have to worry about doing anything ‘right’. You can scribble any shape, and you can scribble over the top of other scribbles, and everything automagically sorts itself out.

Go and have a play yourself and tell me what you think.

[UPDATE: Quite some debate started about this, which I’m very happy to be part of. I wrote a long response, which is almost a post in itself, but decided to leave it in the comments.]