I had the pleasure of chatting with 31Volts’ Marc Fonteijn on the Service Design Show the other day. We talked about the possible boundaries of service design and its fractal nature, and I had a complete brain freeze in the middle of talking about feasible, viable and desirable. Here’s the resulting interview:
Having finally relocated to a permanent address in Sydney and re-docking with government and utilities, I’ve been experiencing the whole gamut of customer services. There are a whole host of things to register for and the way companies go about it is different every time.
The good news is that most of this is much better. I first got to Australia in 1999 and left in 2006 and I have many memories of having to go to government offices in person or being on hold to utility companies for ages. But there is still a lot of work to do.
Several companies have adopted the post-registration follow-up strategy. I can just see it as a sticky note touchpoint moment on some service or CX designer’s customer journey. The problem is that many companies still have a view of the power relationship firmly placed in their camp – it’s still inside out. Here is what happened when my energy company, AGL, called me a month into my contract with them. At least I assume it really was AGL:
Random caller on my mobile: “Hi, this is X from AGL, am I speaking to Mr Andrew Polaine?”
Me: “Er, yes.”
AGL: “Great. So I just wanted to welcome you to AGL and check that everything was set up on your account the way you want it.”
At this point I’m thinking, it’s a bit late, but one billing cycle in, so I understand why. And it’s a nice touchpoint so far. Then we hit an impasse:
AGL: “Before I go any further, I need to confirm some security details. Can you tell me your street number and name or give me your date of birth?”
Me: “Sure. But you just called me so I need to make sure you are actually from AGL. Can you tell me the last three digits of my account number?”
AGL: “I’m afraid I can’t do that until you confirm your account details.”
Me: “But I don’t know who you are. Do you not have any way to prove you are from AGL?”
AGL: “I’m sorry, I can’t give you any details until I confirm you for security purposes. But I understand if you are uncomfortable with this, so you can just give us a call anytime.”
The call centre agent was perfectly pleasant, but she was put in an impossible situation by policy and hamstrung by her script. The call turned what was meant to be a pleasant, proactive touchpoint into work for me – now I had to call them back. It also goes against the mental model of these kinds of interactions that other services, such as banks, have built in our heads: don’t give out your details to random callers.
This approach is evidence of inside-out thinking, not customer centricity. The policy is probably “on all calls customers must identify themselves,” but the real-world equivalent of my call was someone ringing my doorbell and asking me to prove I lived there when I answered the door.
Thinking through and acting out those kinds of interactions as if they were in-person and personal relationships is a simple way to get them right. In this case, AGL could have come up with a way to do a reverse ID check and even communicated this when I first signed up so I knew what to expect. It’s not a huge transgression, but multiple moments like that add up to a choppy experience. Thankfully AGL have been pretty good so far.
Just a quick note to say I’m heading to Melbourne for UX Australia 2016 where I’ll be running my Design a Service in Six Hours workshop and also doing a presentation called Design to the Power of Ten looking at the fractal nature of service design.
The workshop is sold out (thanks everyone who bought a ticket!) but the talks are all open to conference-goers. Looking forward to seeing you there. Please come and say hello!
There’s an interesting short piece over at Sustainable Brands asking whether we need a new kind of CEO – a Circular Economy Officer. They interviewed my brother who makes a good case for industrial designers:
Matt Polaine, former circular economy research lead at BT, says a key remit of any circular economy role should be to understand materials flows in both directions — upstream and downstream of the value chain. Such a function requires the ability to tap into different skill sets: design, procurement, compliance, product innovation, and insurance/risk expertise to name a few. Because of this, Polaine believes the skills of an industrial engineer stand out from the rest. “This mindset has to understand the materials, the way the product is manufactured, used, the user interface/service design, and the end-of-life aspect. They are also clear about aesthetics, the beauty of the product and experience in use. For the circular economy to flourish, the customer experience must work very well and promote advocacy.”
It’s clear how this thinking connects with service design’s aim to break down silos and embed joined-up thinking within organizations, while ultimately focusing on a superb end experience for the customer or user. Without this last aspect, all the great technology or sustainable solutions in the world are for nought if customers just use something less circular but with a better experience. If we’re asking people to sacrifice something or change behaviours, we need to offer them something better in return.
Photo: Steve Jurvetson
When Google’s self-driving Lexus cut off Delphi Automotive’s self-driving Audi, forcing it to take “appropriate action” by aborting a lane change, the near miss between them was reported in terms of the technology and liability. With the exception of Reid Hoffman’s thoughtful piece, Driving in the Networked Age, the brand and service experience of driving has been largely ignored in the public discourse.
Neville Anthony Stanton’s post on The Conversation led to a Twitter conversation between myself, Dan Hill and Tom Coates questioning Stanton’s rather dry account of man versus machine. Stanton raises the questions of responsibility and insurance and how humans will never rival a machine’s ability to drive, but that seems to ignore the history of driving and, indeed, other forms of transport before that.
The automotive industry has spent decades and a fortune on shaping the brand experience of driving and that’s not going to go away overnight, nor are those manufacturers going to want to lose control of it as we lose control of our cars.
Driving styles are algorithms
The Google-Delphi near-miss was really about a clash of algorithms.
That the Google car was a Lexus and the Delphi car an Audi might be superficially irrelevant, but people drive Audis, BMWs, Porsches, Volvos, Hondas, Toyota Camrys or (the poorly pluralised) Lexuses for a certain kind of driving experience and because they represent a certain kind of personality. As manufacturers evolve their own self-driving cars, can we expect these characteristics to form part of their algorithms? For example, might we expect a Camry to drive perfectly within the speed limit and never cut anyone up? Or a Porsche or Audi TT to have a sports mode, heavy on the acceleration and uncomfortably fast around the corners, since computers don’t lose their nerve and slow down? Might BMWs live up to their reputation and be programmed to tailgate the cars in front while flashing their lights and being sure to cut in at the front of filter lanes?
Clichés for sure, but long-standing brand experience clichés. Witness Toyota desperately trying to change their boring brand with their “bold new” 2015 Super Bowl ad and BMW trying to shed their petrolhead image with their i3 2015 #hellofuture Super Bowl ad, when the honest reality of BMW is more like their “Adrenaline” ad.
If all self-driving cars are programmed to be the perfect, law-abiding driver, what is the point of owning one brand of car over another? Every car is, functionally, the same – it’s a box on wheels that gets you from A to B carrying more or fewer people or stuff over smooth or rough terrain depending on its class. The experience is, of course, a key differentiator. Is it sporty or sedate? Can you hear and feel the engine or is the ride smooth and silent? Is it leather luxury or can my kids eat chips in the back?
As we know from smartphones, tablets, computers and operating systems, all of which are functionally very similar, this is where UX, service design, product design and computer science blend together to make the difference for end users.
Navigation algorithms are brand experiences
The battle between in-car navigation systems and smartphones has largely been won by smartphones. The various integrations, such as CarPlay, are the supposed death knell of car manufacturers’ own systems, but self-driving cars might take back some ground here, unless an open set of self-driving car protocols and APIs allows smartphone manufacturers and developers to hook into those systems.
Self-driving cars need their own navigation by default. Will we see traffic jams of one brand of car, as all their systems re-route to the same roads? Will we see certain brands make gains in the market because their navigation is superior to another?
Audi, BMW and Daimler are buying Nokia’s mapping service, Here, precisely because of this issue, writes William Boston in the WSJ:
The car makers feared that Nokia Here’s technology—the most advanced digital map of the world’s major road networks—could fall into the hands of Google Inc., Uber Technologies Inc. or Apple Inc. That would put auto makers at risk of losing control of information systems inside the car that are vital to self-driving cars and future automotive safety systems.
My experience with Audis, for example, has been one of decent cars with entertainment and information systems that are two decades behind. For the most part, in-car information systems are regularly disappointing, if not downright confusing. An argument for cars trailing behind current UX standards used to be that people don’t change their cars as often as their mobile phones. Your 15-year-old Camry is the driving equivalent of a Nokia 8250.
Many people lease cars in a three-year cycle, however, and whenever I rent a new car, it is always shocking how poor the UX of the dashboard is. It feels like the pre-iPhone days, when hardware was produced with zero integration with the shabby software (I’m looking at you, Sony Ericsson).
Self-driving cars even out many hardware differences of the car itself, leaving the service and user experience as the paramount reason for choosing one over another.
What will “choosing” a car mean in the future? Right now, manufacturers are still obsessed with selling millions of units. “Service” is something that might happen when you buy your car if you’re lucky and when you take it in to have the oil changed. But the future of cars will be about customers choosing a particular service experience, not owning a chunk of steel and plastic.
Customers will pay to have access to a particular fleet. Will this take the Uber model with limousines and everyday cars? Will I go with a Google car because their routing is better and it’s free, or will I pay extra for an Apple car because of the privacy? Or will this fall along the current brand lines – taking a Volvo for my family trip to be safe, but a Lexus to a business meeting?
If I do own my own self-driving car, can I loan it out to the fleet when I’m at work in return for credit for my own transport elsewhere? Or will owning a car cease to be a positive status symbol and take on a negative connotation, like owning an old mobile phone or a stack of CDs?
For many contemporary service experiences, such as banking, insurance, healthcare, communications and cloud services, trust is paramount to the service offering. You need to know your insurance company isn’t going to let you down at the worst moment, your bank account isn’t going to get hacked, that your healthcare services will make you better, not kill you, and that your communications and cloud services providers will not sell your data or pass it on to government snoopers.
Now combine all that with the trust you have in your car not failing in some way with life-threatening consequences. Trust is fragile: it takes a long time to build up and is easily broken. When a component fails on a car, the failure is visible and the cars can be recalled. A software glitch or hack across a whole fleet of cars while they are on the road is an invisible horror about to unfold simultaneously.
Privacy, hacking and the social divide
Hoffman argues that self-driving cars have the ability to democratise driving even further:
[A]utonomous vehicles won’t curtail personal freedom – they’ll amplify it. Autonomous vehicles will extend the convenience of individualized driving to people who aren’t currently able to obtain driving licenses –senior citizens, people with various disabilities, young people. They will let everyone pursue a greater range of activities while they’re in transit. They’ll speed up transit times and help people forsake transit altogether. (I.E., your car will run errands for you while you stay at home.) They’ll reduce the need to actually own a car, and thus release people from the economic obligations of that.
This may be true, but they may well serve to create even deeper social divisions. If you’re stuck in a traffic jam in a Mercedes today, you’re in the same position as the person sitting in front of you in their beaten up old Ford. The prospect of paying extra for swifter transit – a kind of non net-neutrality for roads – could turn taking a car journey into one big airport experience. Those with the expensive tickets get to go first, go faster, have less hassle, while the rest of us sweat and swear.
Most likely the divide will be about who is prepared to give up their privacy for the sake of free or near-free travel. You can turn your phone off or enable airplane mode if you want to travel somewhere without being traced by your cellphone signal, but you can’t turn off a self-driving car’s navigation system, unless you can take over driving manually.
We can be sure that tech companies and government agencies are looking forward to the delicious combination of credit card data, realtime audio, camera and navigation data feeds that all of us will be transmitting every day. The wealthy might pay to take an anonymised journey, while the poor have to put up with being tracked and collated. The wealthy will have the evidence to counter a police officer’s version of events, while the poor will be dragged out of their cars and arrested.
If it is already possible to remotely hack a car on the highway and send it off the road, imagine how much easier it will be once those cars are self-driving. Law enforcement officers can, literally, pull you over and detain you by locking you in your car. Hackers will no doubt come up with ways to own a Google car and tune it to their own tastes. Expect to see a side industry of third-party services and applications, such as car virus protection and journey history deletion or scrambling. What will be the equivalent of using a VPN and Tor browser for cars?
The intersections of different industries and regulations need careful consideration. Hoffman writes:
Even in cases of non-emergency, a high degree of transparency is necessary. Every time a passenger indicates a desired destination, an autonomous vehicle must make choices about the optimal route. Presumably, it will do so based on current traffic conditions, as Waze does now. But it’s also possible that the companies designing these cars could choose routes for other reasons. For example, advertisers might pay companies to route passengers past their businesses. Passengers with preferred status could receive access to faster streets while others are routed to slower, higher-volume streets.
In some cases, passengers may accept these decisions. You might pay less or receive some other perk if you agree to take the slow route home, or pay more to take the fast one. On a similar note, we will probably see the introduction of literal “marketing vehicles,” i.e., cars that take you to your destination for free as long as you complete a survey or watch a promotional video of some kind.
Because the various algorithms that govern car behavior will encompass issues of liability, risk, and morality, no one company should be allowed to simply make up their own rules. Instead, we’ll need to establish uniform rules and standards through public processes. In the same way that we currently have regulations involving emissions standards, safety equipment, and other aspects of car manufacture, we’ll also have regulations that establish the parameters for how the necessary algorithms operate.
Hoffman appears surprisingly optimistic about this, but I am less so. Politicians and manufacturers do not have a great track record of considering the nuances of complicated futures and agreeing on a unified plan of action. Witness everything from USB connectors to tackling climate change.
Six different USB connectors – Photo: Viljo Viitanen
Too focused on catching up with the present, car manufacturers seem rather complacent about the future. Andy Greenberg’s piece on hacking cars in WIRED this week demonstrated the bland corporate-speak response to Charlie Miller’s and Chris Valasek’s research into hacking and taking control of cars remotely:
When WIRED told Infiniti that at least one of Miller and Valasek’s warnings had been borne out, the company responded in a statement that its engineers “look forward to the findings of this [new] study” and will “continue to integrate security features into our vehicles to protect against cyberattacks.” Cadillac emphasized in a written statement that the company has released a new Escalade since Miller and Valasek’s last study, but that cybersecurity is “an emerging area in which we are devoting more resources and tools,” including the recent hire of a chief product cybersecurity officer.
To my ears, this sounds like the PR departments of car manufacturers who are absolutely behind the curve on this. The good news is that this all provides an opportunity for designers to move beyond car styling and engage in the entire experience of mobility as a service, of which the car is just one component. The opportunities for innovation and developing new and useful experiences and services are tremendous. Let’s hope the car manufacturers see the strategic benefits here and don’t just try to cling on to their current business models, which are sure to go the way of the horse and cart.
I have some big news to announce.
After six years teaching and researching service design at the Hochschule Luzern I will be leaving my post there at the end of August. It has been an informative and formative time for me.
Thanks to all my former students, on whom I have inflicted my prototypes of how to teach service design, and to my colleagues, who have put up with me complaining about our own, internal, services. They have all given me some great insights and experiences over the years.
The biggest part of my news is that my family and I will be returning to Sydney, Australia in early January 2016. My new position will be as a Service Design Director for Fjord’s Service Design Academy in their Sydney office. The Sydney office is headed up by Bronwyn van der Merwe, the Service Design Director there.
As well as working under Fjord’s Group Director of Organizational Evolution, Shelley Evenson, helping to shape and teach service design and innovation within the group in Australia and globally, I’ll also be involved in providing strategic input, mentoring and guidance on client engagements. I really couldn’t ask for a more suitable job description to match my skills.
Fjord was acquired by Accenture Interactive almost two years ago, which opens up the challenges of working in such a large organisation, but also the opportunities of working at an enterprise scale with a level of access to top-tier clients that few design agencies get. I have a healthy mix of excitement and anxiety about the whole move and new position, but I’m really looking forward to doing some great work with talented colleagues. Feel the fear and do it anyway, as they say.
Obviously the excitement is tinged with sadness about leaving behind family, friends, cats and our lovely apartment that we renovated only 18 months ago. But we are compensated by reigniting our old friendships in Sydney, great working challenges, lifestyle, yoga, food, weather and beaches. Oh, and a Prime Minister with the rhetorical flair of a ten-year-old bully, but incompetent politicians seem to be par for the course everywhere at the moment.
The other noticeable change from when we used to live in Australia is that smartphones and social media happened. Despite the time- and attention-sucking negatives of these, I feel so much more connected to friends and family back in the UK and around the world than in the pre-2006 Dark Ages that I sometimes forget I haven’t actually seen these people for some time.
HSLU will be looking for a person or people to replace me before September, so if you are interested in a mix of teaching and research in Service Design and, ideally, speak German and English, let me know. There will obviously be an official process, but we want to put the feelers out early.
See you on the sunny side of the planet.
(P.S. That is not Fjord’s actual logo in the photo above. I walked past it the other day at Europa Park. It must be a sign.)
I don’t think iOS or OS X needed to eschew skeuomorphic textures, but Apple Watch did.
Gruber was referring to Craig Hockenberry’s piece about the Apple Watch’s OLED display. In particular Hockenberry’s argument that the move to flatness was strategic:
I’ve always felt that the flattening of Apple’s user interface that began in iOS 7 was as much a strategic move as an aesthetic one. Our first reaction was to realize that an unadorned interface makes it easier to focus on content.
But with this new display technology, it’s clear that interfaces with fewer pixels have another advantage. A richly detailed button from iOS 6 would need more of that precious juice strapped to our wrists. Never underestimate the long-term benefits of simplification.
My response was that several of the Apple Watch faces are skeuomorphic, especially the Mickey Mouse one, to which Gruber replied “How so? I don’t see any 3D shadows or textures.”
You can read the back and forth that followed at your leisure, but the summary of the arguments is that I believe the dial faces are still screens pretending to be analogue/physical hands and dials (or Mickey Mouse watches) and thus skeuomorphic. Gruber doesn’t believe them to be inherently skeuomorphic.
Clock hands and dials exist because of the clock-making history of cogs, pendulums, springs and dials, the latter of which almost certainly took their form from sun dials. Digital versions of them are as skeuomorphic as fake digital knobs on screen-based software synthesisers.
Gruber argued that dials are not inherently skeuomorphic since
Analog clock design is useful on screen as any chart or graph. See the definition of ‘analog’
My point was not about whether dial faces are useful. They clearly are, since many people are used to reading the time from dial faces and that’s how most of us learn about time as kids.
Dials are useful on digital displays because analogue—in the sense of continuous measurement instead of stepped, digital units—offers useful visual cues. Phrases like “a quarter of an hour” or “half-past nine” (or even the German “halb zehn”, which means “half of ten,” a.k.a. 9:30) are visual references to quantities in a circle. But it is exactly those references to previous technologies that make dial faces on a screen skeuomorphic, in my view.
Most people don’t use a watch’s analogue nature that much, unless you’re timing something in seconds with a watch that has a sweep hand. In fact, analogue watch faces are not really continuous measuring devices in the strict definition of “analogue”, since the hands move in tiny steps as the ratchets click across the teeth of cogs. Also, you don’t usually stare at your watch for long periods of time, but take glances at it, as Apple makes a point of telling us:
Since wristwatches were invented in the 19th century, people have been glancing at them to check the time. With Apple Watch, this simple, reflexive act allows you to learn so much more. We optimized your favorite apps for the wrist by developing Glances — scannable summaries of the information you seek out most frequently.
So what is a skeuomorph? The dictionary offers two definitions:
1. An ornament or ornamental design on an artefact resulting from the nature of the material used or the method of working it.
2. An object or feature copying the design of a similar artefact in another material.
Wikipedia’s entry generally sides with the first definition, but the expanded example includes the second. Gruber’s original comment specifically says “textures,” which I have to admit I missed in my response. But the debate led me to think about many of the interesting ideas about interactivity contained within this term.
Skeuomorphism and metaphor are closely related and metaphor is an intrinsic part of interaction design. Arguably, skeuomorphs are just a visual subset of metaphor—plastic that looks like wood, screen-based calendars that look like paper and stitched leather—but sometimes the metaphorical relationships are more complex.
The Digital Crown of the Apple Watch interface is skeuomorphic in a broad sense too. Here I’m not arguing that the material metal of the Digital Crown is different from its forebears, but that “the nature of the material” includes what the interface controls. There are few technical reasons for the Digital Crown being the controlling interface. Apple could have used a non-moving touch sensor on the side, for example. It is a carefully thought-through aesthetic and interaction design decision. It makes sense to our perception and understanding—our mental models—of what a watch is. A crown is part of the watchness of a watch.
I would warrant that a tiny part of our brain has a mental model of the Digital Crown mechanically controlling the Apple Watch display, even though we consciously and intellectually know that is not the case. It’s the same reason we bang the side of our monitor when the computer isn’t working.
This is a subtle interface magic trick that interaction designers pull off over and over again. We think we’re pinching and stretching a picture on a touchscreen, for example, but of course we’re wiping our fingers in a certain pattern across a pane of glass and not actually pinching anything.
Interestingly, there are few physical world equivalents of the pinch and spread actions that I can think of. The two obvious examples of this are what we do with our bodies and with dough—both things we learn to work with at the youngest of ages and probably why it feels so intuitive.
Metaphors tend to become ever more nested and complicated, especially in language, as Lakoff and Johnson argue in detail. Indeed, it is difficult to use language without using metaphors. That last sentence is full of them, for example. Metaphors and language are “tools” that can be “used.” In the next sentence, sentences are “vessels” that can be “filled.” (Once you start thinking this way, you’ll start to go mad trying to use language without them).
When interfaces go digital, albeit with some physical input devices, the boundaries start to collapse. In my PhD, I wrote about this conflation of the metaphorical and actual and used the example of files and folders:
This goes some way to explaining the issues of interface metaphors being half ‘real’ and half metaphorical and why Apple’s Exposé was able to break the desktop metaphor without it jarring. Because operating a computer is both physical and virtual the process gets blurred – at some point in the usage of a system that retains its metaphorical conventions fairly rigorously the ‘desktop’, with its ‘files’ and ‘folders’, ceases to be a metaphor for its users. It is as though the willing suspension of disbelief is not just suspended, but dispensed with. The desktop really is the desktop and our files really are our files and not just metaphorical representations – something that anyone who has experienced a hard drive crash and lost all their data will appreciate. (p. 53-54)
I used Apple’s Exposé back then as an example of what I called an “intentional metaphor.” Exposé breaks the desktop metaphor because I can’t actually make all my papers hover in the air while I choose the one I want and then have them snap back. But it does have a real-world equivalent in the form of spreading everything over a large table or on the floor to make sense of it. The extra magic part of Exposé—the “hovering in the air” part—is what I would really like to be able to do and I understand the metaphorical intention of it.
This is the way that I think Apple’s Digital Crown and the Taptic Engine will also make sense to us. They connect into existing ideas of how we use and interact with things and people and extend them. Ex-Apple Human-Interface Inventor Bret Victor wrote a wonderful rant about this. I see all of this as a form of interactive or intentional skeuomorphism and it will be interesting to see how it expands as designers and developers explore this new realm.
REWIND to 1995 – A collective of young Londoners launches Antirom, a CD-ROM of experimental interactive software, at Cameraworks gallery in Bethnal Green. The many brief, playful, funny ‘toys’ on the disc have quite an influence in interaction design circles.
FFWD to 2015 – Generations of computer hardware rush past leaving Antirom unplayable on any current device.
But now Antirom is coming back to the East End so you can have a go (again?). We’re having a party and talking about interaction design, hosted at Protein’s Studio 2 Gallery at EC2A 3EY.
There’s a panel discussion and demos on Friday 27th Feb and a party in the evening. Saturday 28th will see another panel discussion about the history of the interactive interface and a chance to drop in and play with some of these early interactives on the original hardware.
I’m flying to London for a couple of days just to be there, so I would love to see you there.
Some of the events need a (free) RSVP so we can gauge numbers. You can find all the details on the antirom website.
This post is really a note-to-self for when I next have to remember how to deal with missing photo and QuickTime movie metadata. Nevertheless, since it took me a little while to work out, maybe someone else searching for the same issue will benefit from it. If you’re short on time the ExifTool command that helped me is right at the end – skip down there and copy and paste.
For anyone else who can’t sleep, it might be useful, but I won’t be offended if you skip reading this.
The photo management problem
Like many people, my photo libraries have grown to many gigabytes over the years, encrusted with cruft from various photo management apps. Multiple versions of iPhoto and Lightroom, not to mention a few corrupt libraries and recoveries, have left their scars. Once I had an iPhone, everything got worse. Do I use iCloud, Dropbox or iPhoto? I used to religiously use Lightroom, but it was always a drag importing photos from my iPhone into Lightroom and sorting them. Because it was a drag, I didn’t do it all that often and I lost track of which photos were where.
Of all the things that would be hard to replace if all my devices caught fire and all my backups failed, the photos are the ones I care about most. Almost everything else is either replaceable, recoverable or possible to take a Zen-like attitude to letting go of. But I would be very sad to lose the photos of my daughter when she was tiny.
I tried various automatic services, such as Dropbox’s automated Camera Upload feature. I also tried Everpix before they shut down and then moved to Loom, which was then acquired by Dropbox. Back then, the whole point of using another service was that I didn’t have enough space on Dropbox for over 72GB of images and videos. Since Dropbox upgraded their plans, I now have the opposite problem: my 1.1TB of cloud space is larger than my laptop’s hard drive.
Having tried all these services on and off, I could no longer remember which photos I had backed up from my iPhone and where. My iPhone was getting too full and I was nervous about deleting older images in case they were the only copies.
I decided to follow most of Federico Viticci’s photo workflow and go for near Camera Roll Zero. I use Camera Sync on the iPhone to upload all my photos to Dropbox and use Hazel to rename and sort them based on their metadata. Once uploaded I keep a handful of photos of my family on my iPhone, but otherwise delete all the others.
Like Federico’s, my Hazel rules delete screenshots, since I generally don’t want to keep them, but I also have Camera Sync set up not to upload screenshots anyway.
Then I have a rule to sort the photos into date-based folders, by year and then by date. The files are also renamed with date and time, such as “2015-01-02 3.08.29.jpg”.
Videos are renamed and sorted into a video sub-folder of each year.
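If you don’t use Hazel, roughly the same sort-and-rename scheme can be reproduced with a single command in Phil Harvey’s ExifTool (which I come back to below). This is only a sketch: the Dropbox path is illustrative, and it files images by their Date/Time Original EXIF tag rather than the file creation date my Hazel rules use:

exiftool -r '-FileName<DateTimeOriginal' -d '%Y/%Y-%m-%d/%Y-%m-%d %H.%M.%S%%-c.%%e' '/Users/you/Dropbox/Camera Uploads'

The %%-c in the date format appends a copy number when two files would otherwise end up with identical names.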
Deadly exciting, I know, but I’ve wasted so much time searching for photos and so much drive space on duplicates that the effort of getting this working has more than paid for itself. I usually have a pretty good memory of roughly when important events happened in life and there are plenty of ways to view folders of images as thumbnails (see Federico’s article for a round-up of these). Duplicates are now easily spotted, because they are in the same folder with the same filename (a number is appended if filenames are identical).
The stripped EXIF data problem
This approach has one major drawback, enforced upon it by other apps. It relies on accurate file creation dates and EXIF data – the metadata stored in all image files. Sometimes this data is missing or incorrect.
My Hazel rules sort files based on their creation dates. One of the previous photo storage cloud services – either Loom or Everpix (the culprit, I think) – re-stamped the file creation dates. This meant I had a bunch of different images all with the same creation dates and, thus, all the same filename if I ran my Hazel rules on them.
I thought I could probably recover the proper dates by using the EXIF data. There are apps to view and extract this data and, in fact, Hazel can examine some EXIF data in its “other” settings.
But I needed something more powerful and Phil Harvey’s excellent ExifTool came to the rescue. With it, recreating the correct file-creation date based on the EXIF data is trivial.
exiftool -a /Users/andypolaine/Dropbox/Camera\ Uploads/2015/2015-01-27/2015-01-27\ 9.38.57.jpg
gives you this enormous output:
ExifTool Version Number : 9.70
File Name : 2015-01-27 9.38.57.jpg
Directory : /Users/andypolaine/Dropbox/Camera Uploads/2015/2015-01-27
File Size : 1167 kB
File Modification Date/Time : 2015:01:27 09:38:57+01:00
File Access Date/Time : 2015:01:30 11:41:54+01:00
File Inode Change Date/Time : 2015:01:27 15:22:41+01:00
File Permissions : rw-r--r--
File Type : JPEG
MIME Type : image/jpeg
Exif Byte Order : Big-endian (Motorola, MM)
Make : Apple
Camera Model Name : iPhone 6
Orientation : Horizontal (normal)
X Resolution : 72
Y Resolution : 72
Resolution Unit : inches
Software : 8.1.2
Modify Date : 2015:01:27 09:38:57
Y Cb Cr Positioning : Centered
Exposure Time : 1/618
F Number : 2.2
Exposure Program : Program AE
ISO : 32
Exif Version : 0221
Date/Time Original : 2015:01:27 09:38:57
Create Date : 2015:01:27 09:38:57
Components Configuration : Y, Cb, Cr, -
Shutter Speed Value : 1/618
Aperture Value : 2.2
Brightness Value : 9.203691496
Exposure Compensation : 0
Metering Mode : Multi-segment
Flash : Auto, Did not fire
Focal Length : 4.2 mm
Subject Area : 1631 1223 1795 1077
Run Time Flags : Valid
Run Time Value : 169926077986458
Run Time Epoch : 0
Run Time Scale : 1000000000
Sub Sec Time Original : 210
Sub Sec Time Digitized : 210
Flashpix Version : 0100
Color Space : sRGB
Exif Image Width : 3264
Exif Image Height : 2448
Sensing Method : One-chip color area
Scene Type : Directly photographed
Exposure Mode : Auto
White Balance : Auto
Focal Length In 35mm Format : 29 mm
Scene Capture Type : Standard
Lens Info : 4.15mm f/2.2
Lens Make : Apple
Lens Model : iPhone 6 back camera 4.15mm f/2.2
GPS Latitude Ref : North
GPS Latitude : 47 deg 17' 16.46"
GPS Longitude Ref : East
GPS Longitude : 7 deg 56' 36.27"
GPS Altitude Ref : Above Sea Level
GPS Altitude : 432.9546539 m
GPS Time Stamp : 08:38:46.72
GPS Speed Ref : km/h
GPS Speed : 0
GPS Img Direction Ref : True North
GPS Img Direction : 81.40054496
GPS Dest Bearing Ref : True North
GPS Dest Bearing : 261.400545
GPS Date Stamp : 2015:01:27
Compression : JPEG (old-style)
X Resolution : 72
Y Resolution : 72
Resolution Unit : inches
Thumbnail Offset : 1980
Thumbnail Length : 5024
Image Width : 3264
Image Height : 2448
Encoding Process : Baseline DCT, Huffman coding
Bits Per Sample : 8
Color Components : 3
Y Cb Cr Sub Sampling : YCbCr4:2:0 (2 2)
Aperture : 2.2
GPS Altitude : 432.9 m Above Sea Level
GPS Date/Time : 2015:01:27 08:38:46.72Z
GPS Latitude : 47 deg 17' 16.46" N
GPS Longitude : 7 deg 56' 36.27" E
GPS Position : 47 deg 17' 16.46" N, 7 deg 56' 36.27" E
Image Size : 3264x2448
Run Time Since Power Up : 1 days 23:12:06
Scale Factor To 35 mm Equivalent: 7.0
Shutter Speed : 1/618
Create Date : 2015:01:27 09:38:57.210
Date/Time Original : 2015:01:27 09:38:57.210
Thumbnail Image : (Binary data 5024 bytes, use -b option to extract)
Circle Of Confusion : 0.004 mm
Field Of View : 63.7 deg
Focal Length : 4.2 mm (35 mm equivalent: 29.0 mm)
Hyperfocal Distance : 1.82 m
Light Value : 13.2
If you pop that GPS data into Google Maps, you’ll see I took this photo when I was on the train in Switzerland.
(Actually the GPS data seems slightly off, because I was already about 200m past that position when I took the photo. I can only assume a laggy GPS location, since the signal is pretty bad and the train is moving fast. You can also see why snooping governments’ claims of “we only want to look at the metadata” is such a load of nonsense.)
You will also notice there are several tags that have a date stamp. There is the Date/Time Original and the GPS Date/Time (they’re different because GPS time is recorded in UTC, while the original is local time) plus the File Modification Date/Time stamps. (My favourite EXIF tag is “Circle Of Confusion” – sounds like my life.)
I ran ExifTool on my problem images and restamped all the File Creation Dates using the EXIF data.
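For reference, the restamping step is a one-liner along these lines – a sketch only, with a placeholder path, writing the EXIF Date/Time Original over the file modification date (the same FileModifyDate tag used in the video fix further down):

exiftool -r '-FileModifyDate<DateTimeOriginal' /path/to/problem/photos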
WhatsApp with your EXIF data?
This was all good until I found a whole load of photos all time-stamped with 1999-11-30 12.00.00 and heading into a November 1999 folder. These turned out to be mostly images saved from people’s tweets or from WhatsApp. I’m not sure about Twitter, but WhatsApp deliberately strips the EXIF data from images presumably as a privacy measure, unless it’s just a bug/feature in the app or iOS. iOS 8’s new photo editing extension that lets third-party apps edit photos directly also strips the EXIF data.
An image sent via WhatsApp yields only this:
File Name : 1999-11-30 12.00.00-110.jpg
Directory : /Users/andypolaine/Dropbox/Camera Uploads/1999/1999-11-30
File Size : 101 kB
File Modification Date/Time : 1999:11:30 00:00:00+01:00
File Access Date/Time : 2015:01:30 12:01:04+01:00
File Inode Change Date/Time : 2015:01:28 09:18:22+01:00
File Permissions : rw-r--r--
File Type : JPEG
MIME Type : image/jpeg
Exif Byte Order : Big-endian (Motorola, MM)
Orientation : Horizontal (normal)
X Resolution : 72
Y Resolution : 72
Resolution Unit : inches
Y Cb Cr Positioning : Centered
Exif Version : 0221
Components Configuration : Y, Cb, Cr, -
Flashpix Version : 0100
Color Space : sRGB
Exif Image Width : 960
Exif Image Height : 637
Scene Capture Type : Standard
Compression : JPEG (old-style)
X Resolution : 72
Y Resolution : 72
Resolution Unit : inches
Thumbnail Offset : 298
Thumbnail Length : 8676
Current IPTC Digest : d41d8cd98f00b204e9800998ecf8427e
IPTC Digest : d41d8cd98f00b204e9800998ecf8427e
Image Width : 960
Image Height : 637
Encoding Process : Baseline DCT, Huffman coding
Bits Per Sample : 8
Color Components : 3
Y Cb Cr Sub Sampling : YCbCr4:2:0 (2 2)
Image Size : 960x637
Thumbnail Image : (Binary data 8676 bytes, use -b option to extract)
(This is post-Hazel renaming, so the filename and File Modification and Inode dates are based on when I uploaded the files to Dropbox, not the real ones of when the image was originally created.)
The lack of this EXIF data is an enormous pain, since a common practice when getting together with friends is to set up a WhatsApp group and share all the photos with each other that way. Of course my friends look at me blankly if I ask them to send the photos another way “because I need the EXIF data.” So, unfortunately, all those images end up in 1999 and I have to manually sort them out of there again.
The only salvation would be to upload them to Dropbox on the day of the event and use that date, but that defeats the point of the whole exercise. If anyone knows a way around this, please let me know.
The video metadata problem
The other problem I had was with a bunch of QuickTime movies I had shot with my iPhone. I don’t know which app (Everpix?) screwed these file creation dates up. This time the Creation Date was 1st January 1970 (the Unix epoch) on all of them.
I often used QuickTime Player 7 for transcribing because it has better controls (jog shuttle, playback speed, etc.) that Apple removed from later versions. I have an old QuickTime Pro licence too, which allows you to see individual track data and manipulate them.
I noticed there was a Track Creation Date annotation in there, which hadn’t been stripped out and had the correct capture date. Fortunately, ExifTool can handle videos too and extracting the EXIF data on the video gave me these tags (among many others):
Track Create Date : 2011:02:27 12:11:11
Track Modify Date : 2011:02:27 12:11:24
I could use one of these to set the rest of the metadata of the file. The following command extracts the Track Creation Date, writes it as the File Modification Date and renames the file to match (appending a lowercase file extension). It is run against a target file or folder – the trailing path below is a placeholder:
exiftool '-FileModifyDate<TrackCreateDate' '-FileName<TrackCreateDate' -d %Y-%m-%d_%H.%M.%S.%%e /path/to/videos
This left me with a movie file that Hazel could correctly sort. I think “2011-02-27 1.11.11.mov” might possibly have the wrong time (or miscalculate the GMT offset), but the date is certainly correct. It’s good enough for me.
You can run ExifTool commands on individual files (essential when testing), but also on directories of files, and it all happens pretty quickly.
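For example, when fixing the videos, a cautious approach is to test the command on a single file and then let it loose on the whole folder (both paths here are placeholders):

exiftool '-FileModifyDate<TrackCreateDate' /path/to/test-clip.mov
exiftool -r '-FileModifyDate<TrackCreateDate' /path/to/video-folder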
Now if only I could do the same thing to my brain.
The most common issue service design students face is project paralysis in the face of infinite possibilities and the synthesis of a mass of research material. Services are often complex and the interconnectedness of problems can soon appear too difficult to tackle. Taking the leap to tentatively develop an idea, and letting go of the need for it to be the best idea possible, is often a real challenge, especially while the concept remains abstract and complex.
I wrote a post over on Medium titled Getting From Here to There about moving from a hunch to a research direction to a concept. It looks at what the service design equivalent is of an architect’s rough sketches of a large project as opposed to the detail of a single touchpoint. It started as a mail to my service design students, but I thought others might find it useful in teaching, learning or practice too.
I’d love to hear your feedback in the margin comments on Medium.