
AI getting sneaky

Recently, I wrote about the games that AIs play, gaming their training environments as logical responses to the reward and assessment stimuli they had been set. Now another one has been caught hiding data in order to use it later. Devin Coldewey explains in a TechCrunch article that a CycleGAN was being trained to turn aerial photographs into street maps and then generate synthetic aerial views from those street maps, the aim being to teach the AI to generate aerial views from any street map. But the agent was not graded on the conversion itself, only on how close the generated aerial image was to the original.

So it didn’t learn how to make one from the other. It learned how to subtly encode the features of one into the noise patterns of the other. The details of the aerial map are secretly written into the actual visual data of the street map: thousands of tiny changes in colour that the human eye wouldn’t notice, but that the computer can easily detect.

It was then able to reconstruct a convincing aerial photo from that hidden data alone. Essentially, it took a crib sheet into the exam. No actual learning required.
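To make the trick concrete, here is a minimal sketch in Python of the same principle: least-significant-bit steganography, where one image rides along invisibly in the low-order bits of another. To be clear, this is not the CycleGAN’s actual mechanism, which was learned rather than hand-coded and used far subtler perturbations; the function names and toy random images below are purely illustrative.

```python
import numpy as np

def hide(cover: np.ndarray, secret: np.ndarray, bits: int = 2) -> np.ndarray:
    """Stash the top `bits` bits of `secret` in the low bits of `cover`.

    Both are uint8 images of the same shape. Each channel of `cover`
    changes by at most 2**bits - 1, far below what the eye can see.
    """
    mask = np.uint8((0xFF << bits) & 0xFF)   # keeps cover's high bits
    payload = secret >> (8 - bits)           # secret's most significant bits
    return (cover & mask) | payload

def recover(stego: np.ndarray, bits: int = 2) -> np.ndarray:
    """Read the hidden payload back out of the low bits."""
    payload = stego & np.uint8((1 << bits) - 1)
    return payload << (8 - bits)             # rescale to full brightness

# A toy "street map" secretly carrying a toy "aerial photo" in its noise floor.
rng = np.random.default_rng(0)
street = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
aerial = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)

stego = hide(street, aerial)
# The stego image is visually indistinguishable from the original street map...
assert int(np.abs(stego.astype(int) - street.astype(int)).max()) <= 3
# ...yet a coarse copy of the aerial image can be read straight back out.
reconstructed = recover(stego)
```

The network’s learned encoding was subtler and distributed across the whole image, but the effect is the same: the answer travels along with the question.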

Learning from learning

In that AI gaming article I talked about this effect often creating “structural stupidity” in corporate cultures, but it’s actually well known in the teaching and learning literature as what John Biggs calls assessment “backwash”: students don’t do the work that the teacher says is the aim of the course.

Students do the work they know they will be assessed on. You can physically witness it when handing out a course outline: most students immediately flip to the back to see what the assessments are. From a human-centred perspective, this is perfectly natural and obvious behaviour, because it’s how we prioritise our time.

This is why Biggs developed constructive alignment, an approach that ties curriculum aims directly to assessments. It sounds obvious that you should be assessed on the aims of the course, but the two often drift out of alignment because of how the assessment is constructed. A classic example is the multiple-choice exam, which assesses rote recall more than actual understanding.

I came across this years ago while learning about teaching in higher education (Biggs’s original book was published in 1999), and it has always struck me as crazy that business management, with its obsession with targets and KPIs, rarely understands a simple principle:

What you measure drives human behaviour focused on the metric itself, not on the outcome the metric is trying to measure.

Sometimes they are aligned, of course, but often not, mostly because organisations don’t know how to define value, as Jeff Gothelf recently wrote. I most often witness backwash with CEOs and CXOs obsessing about raising NPS scores and App Store ratings without wanting to know about the underlying experience driving those ratings. After all, why bother reading the comments when you’ve got a number to focus your attention upon?

Perhaps we shouldn’t be surprised that this is the case. Departments of Education have themselves become obsessive about auditing, despite decades of evidence that it’s misguided.

Science’s blind spot

A similar tension frequently arises between design and business (both terribly nebulous terms), often framed as soft versus hard: fuzzy opinion versus the hard truth of numbers. Experience versus business. Art versus science.

Recently a client wanted evidence for the ROI of customer experience. But customer experience is not an add-on; it’s all there is. Without customers there is no business. This creates a blind spot for companies, since its very ubiquity makes it invisible. Asking for the ROI of CX is like asking for the ROI of the company’s existence.

The tension between science and experience was elegantly argued by two physicists and a philosopher in a recent Aeon article called The Blind Spot. (Side note: three other authors, Steve Diller, Nathan Shedroff and Sean Sauber, wrote a book also called The Blind Spot covering much the same issue in business.)

I can’t do the Aeon article justice without quoting the entire thing, but the essence of it is that we falsely believe science gives us an objective, God’s eye view of the universe, when, in fact, we can never escape our experience of it. The authors argue that this means “objectivism and physicalism are philosophical ideas, not scientific ones.” To get a flavour of the argument, here is their analysis of the scientific method quoted at length:

In general terms, here’s how the scientific method works. First, we set aside aspects of human experience on which we can’t always agree, such as how things look or taste or feel. Second, using mathematics and logic, we construct abstract, formal models that we treat as stable objects of public consensus. Third, we intervene in the course of events by isolating and controlling things that we can perceive and manipulate. Fourth, we use these abstract models and concrete interventions to calculate future events. Fifth, we check these predicted events against our perceptions. An essential ingredient of this whole process is technology: machines – our equipment – that standardise these procedures, amplify our powers of perception, and allow us to control phenomena to our own ends.

The Blind Spot arises when we start to believe that this method gives us access to unvarnished reality. But experience is present at every step. Scientific models must be pulled out from observations, often mediated by our complex scientific equipment. They are idealisations, not actual things in the world. Galileo’s model of a frictionless plane, for example; the Bohr model of the atom with a small, dense nucleus with electrons circling around it in quantised orbits like planets around a sun; evolutionary models of isolated populations – all of these exist in the scientist’s mind, not in nature. They are abstract mental representations, not mind-independent entities. Their power comes from the fact that they’re useful for helping to make testable predictions. But these, too, never take us outside experience, for they require specific kinds of perceptions performed by highly trained observers.

For these reasons, scientific ‘objectivity’ can’t stand outside experience; in this context, ‘objective’ simply means something that’s true to the observations agreed upon by a community of investigators using certain tools. Science is essentially a highly refined form of human experience, based on our capacities to observe, act and communicate.

All this should, by now, sound familiar. AIs gaming their systems blatantly highlights how much we overlook our subjective, tacit understanding of those systems. Metrics and assessments ignore the human behavioural experiences underpinning them. Behaviour in context is crucial, which is why Fjord work a lot with mindsets. “We can’t step outside the box in order to look within, because the box is all there is,” write Frank, Gleiser and Thompson. So we had better get to know the box, which is what everyone from the phenomenologists to C.G. Jung spent their lifetimes urging us to spend our lifetimes doing.


This post was originally written for Doctor’s Note, my newsletter containing a mix of longer-form essays and short musings on design, innovation, culture, technology and society. You can sign up for it here. Its first public posting was on Medium.

Photo by Christian Fregnan on Unsplash
