The reconstructive nature of human memory (and what this means for research documentation)

Here’s a classic piece of psychology research that should get you thinking about the strangely malleable nature of human memory: Loftus & Palmer (1974) on the reconstructive nature of human memory (PDF).

The research paper is pretty dry, so I’ll summarize the best bits of it for you here: Loftus and Palmer recruited 150 students from their campus and showed each of them a film of a traffic accident. Immediately after the video they gave them a questionnaire, containing a load of dummy questions and one question of interest.

[Image: a car crash]

The participants were separated into three conditions (Experiment 2 – emphasis added):

  • 50 were asked “About how fast were the cars going when they hit each other?”
  • Another 50 were asked “About how fast were the cars going when they smashed into each other?”
  • The remaining 50 were in the control condition and weren’t asked this question.

One week later, all of them were asked “Did you see any broken glass? Yes or no?”

Significantly more people in the “smashed” condition answered yes – that they remembered seeing broken glass in the video (p < .05). The thing is, there wasn’t any broken glass – none at all. The only difference between this group and the others was that they were primed with the word “smashed” in the questionnaire, one week prior to answering the question.

So, changing a single word in the questionnaire actually re-encoded their memories with new details. Think about this for a minute – it’s quite scary how a single word can have such a big effect. We’re not just talking about fading memory here – we’re talking about clearly remembering something that didn’t happen, as if it were a fact. Luckily it only happens in certain circumstances, but it’s been a hot topic in legal psychology for many years – police line-ups even have special guidelines nowadays, in an attempt to minimize the effect.

OK, now let’s think about how we go about doing qualitative user research. Most people these days use some variant of guerilla usability testing, where the emphasis is on being rapid and informal. For example, after a day of user testing you probably do some quick analysis and iterate your prototype before entering another day of testing. After one or two iterations, you usually feel you’ve tackled the big issues, so you implement the prototype and deploy it on your live site.

This is a great method for many reasons. It’s fast, engaging and collaborative – but it produces hardly any documentation. Say a few months have passed and you want to revisit an area that’s already received some design research attention. How can you refresh your memory of what drove the current design? Do you go back to the video? 5–10 hours of video footage is a daunting prospect and feels like a huge waste of time. Most of the time you just rely on your team to remember it. Maybe they’ll have some sparse notes or a PowerPoint deck that contains interpretations of the findings – but not the actual unarguable, factual behavioural observations. One way or another, it’s likely that they have re-remembered the research in question so many times that their memories and design ideas have all been mushed together.

If you’re an experienced UX researcher, you’ll be familiar with the scenario of “not remembering things the same” as your colleagues, or sometimes getting stuck in a pogo-stick redesign, where you move back and forth between alternatives. In lieu of a decent record, everything becomes very blurred and you get what Avinash Kaushik calls the HiPPO effect (Highest Paid Person’s Opinion), where the phrase “user testing showed…” becomes arguable rather than factual.

What’s funny is that before guerilla testing became fashionable, back in the days when it was called HCI, UX research was an incredibly heavyweight and expensive affair. User behaviour was measured with scientific accuracy. Studies took a long time to carry out, and they were usually documented in exacting detail, to allow anyone to reproduce the study in the future if they cared to. This was the legacy of Cognitive Psychology research, and it took years for the UX research industry to shake off the burden and become cost-effective.

The trouble is, at least for some of us, the pendulum has swung back too far. Documentation may be boring, but if you’re going to evolve a product over the long term, you need to have a clear view of where you’ve been. Interestingly, this problem doesn’t really exist with quantitative research: analytics, A/B testing and remote testing all tend to provide accurately recorded research data that’s easy to refer back to. The problem mainly seems to lie in qualitative research, usability testing in particular.

The thing is, we all do our research differently. How much documentation do you produce? What kind of note-taking and analysis techniques do you use? Are you stuck in an organisation that still does old-school, heavyweight documentation? It’ll be really interesting to hear your methods in the comments, below.