Eye tracking: some thoughts from an ex-eye-tracking researcher

There’s been a fair amount of discussion over the past week about eye tracking. As someone who used to do a bit of qualitative eye tracking research at Amberlight (a great London-based UCD consultancy), I have a few factoids and opinions that I’d like to share.


1. Quant vs Qual eye tracking research – the key differences

Qualitative eye tracking research (“Qual”) involves observation and interpretation. First, the user is given a task while their eyes are tracked. During this time they are not allowed to speak (to discourage them from looking away from the screen). Then, after the task, the user is interviewed about their experience, and depending on your method, you can play the gaze trail video back to them and ask them to describe what was going through their mind. Analysis of Qual research involves touchy-feely “human” skills rather than statistical analysis. With Qual studies, you normally test roughly 8-20 users.

Qualitative research is like a form of detective work. You look at the pieces of evidence you have available, and you try to fill in the gaps based on your past experience and expertise. It’s a bit like looking around a crime scene and saying “Well, the window is broken, the room has been upturned, it looks like a robbery.” But – the big but – you never actually get concrete, conclusive proof that this is the case. This is the nature of Qual research, but don’t let it put you off – Qual is the cornerstone of most design research.

Quantitative research (“Quant”), on the other hand, involves recruiting a large number of volunteers (e.g. 75-200), so you can collect enough data for statistical testing. This costs substantially more, and it takes a lot more time. Also, your research objectives are screwed down much more tightly – you control variables, you have hypotheses, and everything feels a lot more like “laboratory science”. The big differentiator is that although you get statistical evidence, you only find out “what people did”, not “why they did it”. Qual, by contrast, gives you a lot of “why” findings but no statistical evidence. With Qual, you have to put your trust in the expertise, experience and past portfolio of your researchers. When doing a Quant study, it’s fairly common to have a Qual element bolted onto it (e.g. interviewing the participants as well as tracking their eyes).
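To give a flavour of what that Quant analysis boils down to, here’s a minimal sketch – say, testing whether design B draws eyes to the call-to-action faster than design A. The numbers, variable names and the choice of a t-test are all hypothetical, purely for illustration:

```python
# A sketch of a quant-style comparison: did design B get users to fixate on
# the call-to-action faster than design A? (Hypothetical data, trimmed for
# brevity - a real quant study would have far more participants per group.)
from scipy import stats

# Time to first fixation on the CTA, in milliseconds, one value per participant
design_a = [1450, 1720, 980, 2100, 1630, 1875, 1210, 1990]
design_b = [940, 1105, 870, 1320, 1010, 1150, 990, 1230]

# An independent-samples t-test tells you whether the difference is likely real...
t_stat, p_value = stats.ttest_ind(design_a, design_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# ...but nothing in this output tells you *why* users behaved differently.
```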

In summary: the point I am making here is simply that there are two types of eye-tracking research, qualitative and quantitative. It’s important to understand the difference between the two, which leads me to my next point.


2. The “Qual-Quant confusion” problem

One problem with eye tracking is that the people who buy the service often get confused about what type they are getting. The high-tech kit, the gaze trails and the heatmaps all look impressively scientific. It feels like the most rock-solid evidence you’re ever going to get. In a qualitative eye tracking study, this just isn’t true. I’m not saying it’s fictional – I’m saying it’s just a picture of what a dozen or so people fixated on. In other words, it can be useful, but it’s just another piece of supporting evidence, like the verbal statements of your users, or your observations of their behaviour (e.g. task failure rate).

Going back to our “qualitative research detective work” metaphor, both eye tracking heatmaps and hand-written notes are like ‘clues’. They require human interpretation. The researcher has to sit there, scratch their head and think about what it all might mean.

In summary: in a qualitative study, eye-tracking data is no more valid or conclusive than your handwritten interview notes. Just because heatmaps and gaze trails are shiny, impressive and expensive doesn’t mean they hold any special “weight”.

3. Eye tracking tells you what users looked at, but not why

Imagine you have run an eye tracking study on your site, and you now have a heatmap showing what 12 users looked at. You notice that there is a lot of heat on your proposition statement. Great, you think to yourself, they read the proposition! This means they understand what we are offering!

Actually, you’ve just made an assumption – that the more someone looks at something, the more they understand it. In fact, the opposite might be true. The heat may be a symptom of confusion – users might be re-reading your statement repeatedly because they found it hard to understand.
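To see why the heat itself is so ambiguous, here’s a rough sketch of how a heatmap cell gets built up (hypothetical fixation records – real trackers log richer data, but the principle is the same). It’s essentially accumulated fixation duration, so one long, comfortable read and five frustrated re-reads can produce exactly the same colour:

```python
# Sketch: a heatmap cell is just summed fixation duration over a screen region.
# (Hypothetical fixation records: each one is (x, y, duration_ms).)

calm_reader     = [(210, 140, 900)]                                   # one long read
confused_reader = [(205, 138, 200), (212, 141, 180), (208, 139, 190),
                   (211, 143, 160), (206, 140, 170)]                   # five re-reads

def heat(fixations):
    """Total fixation duration falling on the region, in milliseconds."""
    return sum(duration for _x, _y, duration in fixations)

print(heat(calm_reader))      # 900
print(heat(confused_reader))  # 900 - same "heat", very different experience
```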

Just because someone looks at something doesn’t mean they understand it or like it. This is why eye tracking is almost always paired with interviewing where you try to find out what users were thinking. In short, beware of heatmaps – they can be easy to misinterpret.

4. Eye tracking often doesn’t tell you anything new

Another problem with Qual eye tracking is that often, it doesn’t tell you anything that you wouldn’t have found out through other means. Take a look at the eye tracking evidence provided by Luke Wroblewski in this study here. It’s a great article for educational purposes, but to an experienced eye, it’s quite obvious which would be the winning and losing design patterns. Theory and principles like “affordances”, “visual hierarchy”, “call to action” and even Nielsen’s heuristics would all point you towards the right approach. And if you use “standard” user testing methods (no eye tracker, just a 1-on-1 interview where the user is given tasks and thinks aloud), you can test your designs without the cost of eye tracking – simply record task time and failure rate, and gather user feedback.
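For what it’s worth, the numbers from that kind of standard testing are trivial to pull together – here’s a throwaway sketch with made-up session results and hypothetical field names:

```python
# Sketch: summarising one round of standard 1-on-1 user testing.
# (Made-up session results; the field names are hypothetical.)
from statistics import median

sessions = [
    {"user": "P1", "completed": True,  "task_time_s": 74},
    {"user": "P2", "completed": False, "task_time_s": 180},
    {"user": "P3", "completed": True,  "task_time_s": 62},
    {"user": "P4", "completed": True,  "task_time_s": 95},
    {"user": "P5", "completed": False, "task_time_s": 180},
]

failure_rate = sum(not s["completed"] for s in sessions) / len(sessions)
median_time = median(s["task_time_s"] for s in sessions)

print(f"Failure rate: {failure_rate:.0%}")    # 40%
print(f"Median task time: {median_time}s")    # 95s
```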

I’m not saying “don’t do qual eye tracking”: it can be a great educational tool, it can uncover things you may not have otherwise noticed, and it can be a great weapon when you’re trying to fight the user experience corner with a difficult stakeholder. But you’re unlikely to suffer if you skip it and opt instead for cheaper “standard 1-on-1 user testing” (by this I mean giving users tasks, asking them to think aloud and interviewing them).

In summary: if you don’t have a huge budget, and your research objectives don’t explicitly require eye-tracking, then you should seriously consider other options. Eye tracking can be expensive beyond its usefulness. Instead of doing a single, costly eye tracking study, it can be more rewarding to do multiple rounds of “standard” user testing (iterative design & research), and employ a collaborative process using a war room and design workshops. Involve your design team: research should be an intimate part of the design process, rather than something carried out by strangers and delivered via PowerPoint bullets.
