Here are the slides from my recent presentation at UXLX’10 in Lisbon. This is a substantially revised version of the talk I gave at Barcamp Brighton in September ’09.
Many, many thanks to Aaron Young & Rebecca Gill of Bunnyfoot for carrying out all the eye tracking research featured in this presentation.
Thanks for that. I think you can get a greater ROI by just doing more frequent, cheap usability tests via “think aloud”, which I think is what Nielsen and Krug suggest. For most small companies, ET would be akin to over-optimization.
Pingback: eyetracking studies - DesignersTalk
Thanks for sharing your presentation!
This is a great introduction to how not to use eye tracking – although it seems a little biased, as you don’t cover the correct use of eye tracking, which is with a retrospective protocol. In the interests of balance, you should also add some slides about the major flaws of think aloud – i.e. people don’t know why they do things, so what are they telling us?
Guy, why don’t you write a follow-up post on your own blog? I’d love to read it!
Pingback: “What You Need to Know About Eye Tracking” (new!) - 90 Percent of Everything | Martin Joosse
Great slides. A cheap way to do eye tracking on the web is to use mouse trackers (like http://www.picnet.com.au/met). I have actually written a comparison of eye tracking and mouse tracking technologies here, if interested.
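For anyone wondering what the data-capture side of mouse tracking involves, here is a rough browser-side sketch in TypeScript. It is purely illustrative and not the PicNet MET tool’s actual API: it throttles mousemove events and records timestamped page coordinates, which is roughly the raw material a mouse-movement heatmap is built from.

```
// Hypothetical sketch of browser-side mouse-movement logging (not the PicNet
// MET API). Cursor positions are sampled and timestamped; rendering a heatmap
// from the samples is left out.
interface MouseSample {
  x: number; // cursor position relative to the document
  y: number;
  t: number; // milliseconds since navigation start
}

const samples: MouseSample[] = [];
let lastSampleTime = 0;

document.addEventListener("mousemove", (e: MouseEvent) => {
  const now = performance.now();
  // Throttle to roughly 20 samples per second to keep the log small.
  if (now - lastSampleTime < 50) return;
  lastSampleTime = now;
  samples.push({ x: e.pageX, y: e.pageY, t: now });
});

// When the visitor leaves, the samples could be sent to a collection endpoint;
// here they are simply logged to the console.
window.addEventListener("beforeunload", () => {
  console.log(JSON.stringify(samples));
});
```

Bear in mind that cursor position only loosely correlates with gaze, which is exactly the sort of trade-off a comparison of the two technologies needs to cover.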
Thanks for the slides.
This is a nice presentation, Harry. I’m putting it up at The UX Bookmark.
Guy, I want to let you know that both CTA and RTA (what you refer to as the retrospective protocol) are variations of think aloud. CTA, or Concurrent Think Aloud, is what you are referring to as think aloud.
Both have a number of pros and cons, which I went over with the Tobii and SMI guys, along with a few other folks, at the India HCI / IDID 2010 conference, where I participated as a tutor a few months ago.
Thanks Abhay, is there a write-up from the CTA/RTA discussion at the IDID 2010 event? Of course, RTA is not eye-tracking specific (as I’m sure you know – but perhaps others don’t). You can still use RTA with plain vanilla observational testing (where the moderator doesn’t ask questions during the tasks, but afterwards). This is standard practice in quant studies where you want to measure task time accurately.
My pleasure, Harry. There’s no write-up on it available, to the best of my knowledge. I plan to write an India HCI 2010 review which would include this along with other stuff.
I agree with your note on using RTA for accurate task time measurement. And for the benefit of other readers who may not know, it may also be used as an alternative to CTA when participants are not comfortable with thinking aloud, which is what I have used it for in the past, for considerably *shorter* test sessions.
Pingback: The Cosmonauts: Eyetracking may doom your research
Hey Harry,
I agree with Guy. Eye tracking is very useful. And you give a good summary of the analysis problems that inexperienced or poorly trained people have. However, the slideshow does fall short in describing how to use the tool in the right way, for the right reasons and in the right context.
Improvements don’t just come about by using RTA. It is great, but there are plenty of other things that can be done to really leverage the power of eye tracking.
A few of us are working on a part 2.
JB
Really happy to hear you are working on a response to my presentation! I’d LOVE to see some shining case studies that show eye tracking in its best light.
+1
So I missed your session at both UXLX and Barcamp Brighton? Many thanks for putting up the slides, then – they pretty much give the idea. Cheers.
Hey Harry, hope you don’t mind my blatant use of all your stuff in my post. I know you’ll get a track-back, just wanted to say thanks in person :) I’m following you on Twitter too. Can’t believe I missed you for so long. Thanks for the great stuff! @lauracallow
No problem – all of the heatmaps from my presentation were kindly provided by Bunnyfoot anyway!
Pardon me, but why is RTA supposed to be the ‘correct’ way to use eye tracking? I can’t think of anything more unreliable than someone post-hoc rationalising their own gaze trails to me. I’ve done a fair bit of eye tracking and made a lot of the mistakes Harry mentions, yet when I’ve applied ET to its areas of ‘useful’ applications and applied the methods as the evangelists suggest, I have to say it’s been no more useful to me than a quick and dirty person-to-person interview. Frequently less so. I suspect that the case studies that support claims for ET simply don’t exist, which is why no one has been able to provide one single URL for us here. I would LIKE to be proved wrong, come on…
I think most people agree that CTA (concurrent think aloud) with an interviewer is not suited to eye tracking, as people can’t help but glance across at the interviewer occasionally, which de-calibrates the machine and can cause weird artefacts.
In terms of RTA, various critics say that rationalisation and confabulation are an issue. Various advocates deny this.
One of the reasons that CTA is not suited to eye tracking is the simple fact that people look at what they’re talking about. So there’s little point in doing any data analysis if the user has been chatting to the facilitator whilst being eye tracked… but you can produce some strong ‘F-patterns’ with CTA if that floats your boat :-)
Modern eye tracking systems do not ‘de-calibrate’ if the user looks away – that’s a myth. We watch users constantly looking at the keyboard during research sessions and then looking back at the screen. And I’m not sure what these ‘artifacts’ would be?
I’m quickly coming to the conclusion that using the term ‘RTA’ to describe the retrospective session is misleading. Traditional RTA may rely on the user rationalising their thoughts – just as you could argue CTA does – but eye tracking is completely different, because you are having a granular conversation about how a user viewed the page by playing back their eye fixations. The user isn’t rationalising – they are recalling their behaviour. And before somebody mentions the ‘missing the ketchup in the fridge’ argument – with good facilitation, a user in a retrospective session would tell you that they didn’t see something, even if they were looking at it.
Pingback: Mis bookmarks (ii) - Chavalina. Diario
Reading the comments here, I can’t help but conclude that one reason the eye tracking debate is currently going round in circles is that neither side – it has to be said – is particularly good at using evidence to back its claims.
A quick, and very un-scholarly, review of the literature reveals a few papers you may be interested in. Contrary to my own assumptions, there is some encouraging (but not conclusive) evidence for RTA. It seems we still lack a robust body of knowledge on the subject.
One problem is that the studies are all of varying quality and not directly comparable. However, unlike my opinions, these are findings of real studies that have undergone peer review.
Academia is a bit of a dirty word in UX these days, but this is one of those times when what you need is good, robust evidence to help you make up your mind (Note I haven’t included any of Jared Spool’s arguments, because although I happen to instinctively agree with him, he hasn’t published his findings as far as I know).
For your interest, here are some of the more relevant studies I found (not intended to be exhaustive):
http://bit.ly/9AiBkn
http://bit.ly/bPmgYy
http://bit.ly/9Cf2ox
http://bit.ly/dp41mj
Pingback: O que você precisa saber sobre eye-tracking | Arquitetura de Informação
Pingback: Prezentacja – “What you need to know about Eye Tracking” : UX Labs
Pingback: Подборка лучших постов за 2010 год от 90percentofeverything | Raketa – блог о реактивном IT