The challenge lies in designing the scenario, not just building it. It's a traditional method, and quite a successful one: you make the participant run, make him drink beer, put him on a train, put him in a shady room, and then test — but only if your product is designed for that scenario. You can learn a great deal from movie direction or from posed photography. After all, all fields are connected.
I am consciously misunderstanding you to make a point: when conducting usability tests (at least when I do), I am testing whether or not my subjects _understand_ the interface they are working with. Can they find their way around? Do they understand what we wanted to communicate? Is the subject looking for functionality we didn’t think of – or is he or she using the site in a way we didn’t think of?
I believe that you should always have more than one source… just like you say. Other sources could be Analytics – or even better, I think, software like userfly.com, clicktale.com, or mouseflow.com.
You might also want to have a look at Ollesch, Heineken and Schulte (2006), which deals with the question of the virtual experimenter in online research (http://ijis.deusto.es/ijis1_1/ijis1_1_ollesch_pre.html, Disclaimer: I am the “Schulte” :))
Instead, I think he’s gone back over past research and used the stats as an example of how real behaviour and behaviour in usability testing can differ.
I don’t think you’ll ever totally get around this problem, regardless of how experienced a test facilitator you are. It’s better to accept that all research has its drawbacks.
Jen, usability testing isn’t expensive. It’s as expensive as you make it. A/B testing has its limitations as well, just as every type of research does.
In my experience, usability test findings are dominated by very real and (in hindsight) obvious-looking issues, usually because things were misunderstood or went unnoticed. These aren’t the type of findings that can be explained away by the Hawthorne effect.
For me, the best gauge of how much participants have been trying to please you is whether they change their tune once they know the session is over. You could call them ‘usability out-takes’ I suppose.
The camera has stopped rolling, they’ve got their cash, you’re walking them to the door and out pops a total clanger that you wish you’d had on film.
There are definitely ideal contexts and approaches to moderating qualitative interviews, but sometimes we have to make compromises.
E.g. running sessions at the client’s office will almost certainly make participants less likely to be honest, but it may mean more stakeholders can attend and muster buy-in where it is needed most.
Of course there is an overlap, such as Harry’s great example of the T&Cs. In these cases we can be aware of the bias beforehand and compensate for it in our reports.
Another trick is to ask users for specific facts when they make a claim that you believe to be without foundation. For example: when did they last read a T&C at home, and what can they remember about it? I find the truth comes out as users ‘fess up to not normally doing such-and-such.
If I may list some tips from Nick Bowmast, I am sure he wouldn’t mind. These are a few techniques to mitigate users’ desire to please the moderator, mostly by being self-deprecating:
Avoid fancy, swanky ‘designer lounge’ type facilities where participants are waited on with a choice of hot beverages.
Dress down.
Play down your role.
Play down any techy kit.
Carry absolutely no air of importance: “I’m just there to take notes and perhaps ask a few questions.”
Try not to mention design or designers.
Don’t seem too interested in the outcome of the interview.
Ask them to be selfish: to imagine that the product should be made just for them, nobody else.
Make minimal and only neutral comments like “I see”… as opposed to “good”… when acknowledging comments.
Try to maintain a distance and position that lets you slip out of the participant’s viewpoint (so they can forget you are there).
Whether it’s face-to-face sessions or recorded stats, looking at the raw results without proper analysis and interpretation will lead to misunderstanding.
The expert should consolidate the findings, then present the information in a sensible, understandable manner.
Is A/B testing the future? Cheap, accurate, easy to set up… and no experimenter effect. That’s a good debate to be had right there.
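On the “cheap and easy to set up” point: the statistics behind a basic A/B comparison really are minimal. Below is a rough sketch in Python of a standard two-proportion z-test you could run on each variant’s conversion counts; the function name and the visitor numbers are made up for illustration, not taken from any particular tool.

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test: do variants A and B convert at different rates?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference.
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical numbers: 10,000 visitors per variant.
p_a, p_b, z, p = two_proportion_z_test(conv_a=230, n_a=10_000,
                                       conv_b=270, n_b=10_000)
print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}  p = {p:.3f}")
```

That said, “no experimenter effect” only covers the measurement itself: deciding what to test and how to read a marginal p-value are still human judgments, which is exactly where the debate comes in.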