There’s a persistent myth about guerrilla user research: that it’s perfectly OK to grab just anyone to act as a proxy for your users. Perhaps it’s the whole low-cost, lo-fi ethos that makes this myth so easy to believe.
Actually, if there’s one thing you shouldn’t cut corners on, it’s your recruitment. User testing should mean real end users, not Bob from accounts or some random dude walking down the street. Let me explain why this is so important with a case study.
Picture this: a large institution designs and builds a decision-making app for their customers. Their customers are sales staff in small partner companies whose job it is to resell the larger institution’s products to Joe Public in return for a commission. Now, during the early stages of design they thought they were doing all the right things. Wireframing, low-fidelity prototyping, and usability testing – but here’s the crunch – they did their user research on their colleagues within the institution. They included middle management, data entry staff, the receptionists, and even some of the staff in the canteen. In other words, everyone but the sales staff in the partner companies who the product was really for.
And so they ended up with a product that was really usable – even a naive first-time user could use the system to punch in the data and get product recommendations out.
At this point, having spent about half a million dollars, they were sure they were onto a winner. So, they decided to get some real end users in for face-to-face research sessions. The first few users were very polite, since they had strong professional and political relationships with the institution making the product. But then one of the users, an admin assistant, commented that she would never use this system in her workplace. Her boss would do all of this work on paper and in his head. Then, he’d give her his calculations and she’d double-check them in Excel. Apparently, his figures were almost always spot on.
Sure enough, when all the other users were asked about their current practices, they admitted that they also did all their decision-making in their heads. In fact, three-quarters of them said they’d never use the app, because it would actually increase their workload. The owners of the product suddenly had a crushing realisation. The app was highly usable but simply not useful – it was solving a problem that didn’t exist. In other words, they were screwed.
And so ends today’s lesson. Always carry out research on real users, or you might end up like them.
Another key point here is that you don’t define the competitors to your product – the users do.
I agree, to an extent. I believe it’s possible to get insight from non-representative users… but only on aspects more granular than high-level requirements or needs, as in the case you describe.
In fact, the case study reads more like a case of poor requirements analysis. Building software that has no use is a different failure from building useful but difficult-to-use software.
@Howard – yes, non-representative users can give you useful input on baseline usability, but my point is, why bother? If you’re going to spend all that time and effort doing research, you may as well get the right people in. The cost difference between an outsourced user research project with the wrong users and one with the right users is typically less than 10%.
Your point about requirements analysis is of course true, but the idea that requirements analysis is a phase that is timeboxed right at the beginning of a project and then never thought about again is an old-school software engineering concept. In a UCD process, requirements analysis is done mostly at the start, but it continues throughout the project in a diminishing manner as you iteratively prototype your product. Showing real users early low-fi prototypes is an excellent way of validating and nuancing your requirements.
“but my point is, why bother? ”
It depends entirely on what you’re trying to develop. Especially with websites built for non-specialized audiences, there is a mountain of useful information you can get from non-representative users.
You bother if you don’t have the budget or time to perform a formal test. In other words, if it’s the difference between doing no test and a test with non-representative users, perform the test.
There are many clients who still subscribe to the myth of the genius designer (or the genius in themselves), so they’re reluctant to invest the time and $$ for testing.
Absolutely! You definitely want to make sure that you’re talking to the right people, but I think that the less specific your application, the more general you can be in your choice of representative users. For example, if you were working on interfaces for a new kind of TV, you could conceivably recruit a random person from the street on the basis that he or she would most likely have a TV, or have used or watched one. As always, it depends (TM).
I have to agree with Cam Beck here. There are, however, some sites that I wouldn’t even bother testing if I couldn’t recruit representative users.
The big problem I experience is clients taking a market research attitude to recruitment. They then get all intricate about things that won’t make a blind bit of difference.
I think it’s a real mistake to design a product or service for “everybody” without taking the time to deconstruct the user base.
If you genuinely are in the business of designing for a very wide user base, you should segment your users to make sure you are addressing all their needs.
Having said that, if you have really poor usability, pretty much any old random person can be used to point that out. Right?
I am glad you brought this up, because when I speak to folks I very often hear that this is the kind of testing they do, and it always makes my skin crawl. I guess it can have some value: if you get folks with the appropriate level of “Internet savvy”, you can test for the most basic usability with specific tasks, such as searching for a specific keyword or seeing whether they understand a design pattern. But you’re not going to find out whether the product is useful or desirable, nor whether you’re asking users to carry out the tasks they would really carry out.