—We always include an open-ended question to test people’s motives for coming to the site. If someone responds to the question “Why did you come to the site today?” with a vague answer like “To check the offerings” or “Just looking around”, consider that a yellow flag, and follow up with more specific interview questions. (A minimal flagging sketch follows this list.)
—When we get a sudden surge of recruits, we check the referrer data in our recruiting tool (Ethnio) to confirm where users are coming from. (A rough tally sketch appears at the end of this comment.)
—The blunt honesty approach you mention in the post also works for us: making it clear to recruits that they won’t receive an incentive if it’s discovered that they don’t qualify for the study. Sometimes we’ve added questions like “Have you been completely honest in filling out this survey?” You’d be surprised how many people say no.
—Live recruiting from the web is a big help for dealing with fakers: on top of screening the recruits, you can dismiss fakers with impunity, since there are always more valid recruits coming in. This is a benefit of remote user research methods in general: because users don’t have to travel anywhere and there’s no advance scheduling, the stakes of dismissing an unqualified user are lower.
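As a rough illustration of the first point above, here is a minimal Python sketch of how you might flag vague answers to that open-ended question for follow-up. The phrase list and field names are hypothetical, not taken from any real screening tool:

import re

# Illustrative list of evasive phrases; tune it against your own screener data.
VAGUE_PATTERNS = [
    r"\bjust looking\b",
    r"\bcheck(ing)? (out )?the offerings\b",
    r"\bbrowsing\b",
    r"\bno (particular )?reason\b",
]

def is_vague(answer: str) -> bool:
    """Return True if a free-text answer looks generic or evasive."""
    text = answer.lower().strip()
    if len(text.split()) < 3:  # one- or two-word answers are a yellow flag
        return True
    return any(re.search(p, text) for p in VAGUE_PATTERNS)

# Recruits whose answers are flagged get more specific interview questions.
recruits = [
    {"name": "A", "why_today": "Comparing your pricing plans with a competitor's"},
    {"name": "B", "why_today": "Just looking around"},
]
follow_up = [r for r in recruits if is_vague(r["why_today"])]
print([r["name"] for r in follow_up])  # ['B']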
We talk about live recruiting on our website here: http://boltpeters.com/services/recruiting/
And we also go more in-depth on screening recruits in Chapter 3 of our Remote Research book: http://www.rosenfeldmedia.com/books/remote-research/
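On the referrer point above, here is a similarly hedged sketch that tallies recruits per referrer from a CSV export to spot a suspicious surge. The file name and column name are assumptions; check your recruiting tool’s actual export format:

import csv
from collections import Counter

def referrer_counts(path: str) -> Counter:
    """Tally recruits per referrer from an exported CSV (assumed schema)."""
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row.get("referrer") or "unknown"] += 1
    return counts

counts = referrer_counts("recruits_export.csv")  # hypothetical file name
total = sum(counts.values()) or 1
for referrer, n in counts.most_common():
    # A single referrer dominating new sign-ups is worth investigating.
    if n / total > 0.5:
        print(f"Possible surge from {referrer}: {n}/{total} recruits")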
In a recent study I did using one of “those” agencies, I recognized one of my participants from a session I happened to oversee at a company across town a few days earlier. My screener had strictly specified no one who had attended market research in the past six months. When I confronted the recruitment agency about it, they said they had bent the screener to fill my quota, without warning me. No apology, no refund. That kind of attitude makes me really angry!
How do you source your users, then?
And what I saw when doing ethnographic fieldwork in one such org (the iSociety project, resulting in the Technology in the UK Workplace report) did not impress me at all. Deeply flawed.
The funniest response to a simple open question we’ve had was ‘None of your business!’
Why would you say that when applying to attend market research sessions, especially after investing the time to fill out all the other details?
An effective tactic I’ve used to avoid the fibbers is to ask the same question in different contexts: within the web-based screener, and then during a follow-up phone call. I typically pose two of these “test” questions to gauge honesty (they may or may not be pertinent to qualifying for the test).
For example, I find age to be the number one thing people “lie” about. Often an initial web-based screener will ask someone to specify their age range. You can follow up by phone and ask for their specific age and/or birthdate. That’s when they get “caught”. (A minimal consistency-check sketch follows below.)
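To make that cross-check concrete, here is a minimal sketch that compares the age range from the web screener against the exact birthdate given by phone. The field names and the “25-34” range format are illustrative assumptions:

from datetime import date

def age_on(birthdate: date, today: date) -> int:
    """Exact age on a given day, accounting for whether the birthday has passed."""
    return today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )

def consistent(age_range: str, birthdate: date, today: date) -> bool:
    """Does the phone-reported birthdate fall inside the screener's age range?"""
    low, high = (int(x) for x in age_range.split("-"))
    return low <= age_on(birthdate, today) <= high

# A recruit who picked "25-34" on the web screener but gives a 1970
# birthdate by phone gets flagged and eliminated from the pool.
print(consistent("25-34", date(1970, 6, 1), date.today()))  # False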
It’s also good to ask at least one open-ended question to get a feel for their character and how they communicate. After all, they will be required to share and communicate a lot during a testing session.
If, after the phone screening, their qualifications are at all questionable, I eliminate them from the pool. If you’re lacking a large pool of qualified candidates, have a second phone conversation and ask your test questions again. Repeat conversations and meetings are good practice in general, from participant screening to conducting a job interview. Whenever character is in question, I find the third round of meetings or interviews to be telling.
We rarely let the user know who the end client/website is before the research session, as they usually prep and learn how to use the site beforehand, just as you would prep for a test, which means you’re not observing natural behaviour.