As conference season approaches, I have been reviewing papers in recent weeks, and I can’t help noticing that the new breed of ‘UX’ practitioner often eschews peer-reviewed research in favour of blogs and sound bites from gurus and commentators.
This is a shame, as it devalues proper research and allows the HiPPO (‘highest paid person’s opinion’) effect to creep in. The great thing about your post is that it highlights how good old-fashioned HCI research helps remove ‘I think’ from our vocabulary.
And what interested me in your post is that it points out that, all too frequently, it’s not the content you’re testing (in this case, the video); it’s the way you test and the questions you ask.
Not sure what it’s like in UX, but there is a real lack of science going into the creation of the questions/set-up, compared with the ‘scientific’ weight applied to the findings…
Discussion and debate can happen with the participants we observe, too. Starting with clarifying questions, we can begin to hypothesize with participants and get their reactions to our own understanding, and as a result we begin to validate some of our information. With a larger sample, we can start to increase the credibility (reliability) of our claims. This reflects what Roger Martin discussed in his talk years back about finding a balance that speaks to both business and design needs: http://vimeo.com/5274469. But I digress.
The team, after completing field research and cataloging observations (with codes) on worksheets, would make claims. Each claim required evidence, and other team members were encouraged to support or refute it. The claims and evidence became pieces of a “white paper,” which we used as the basis for a more thoroughly spelled-out report including design mocks, process information, and the other “usuals.”
While documentation is a big problem within a single project, there’s a larger problem in my mind with bridging and connecting insights across projects that share similar qualities.
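That claims-and-evidence process lends itself to a simple data model. Purely as an illustrative sketch (the names `Observation`, `Claim`, and `claims_sharing_code` are mine, not from any tool or process described above), here is how coded observations and evidence-backed claims could be cataloged so that insights can be connected across projects that share codes:

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Observation:
    """A single coded field observation, as cataloged on a worksheet."""
    project: str  # the study this observation came from
    code: str     # analysis code assigned while cataloging
    note: str     # the raw observation itself

@dataclass
class Claim:
    """A claim that must carry evidence and can be supported or refuted."""
    statement: str
    evidence: list[Observation] = field(default_factory=list)
    supporters: list[str] = field(default_factory=list)  # team members backing the claim
    refuters: list[str] = field(default_factory=list)    # team members disputing it

def claims_sharing_code(claims: list[Claim], code: str) -> list[Claim]:
    """Return claims whose evidence carries a given code, regardless of
    project -- one simple way to bridge insights across similar studies."""
    return [c for c in claims if any(o.code == code for o in c.evidence)]

# Example: two observations from different (hypothetical) projects share
# a code, so a query on that code surfaces related claims from both.
obs_a = Observation("checkout-redesign", "NAV-CONFUSION", "Backtracked twice on step 2")
obs_b = Observation("mobile-onboarding", "NAV-CONFUSION", "Missed the skip link entirely")
claims = [
    Claim("Step labels are unclear", evidence=[obs_a]),
    Claim("The skip affordance is invisible", evidence=[obs_b]),
]
print([c.statement for c in claims_sharing_code(claims, "NAV-CONFUSION")])
```

The point is less the code than the structure: once claims, evidence, and codes are explicit records rather than prose buried in a report, bridging insights across projects becomes a query instead of an archaeology exercise.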