Cennydd Bowles recently argued on A List Apart that User-Centred Design “may be limiting our field”. I don’t agree, and I didn’t agree with Jared Spool when he said the same thing at IA Summit 2008.
Funnily enough, I agree with many of Cennydd’s and Jared’s individual points, but I disagree with the overall thesis that UCD is past its best. It feels kind of flame-baity to me. Back in the days when devs used to argue about Agile all the time, Ron Jeffries wrote an allegory about development processes and baseball. It was a joke about a group of fictional developers who read the rules of baseball, decided to tweak them a little, and ended up playing a version involving a rolled-up sock for a ball and very little physical activity, all in the name of efficiency. It was easy, but they didn’t have any fun, and they posted an angry rant online condemning baseball as “problematic”. Yes, it’s a daft story, but Ron’s point is solid: before denouncing baseball, Agile, UCD or anything else, it makes sense to stop for a moment and work out whether you’re playing it the same way everyone else is.
These days, UCD is seen as a pretty vague process. Everyone makes up their own rules and we all get different mileage out of it. Historically there have been various efforts to formalize UCD, but most design groups keep it pretty open: you go through iterations of analysis, creation and evaluation, usually trying to involve real users in the evaluation activities. You start with broad-brush concepts and divergent, broad-brush research, then home in on detailed concepts and convergent, detail-oriented research. That’s it in a nutshell. It doesn’t somehow spit out innovative products when you turn the handle, but hey – it’s a process, not a fairy godmother.
About a year ago I did some consultancy with an agency that ran about 100 hours of usability testing on a shonky Axure prototype in the name of UCD. It must have cost them about $150k, with barely any difference between the design before and after. They said their client wanted to be extra sure that the design was highly usable, so they added more research – but somehow forgot about the analysis and design bit. There ain’t no cure for stupidity, but that isn’t the fault of any particular acronym.
Maybe I’m being boring here. I agree there is a lot of bad design happening out there, but does that mean we need to “look beyond” UCD? In fact, I think we should look directly at it. Let’s talk about the common mistakes and the flaws. Let’s evolve it. But please, let’s not coin any new terms just yet.
Thanks for responding Harry. A quick note, since maybe I failed to communicate my main message properly:
I don’t think UCD is dead, past its best or harmful. It’s the default thinking within the UX industry that I want to challenge. UCD in the right places at the right time is a fantastic method. In other places, it’s significantly inferior to other approaches. The blind adherence to UCD as the One True Way is what I think is limiting our field.
Hi Cennydd,
Would you be able to go into more detail about the default thinking you’re talking about here? As a junior designer it’d be great to avoid common mistakes, with the goal of moving our industry forward.
Cheers
Hi DaveM, at the risk of being annoyingly circular, my original article best explains my thoughts. In short, the assumption that UCD is the best way to design is what bugs me. It’s one of many, with corresponding strengths and weaknesses. But if, say, you’ve solved the same problem five times before, research and testing can sometimes be a giant waste of time.
Hi Harry / Cennydd,
This is an interesting article, but I have to agree with Harry… mostly.
The problem that I have with this as both a practitioner and ID academic (I prefer interaction design to UX design for a number of reasons) is that it seems to support the tradition of mythologising ID that many practitioners are trying to dispel. You appear to be suggesting (and correct me if I’m wrong) that expert analysis is more useful than empirical analysis, when the truth is that they both have their place. I would never purport to suggest that I know more about how a user is likely to interact with a product than observation of the user doing just this would tell me, and I’d be interested to hear why you think this should be the case.
I agree wholly that expert analysis (including heuristic modelling and discount usability engineering) is hugely beneficial and can save a lot of time and cost early in the design process, but experts often don’t agree with each other on what will and won’t work, so it’s certainly not fool-proof (no suggestion there that experts are fools!). My suggestion, probably 90% of the time, would be to perform expert analysis early on and then observe the users with a low-fidelity prototype that supports the findings of that analysis.
Likewise, your discussion of scientific method (and I don’t really think Miller’s magic number 7 and Fitts’s Law comprise scientific method – they’re more like poorly cited and poorly employed rules, or heuristics as far as the former is concerned) assumes a singular approach rather than giving any consideration to a more holistic approach or triangulation.
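(As an aside, for anyone who hasn’t met it: the Shannon formulation is one common way Fitts’s Law gets cited – movement time grows with the log of target distance over target width. A minimal sketch, where the constants `a` and `b` are purely illustrative; in practice they come from regression on observed user data, which is rather the point about empirical analysis:)

```python
import math

def fitts_movement_time(a, b, distance, width):
    """Shannon formulation of Fitts's Law: MT = a + b * log2(D/W + 1).

    a, b     -- empirically fitted constants for a given device and user
    distance -- distance from the pointer to the target centre
    width    -- target width along the axis of motion
    """
    return a + b * math.log2(distance / width + 1)

# Illustrative constants only; real values must be fitted to user data.
mt = fitts_movement_time(a=0.2, b=0.1, distance=200, width=20)
```

The model predicts, for example, that shrinking a target makes it slower to hit – which is exactly the kind of claim that’s easy to cite and easy to misapply without observing real users.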
I do think this type of discussion is important. ID / UXD can’t stand still, but neither is the constant invention of new buzzwords and processes helpful.