Quite a few people in the UX industry have been moaning – myself included – about the demise of critical thinking, and the fact that people don’t question what they read before accepting it as solid fact, particularly if it comes from a well-known source.
So, what about you? Are your critical thinking skills up to scratch?
This press release / research report from Vibrant provides a nice, easy exercise for you. See if you can list the top issues. Suggestions in the comments, please.
How many participants were tested? Percentages are used to disguise the numbers. What exactly were they asked to do? All very vague!
Phrases like “who saw the ad” and “users that clicked through” lead to the conclusion that the only thing measured was the effectiveness of the ad itself, not of the in-text solution. To measure that, one needs the number of people who moused over the links and the number who didn’t. Those figures would be revealing.
I’m also wondering where they got the “9x” from.
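Just to illustrate what I mean, here’s a back-of-the-envelope Python sketch of how a headline figure like “9x” could be manufactured. Every number in it is invented (the hover counts, the page views, the baseline CTR); Vibrant published none of them, so this is pure guesswork about their arithmetic.

```python
# Hypothetical counts - invented purely to show the arithmetic.
in_text_hovers = 450      # users who moused over the double-underlined link
in_text_clicks = 90       # users who clicked through to the advertiser
page_views = 10_000       # pages on which the in-text ad was available

# One common trick: divide your format's click-through rate by an
# assumed industry-average display CTR and report the ratio.
in_text_ctr = in_text_clicks / page_views             # 0.9%
banner_ctr = 0.001                                    # assumed 0.1% baseline
print(f"uplift: {in_text_ctr / banner_ctr:.0f}x")     # -> uplift: 9x

# The figure the press release never gives: the mouse-over rate,
# i.e. how often the in-text format itself actually gets attention.
print(f"hover rate: {in_text_hovers / page_views:.1%}")  # -> hover rate: 4.5%
```

Without the denominators, “9x” is unfalsifiable.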
Hi Harry,
some comments:
– first, it’s not clear whether the study was conducted in a lab setting or not; if people were invited to a market research session in a dedicated facility, they probably paid more attention to the page text than they normally would;
– What about the size of the groups?
– What exactly were the conditions of the two groups? It’s not explained – e.g. was “being exposed to the ad” the only relevant difference?
– No measure of the confidence in the results is given;
– Does “who saw the ad” refer to the whole exposed group or just to a fraction?
– “Users who clicked were most engaged”: is that really informative? It seems obvious to me that people who click through are more interested;
– “Users who clicked were most engaged”: I suppose the percentages are computed on groups of different sizes (users who clicked vs. the control group): again, there is no measure of the statistical significance of the comparison;
– About the share of mind: is the 6% uplift really significant from a statistical point of view, or could it have arisen by chance? (See the sketch below.)
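To make the statistical point concrete, here is a minimal two-proportion z-test in Python. The group sizes and rates are invented (Vibrant gives neither); the sketch just shows why the missing sample size matters – the same 6% uplift can be clearly significant or indistinguishable from chance depending on n.

```python
import math

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)            # pooled proportion under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Normal CDF via erf; two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical: a 6% uplift (36% vs. 30%) at two different group sizes.
for n in (100, 1000):
    z, p = two_proportion_ztest(int(0.36 * n), n, int(0.30 * n), n)
    print(f"n={n}: z={z:.2f}, p={p:.3f}")
# n=100:  z=0.90, p=0.367  -> could easily be chance
# n=1000: z=2.85, p=0.004  -> unlikely to be chance
```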
About the suggestions:
– I cannot say that Vibrant fails in its communication, if its purpose is to convince people that the in-text ad works. But if we adopt the perspective of “critical thinking”, then we should expect information like the sample size, the conditions of the experiments and at least a confidence interval.
But again: I suppose that the people at Vibrant, aiming to prove “they are right”, preferred a more straightforward message to overloading the audience with this “technical” information.
thanks for the post!
cheers
carlo
“Brand awareness” means the subjects remember your irrelevant logo got in the way of their reading something. “Increase in purchase consideration” and “improved competitive position” mean that the subjects are paid to say what they know Vibrant wants them to say: spam works.
Seems to me (though the press release was vague) that one group saw no ads while the other group saw the in-text ads. If that’s the case, then of course the group with some ad exposure will have higher recall/conversions than the group with no ad exposure. But this proves nothing about the effectiveness of in-text ads relative to other ad forms. Nor did the press release address potential negative consequences of in-text ads, like decreased user satisfaction or usability.
“respondents who were given simulated exposure to the Vaseline in-text ad”
Simulated! So, they contrived a made-up test and then used it to pretend they had collected real, meaningful numbers.
Lots of good specifics that people have pointed out. I would like to draw attention to a more encompassing weakness in the ‘study’.
A scientific approach means taking a cold, dispassionate view of the question at hand to understand what’s happening, removing as much bias as possible – acting more like a judge seeking the truth rather than a lawyer gathering evidence. This should help avoid a methodology geared to finding desired results.
Then again, this is clearly a piece of marketing, so by that measure they’ve probably done a reasonable job!
I do not know why they use all the smoke and mirrors. These in-text ads are effective at grabbing attention, in the same way that rollover ads, pop-up ads and auto-play video ads are effective.
With this effectiveness comes irritation, and if it reaches a certain level, users will leave the site and never come back. That is the real issue. It would be easy to run an experiment showing the short-term impact on awareness, but the long-term effects are the big question (as Ethan said). It is also obvious what most big publishers think the answer is: very few run in-text ads.
Have you actually seen the ads they are talking about?
http://j.mp/f6df0j
If you accidentally move your cursor over the double-underlined link, you trigger a pop-up with a video in it that automatically plays, with loud audio.
Bleurgh.
A major item that sticks out to me…
IN-TEXT IMPROVED COMPETITIVE POSITION
Saying that the brand position improved because the control group was x% and the test group was y% is inaccurate:
– they should have surveyed the control and test groups about their feelings on the brands beforehand and then again afterwards.
– just because the control group didn’t see the ads doesn’t mean they had the same initial feelings about the brands as the test group. It’s possible that the test group had a higher opinion of Vaseline to begin with.
@Amy
If the two groups had sufficient size and the participants were randomly allocated to the control and test groups, it wouldn’t be necessary to check the baseline of each prior to the intervention (a toy simulation below shows why).
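To back that up with something more than assertion: a toy Monte-Carlo sketch in Python (all opinion scores made up) showing that under random allocation, the baseline gap between control and test groups shrinks as group size grows – so a pre-survey is mainly essential when groups are small or allocation isn’t random.

```python
import random

random.seed(42)

def baseline_gap(n_per_group):
    """Randomly allocate simulated participants and return the gap
    between the two groups' average prior brand opinion (0-10 scale)."""
    opinions = [random.uniform(0, 10) for _ in range(2 * n_per_group)]
    random.shuffle(opinions)                          # random allocation
    control, test = opinions[:n_per_group], opinions[n_per_group:]
    return abs(sum(control) / n_per_group - sum(test) / n_per_group)

for n in (20, 200, 2000):
    gaps = [baseline_gap(n) for _ in range(500)]
    print(f"n={n}: average baseline gap = {sum(gaps) / len(gaps):.3f}")
# Expect roughly 0.73, 0.23, 0.07 - the baseline difference Amy worries
# about vanishes as the randomly-allocated groups get bigger.
```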
– There’s a big difference between ‘saw the ad’ and ‘willingly evoked the ad’ for a start
– if I tell my 7-year-old we’re playing ‘spot the difference’, she’ll spot the difference (i.e. how neutrally was the scenario presented?)
– depending on your user-group, a double-underlined link may evoke mouse-over interest just to find out what it does (not everyone yet knows it’s likely to be a sponsored link, some expect a definition or a ‘more’ link)
– did they vary the wording of the link? ‘Skin condition’ has different overtones from ‘youthful complexion’ for example
Personally, the minute I see a page with double-underlined links I a) downgrade my expectations of that page’s content b) skirt round the links at all costs when mousing around and c) curtail my visit to the site. So you could say yes, my awareness was raised – but for all the wrong reasons
In the scenario presented, the very choice of link wording – ‘skin condition’ – is designed to appeal to those for whom that may be a concern; after all, if you have a baby-soft glow from head to toe like me, why would you care to click? The second paragraph focuses on abnormally dry skin, not ‘winter dry skin’ – so that page would (in search) draw in people with skin problems. So if you have clicked on the link, you are more likely to be in a group for whom purchasing the product is a possibility – “Good news, everyone!” (my best Hubert J. Farnsworth impression).
In short, there is way too little data on the nature of the user group, the context in which the tests were run, and the polarity of the awareness (+/-)
But there are some nice green upward-pointing arrows :)