I can only guess that version 3 was the best in terms of conversion, because this is the page they have on their live site now.
You are asking all the right questions. This wasn’t my own study – I am just reblogging work done by Maxymiser.
There are more details about the study in a previous post.
Maxymiser assure me that the study reached p < 0.05, i.e. a 95% confidence level.
The conversion measured was clickthroughs into the checkout process.
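For anyone who wants to sanity-check a claim like p < 0.05 on their own data, here is a minimal sketch of a two-proportion z-test on clickthrough counts. The visitor and conversion numbers below are made up for illustration – the study’s real figures are not published here – and this is a generic textbook test, not Maxymiser’s own methodology:

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-tailed
    return z, p_value

# Hypothetical numbers: 5% vs 6% clickthrough on 10,000 visitors per variant.
z, p = two_proportion_z(500, 10_000, 600, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With those made-up numbers the difference comfortably clears the p < 0.05 bar; halve the traffic and it quickly stops being so clear-cut, which is exactly why the sample-size questions in this thread matter.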
Additionally, what statistical significance did you reach? 90%? 95%? 98%? 99.9%?
All versions run concurrently, so seasonal variations are not an influence either, as far as I’m aware.
To address some of the points that James raises at the top of the thread, the Maxymiser system monitors the statistical “chance to beat all” percentages to ensure that statistical validity is reached before we draw conclusions.
We also identify new vs returning visitors so that a returning visitor always sees the same variant – important for usability and the stats.
We can also test changes across multiple pages, something many of our competitors struggle with, making multi-page form testing possible.
Delivery cost is a tricky one; you’re right on that point. I believe that in this particular test the client was interested in discovering the point at which delivery costs should be presented in order to minimise exit rates. Multivariate testing can be used progressively alongside funnel analysis to push more traffic through the funnel one page at a time.
Feel free to contact me privately or follow up here with any questions, I will monitor this page for a few weeks.
Kind regards,
Alasdair (Maxymiser)
You need to provide lots of information before you see the delivery costs.
I’d be interested in seeing how a page just like version 3 performed, but with a simple, intelligent/dynamic presentation of delivery costs up front.
There are two factors I’m really interested in. Is it the complexity of the delivery costs that causes the problem, or is it the fact that they are introduced so early in the process?
If delaying the delivery costs were to cause an uplift in conversion, that really would be interesting.
I know, I should get out more.
It is fascinating to see the results. Can anyone explain the difference in performance between Version 5 and Version 4?
Also, it’s clearly a useless method if you don’t have a live site with a healthy user base.
A big caveat that you don’t mention is that this sort of testing only works on high-volume sites. To get a significant result, each variant needs many exits; how many depends on the size of the uplift each variant produces. This is to make sure that you are measuring real behaviour rather than noise. The test also needs to be done over a short period of time to make sure that your results are not influenced by seasonal factors.
The other negative is that users can be faced with a changing user interface. If I log in from home and the checkout button has moved around, you may lose the sale.
This method of testing is great for working out very small changes on very large volume sites.
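To put a rough number on the “high volume” caveat, here is a minimal sketch using the standard two-proportion sample-size formula at roughly 95% confidence and 80% power. The baseline conversion rate and the uplift are assumptions chosen purely for illustration:

```python
from math import sqrt, ceil

def sample_size_per_variant(p_base, rel_uplift, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variant to detect a relative uplift
    in conversion rate at ~95% confidence and ~80% power."""
    p1 = p_base
    p2 = p_base * (1 + rel_uplift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# Assumed: a 5% baseline checkout rate and a 10% relative uplift to detect.
print(sample_size_per_variant(0.05, 0.10))
```

Under those assumptions you need on the order of 30,000 visitors per variant, so with five variants a low-traffic site would take months to reach significance – which is the commenter’s point.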