
Running an A/B Test

Posted by Dan in Conversion Optimisation

So by now, you know what an A/B test is and how to set one up. The next step is to actually run the beast to get some good, meaningful results. The biggest issue will be obtaining results that are statistically significant. Running the test the right way will ensure you get good info and avoid any common pitfalls.

What Should I Do?

  • Find the balance between testing for too little and too much time – stopping a test too early will give you results that are not statistically significant. At best, you will have a result that is not quite proven. At worst, you will have run an A/B test that makes your site worse. Conversely, running a test for too long will cost you conversions from the poorly performing variation. Don’t fret though – you can work out how long to run a test using an online calculator (see the first sketch after this list).
  • Maintain page consistency for visitors – if a visitor comes in and sees variation A, use cookies to keep showing them variation A. Showing them variation B will only confuse them. And since they will have already formed a judgement of the site on an earlier visit, mixing variations will skew your stats. It also prevents showing them contradictory information, such as voucher numbers or discounts that you are testing. (A sketch of this cookie-based bucketing appears after the list.)
  • Maintain sitewide consistency for visitors – similar to the above point, show re-designed elements (e.g. buttons, testimonials, calls to action) consistently across the whole site to ensure results do not get skewed.
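
To make the duration point concrete, here is a minimal sketch of what those online calculators compute: the visitors needed per variation for a two-sided two-proportion z-test, given your baseline conversion rate, the smallest relative lift you want to detect, and the usual defaults of 5% significance and 80% power. The traffic figure at the bottom is a made-up assumption – plug in your own.

    from math import ceil
    from statistics import NormalDist

    def sample_size_per_variation(baseline_rate, relative_lift,
                                  alpha=0.05, power=0.80):
        """Visitors needed per variation to detect the given relative
        lift with a two-sided two-proportion z-test."""
        p1 = baseline_rate
        p2 = baseline_rate * (1 + relative_lift)
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
        z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
        pooled = (p1 + p2) / 2
        numerator = (z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
                     + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
        return ceil(numerator / (p1 - p2) ** 2)

    # e.g. a 3% baseline conversion rate, hoping to detect a 20% relative lift
    needed = sample_size_per_variation(0.03, 0.20)
    daily_visitors_per_variation = 500  # assumption: substitute your own traffic
    print(needed, "visitors per variation, roughly",
          ceil(needed / daily_visitors_per_variation), "days")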
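And a minimal sketch of the two consistency points: bucket each visitor by hashing the ID you store in their cookie, so the same person gets the same variation on every page and every visit, with a roughly 50/50 split overall. The names visitor_id and test_name are illustrative, not from any particular tool.

    import hashlib

    def assign_variation(visitor_id, test_name):
        """Deterministically bucket a visitor into 'A' or 'B'. Hashing
        the cookie's visitor ID means the same visitor always sees the
        same variation, site-wide and across repeat visits."""
        digest = hashlib.sha256(f"{test_name}:{visitor_id}".encode()).hexdigest()
        return "A" if int(digest, 16) % 2 == 0 else "B"

    # The same visitor gets the same answer every time, on every page:
    assert assign_variation("visitor-123", "homepage-cta") == \
           assign_variation("visitor-123", "homepage-cta")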

What Should I Avoid?

  • Testing at different times – don’t test the control one month and the variation the next, or seasonality and trends will skew the data. If you test one variation one day and the other the next, weekends and peak days will skew the data. If you test different variations at different times of the day, you will pick up different trends there too. So split traffic randomly, 1-for-1, over the same period.
  • Stopping too early – like many things in life, finishing too early is never a good thing. Use the statistical significance indicators in your tool of choice, or an online calculator if you chose a bare-bones A/B testing tool (a sketch of such a check appears after this list).
  • Testing on return visitors – return visitors have already made up their minds (to some degree) about your website, so their reactions are less fresh, which will lead to…that’s right, skewed data! Besides, it would be downright rude to mess with the expectations of your loyal repeat customers by giving them the false hope of a new website design. So only show the test to new visitors.
  • Ignoring the data for “best practice” or “gut feel” – the people and the data don’t lie. So don’t doubt the results – just go with them. Yes, even if they contradict the best practices that I have preached about. Sometimes, it’s the counter-intuitive results that are the winners.
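
For those on a bare-bones tool, here is a minimal sketch of the significance check itself – a two-sided two-proportion z-test on the raw conversion counts. The numbers at the bottom are purely illustrative.

    from statistics import NormalDist

    def ab_p_value(conversions_a, visitors_a, conversions_b, visitors_b):
        """p-value for the difference in conversion rates between the
        control (A) and the variation (B), via a two-proportion z-test."""
        p_a = conversions_a / visitors_a
        p_b = conversions_b / visitors_b
        pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
        se = (pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b)) ** 0.5
        z = (p_b - p_a) / se
        return 2 * (1 - NormalDist().cdf(abs(z)))

    # illustrative numbers only: 150/5000 vs 192/5000 conversions
    p = ab_p_value(150, 5000, 192, 5000)
    print(f"p-value = {p:.4f}")  # below 0.05 is conventionally 'significant'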

What Next?

OK, so you’ve run a test for just long enough and you have the results. What do you do next? Well first of all, if there was a statistically significant “winner”, weave those changes into the permanent fabric of your website. And by permanent, I mean until the results of the next test come out.

Which brings me to another point – ITERATE! Do many A/B tests. Build upon the results of each test by testing more and more elements until you reach your desired conversion rate. Each test may have a reasonably small impact on the conversion friendliness of your website – but a sustained A/B testing campaign will bring in awesome long-term conversion rate optimisation results.

About

Dan is a slightly mad and wildly offensive online marketing consultant. He manages campaigns for E-Web Marketing with an irreverent style that inexplicably nets fantastic results. He is obsessed with all things online and can frequently be found blogging about Conversion Rate Optimisation, SEO and whatever random ruminations go through his questionable head.
