I have been hearing the term A/B testing at work often and was curious to learn what it is and how it differs from the controlled experiments we use in the social sciences. A/B testing is, simply put, testing two versions of a product simultaneously – version A and version B – with different user groups to identify which version is more effective. It has become popular recently with the internet explosion, as people use A/B testing to figure out how their online and web presence should be designed.
A simple example: take the case of designing an experiment around how many steps it takes to go from selecting a product into the cart to the final checkout. We can create two versions – Version A with a 2-screen series, and Version B with a 3-screen series that also offers a choice of adding some related products to the cart during payment. We then test which version converts more visitors into buyers, or leads them to end up buying more items on the site. The two versions are tested simultaneously, not one after the other, with different user groups, and we evaluate which version helps reach our goal.
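To make the idea concrete, here is a minimal sketch of how the result of such a test could be evaluated. This is not any particular tool's method – it simply splits visitors between the two versions and compares their conversion rates with a two-proportion z-test; the function name and the visitor/conversion counts are made-up illustrative assumptions.

```python
# Hypothetical evaluation of the checkout A/B test: compare the conversion
# rate of Version A vs Version B with a two-proportion z-test.
from math import sqrt, erf

def conversion_z_test(conv_a, n_a, conv_b, n_b):
    """Did version B convert visitors at a different rate than version A?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)               # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) # standard error of the difference
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided p-value from the normal CDF
    return p_a, p_b, z, p_value

# Made-up numbers: 5,000 visitors per version, run simultaneously.
p_a, p_b, z, p = conversion_z_test(conv_a=400, n_a=5000, conv_b=460, n_b=5000)
print(f"Version A: {p_a:.1%}  Version B: {p_b:.1%}  z = {z:.2f}  p = {p:.3f}")
```

If the p-value is small enough, the difference in conversion is unlikely to be due to chance alone, and the winning version is rolled out to everyone.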
This area is becoming especially popular in website optimization, where you get a live feedback loop from users' behavior to improve your site and achieve your targeted goals. I find this fascinating, because much of the conventional wisdom we rely on when designing a product gets quashed once we have real user behavior data. So when we do A/B testing we need to set aside our default judgments and look for intuitive, or sometimes non-intuitive, insights in the behavior data. That is when we leverage the power of experiments.
To understand the power of experiments, in a recent course (a MOOC) I am taking, I read about a simple case involving organ donation. If we look at the share of people who have voluntarily signed up for organ donation across countries, there is a wide spectrum, from as low as around 20% to as high as 90%+. The conventional reasons we would jump to are the awareness level of the program, marketing, availability of the program, and so on. But experiments point to a totally different and unexpected reason: the way the sign-up form is designed. If it is an opt-in form, people tend to ignore it, but if it is an opt-out version, where you check the box only if you do not want to sign up for organ donation, most people choose not to check it. We tend to take the path of least resistance. This is an example of what experimentation reveals. I am sure A/B testing is becoming more and more popular with top online companies like Amazon, Google, Bing, and Facebook.
You can read more about A/B testing of suggested-pricing scenarios, where we behave quite differently, in this article. This gets even more fascinating as we get into choice modeling and choice architecture.