What do you mean by A/B Testing?


I have been hearing the term A/B testing at work often and was curious to learn what it is and how it differs from the controlled experiments we use in social sciences. Simply put, A/B testing is testing two versions of a product simultaneously – version A and version B – with different user groups to identify which version is more effective. It has become popular recently with the explosion of the internet, as people use A/B testing to decide how their online, web presence should be designed.

A simple example: suppose I am designing how many steps are involved from selecting a product into the cart to the final checkout stage. We can create two versions – version A with a 2-screen series, and version B with a 3-screen series that also offers the choice of adding some related products to the cart during payment. We would then test which version converts more visitors into buyers, or increases the number of items they finally end up buying on the site. The key is that you test the two versions simultaneously, not one after the other, with different user groups, and evaluate which version helps you reach your goal.
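To make the comparison concrete, here is a minimal sketch of how you might decide whether one checkout flow really converts better than the other. The visitor and conversion counts are hypothetical, and this uses a standard two-proportion z-test rather than any particular testing tool:

```python
import math

def conversion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is version B's conversion rate
    significantly different from version A's?"""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no real difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return p_a, p_b, z

# Hypothetical 50/50 traffic split: 5,000 visitors see each checkout flow
p_a, p_b, z = conversion_z_test(conv_a=400, n_a=5000,   # version A: 2 screens
                                conv_b=460, n_b=5000)   # version B: 3 screens
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}")
# A rough rule of thumb: |z| > 1.96 means the difference is
# statistically significant at the 5% level
```

With these made-up numbers, version B's lift (9.2% vs 8.0%) comes out significant, which is the kind of evidence that lets real user data override our design intuitions.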

This area is becoming more popular with website optimization, where you get a live feedback loop from users' behavior to improve your site and achieve your targeted goals. I find this fascinating, because much of the conventional wisdom we hold when designing a product gets quashed once we have real user behavior data. So when we do A/B testing, we need to set aside our default judgments and look for the intuitive – and sometimes non-intuitive – patterns in the behavior data. That is when we leverage the power of experiments.

To understand the power of experiments: in a recent course (MOOC) I am taking, I read about a simple case involving organ donation. If we look at the chart of people who have voluntarily signed up for organ donation, there is a wide spectrum, from as low as the 20% range to above 90%. The conventional reasons we would jump to are the awareness level of the program, marketing, availability of the program, and so on. But experiments point to a totally different and unexpected reason: the way the form is designed changes this a lot. If it is an opt-in form, people tend to ignore it; but in the opt-out version, where you check the box only if you do not want to sign up for organ donation, most people choose not to check it. We tend to take the path of least resistance. This is an example result from experimentation. I am sure A/B testing is becoming more and more popular with top online companies like Amazon, Google, Bing, and Facebook.

You can read more about suggested-pricing scenarios, where we behave differently, and their A/B testing in this article. This gets even more fascinating as we get into choice modeling and choice architecture.



Crowdsourcing is gaining more and more traction with the penetration of the internet, as more and more people stay connected, and plenty of ideas are now mushrooming around it. I had already posted some time back on "How to make money by doing small errands – Service Networking". With services like TaskRabbit, Amazon's Mechanical Turk (M-Turk), or Clickworker, people can use their expertise to help other people across the world. With the same intention, Google has launched a new service called Helpouts, which leverages its Hangouts platform. I am not sure how Google is going to scale this, but the intent of the service fits Google's mantra of making everything in the world knowable through Google's services and platform. So in Helpouts, you get real help from real people. Google screens folks before making them helpers on the Helpouts platform.

Crowdsourcing is an area that is slowly getting crowded, as big players like Amazon and Google get into this space. There are many areas that could leverage a crowdsourcing platform. More to come as I research this area.