Over a cup of coffee with my father and a young friend of his, Siddharth, I first heard about the concept of an A/B test in 2018. He explained that it is a simple idea: you show two versions of your website to customers in parallel and track which one performs better over time. Once you are confident, you show the winning variation to all customers and drive better conversion rates. If an idea doesn't beat the status quo, you can quickly reject it and move on.
I was young and fairly new to the tech industry, so the idea of showing two versions of a website at once was out of the box and deeply fascinating to me. Eight months later, purely by serendipity, I found myself working at a company that sold an A/B testing platform. Since then I have been working in this newly emerging field of A/B testing, fascinated by the depth and uncertainty it encompasses.
A/B testing is the application of randomized controlled trials to the modern tech industry, which has allowed for data collection at an unprecedented volume. The amount of data now available to run a single A/B test is far larger than in earlier applications of randomized trials. Experimentation is now a priority for most successful tech companies, and the field continues to grow and gather interest.
What is running an A/B Test like?
In its most basic form, there are four parts to running an A/B test: Hypothesis, Implementation, Analysis and Inference. I explain these concepts simply here; each component has nuances that I will cover in later posts.
Hypothesis: You start by creating a hypothesis to improve your website toward a specific goal, such as click rate or revenue per visitor. You create a hypothesis by studying the underlying dynamics of how customers react to your website.
Implementation: To implement an A/B test, you usually need two components: a randomizer that automatically assigns visitors to control or variation, and a data logger that records the desired target variable for both groups. You then run the test and record samples from both versions of the website in parallel.
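The two implementation components above can be sketched in a few lines. This is a minimal illustration, not a real platform API: the function names, the 50/50 split, and the use of a seeded hash for deterministic assignment are all my assumptions.

```python
import random

# In-memory "data logger": target variable samples, keyed by bucket.
log: dict[str, list[float]] = {"control": [], "variation": []}

def assign_bucket(visitor_id: str, seed: int = 42) -> str:
    """Randomizer: deterministically assign a visitor to 'control' or 'variation'.

    Seeding on (experiment seed, visitor id) means a returning visitor
    always sees the same version of the site.
    """
    rng = random.Random(f"{seed}:{visitor_id}")
    return "control" if rng.random() < 0.5 else "variation"

def record_conversion(visitor_id: str, converted: bool) -> None:
    """Data logger: record the target variable (here a 0/1 conversion)."""
    log[assign_bucket(visitor_id)].append(1.0 if converted else 0.0)
```

The deterministic hash-style assignment matters in practice: if a visitor were re-randomized on every page load, their experience would flip between versions and the recorded samples would be contaminated.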
Analysis: To analyse the results, you look at the mean of these samples once a sufficient sample size has been collected. You then run the results through a statistical significance calculator to check whether the difference you observe is significant.
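For a conversion-rate goal metric, the significance calculator mentioned above is often a two-proportion z-test. Here is a minimal sketch; the sample counts and the 0.05 threshold are illustrative assumptions, not real experiment data.

```python
import math

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference in conversion rates A vs B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical numbers: control converts 200/10,000, variation 260/10,000.
p = two_proportion_z_test(conv_a=200, n_a=10_000, conv_b=260, n_b=10_000)
significant = p < 0.05  # conventional threshold; platforms may choose differently
```

With these made-up numbers the lift from 2.0% to 2.6% comes out significant, but the same absolute difference on a tenth of the traffic would not: the "sufficient sample size" caveat is doing real work here.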
Inference: Next, you make a business decision about whether the improvement observed is worth making the change permanent. Based on the statistical significance, you decide whether to ship the change, discard it, or run the test again.
Finally, you iterate over this loop, trying different hypotheses to improve the goal metric. Most of these steps are hard for a beginner to implement from scratch, and various vendors provide the underlying framework to run an A/B test.
This is the simplest A/B test one can run. While it may seem like there is not much more to say about A/B tests, as you dive deeper you realise you have barely skimmed the surface of an ocean of uncertainty and complexity. What lies ahead is the complex beauty of experimentation and how to get it right.
Ronny Kohavi states it clearly in his course on A/B Testing and Innovation: "Getting numbers is easy, getting numbers that you can trust is hard."
The way ahead on A/B tests
While it is difficult to summarise all the directions in which the discussion on A/B tests diverges, there are a few important threads that we will build upon as this blog moves forward:
- Statistical Rigor: The central analysis engine of an A/B test, the one that tells you whether a difference is statistically significant, is a subtle tool that can be misused in more ways than one can imagine. Discussing the nuances and variations of this core statistical engine will be a central focus of this blog.
- Experimentation Platform: The simple anatomy you saw above is like Tony Stark; the actual capabilities of a modern experimentation platform are more like Iron Man. You have only seen the core so far. When augmented with the right tools, modern experimentation is far more powerful than a classical randomised controlled trial. Going forward, I will explain these augmentations in detail.
- Business Perspective: The discussion of A/B tests inevitably touches on how to handle business metrics and steer your experimentation efforts in the right direction. Cultural factors within a business often play a big role in making it experimentation driven. Business will therefore be another core pillar of this blog.
- Tips and tricks on correct experimentation: Finally, another core aim of this blog is to give you workable, practical knowledge. I will collect the meta-patterns that practitioners in this field have learned about experimentation and share them with you.
I will first compile a central repository of the articles I can write on experimentation and then keep filling it in, one day at a time. Please let me know if there is anything else you would like me to cover on experimentation.