Chapter 1: A/B Testing

Prerequisites for A/B testing

Please ensure the whole team has a reasonably good understanding of these topics (a small worked example follows the list):

  • Randomization
  • Sample
  • Population
  • Control and treatment
  • P-values
  • Confidence interval
  • Statistical significance
  • Practical significance
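
    To make the last few concepts concrete, here is a minimal worked example in Python. The conversion counts are illustrative, and the normal approximation is one standard way (not the only way) to compute these quantities.

    ```python
    import math

    # Illustrative data: conversions and sample sizes for control and treatment
    conv_c, n_c = 200, 10_000   # control: 2.00% conversion
    conv_t, n_t = 235, 10_000   # treatment: 2.35% conversion

    p_c, p_t = conv_c / n_c, conv_t / n_t
    lift = p_t - p_c

    # Two-proportion z-test with a pooled standard error
    p_pool = (conv_c + conv_t) / (n_c + n_t)
    se_pool = math.sqrt(p_pool * (1 - p_pool) * (1 / n_c + 1 / n_t))
    z = lift / se_pool

    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

    # 95% confidence interval for the lift (unpooled standard error)
    se = math.sqrt(p_c * (1 - p_c) / n_c + p_t * (1 - p_t) / n_t)
    ci_low, ci_high = lift - 1.96 * se, lift + 1.96 * se

    print(f"lift = {lift:.4f}, z = {z:.2f}, p-value = {p_value:.4f}")
    print(f"95% CI for the lift: ({ci_low:.4f}, {ci_high:.4f})")
    ```

    If the p-value is below your chosen threshold (commonly 0.05) and the confidence interval excludes zero, the difference is statistically significant; whether it is practically significant is a separate question, discussed later in this chapter.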

  • A/B testing steps

This chapter walks through each step required for A/B testing:

  • Designing the experiment
  • Running the experiment
  • Collecting the data
  • Interpreting the results
  • Using the results to drive decisions and create impact

  • Designing the experiment

    Before you start an A/B test or run any experiment, pin down your hypothesis, a practical significance boundary, and a small set of metrics, and make sure these are discussed and reviewed with the team. Here are some tips for designing an A/B test:

  • Define your goal. What are you trying to achieve with your A/B test? Do you want to increase conversions, reduce bounce rate, or something else?
  • Choose the right metrics. Once you know your goal, you need to choose the metrics you will use to measure success. These metrics should be directly related to your goal.
  • Choose the right variables to test. There are many different variables you can test, but it is important to choose the ones that are most likely to have a significant impact on your results.
  • Create two variants. You need to create two variants of your page or email, one that is the control and one that is the test. The control should be the current version of your page or email, and the test should be the version you are trying to improve.
  • Determine how to split your traffic. Once you have created your two variants, you need to split your traffic between them. The split should be random; a common, reproducible approach is to hash a stable user identifier (a sketch follows this list).
  • Determine how long you need to run the test. A/B tests need to run long enough to collect enough data for statistically significant results. This can take anywhere from a few days to a few weeks, depending on your traffic volume.
  • Check the randomization of the samples assigned to control and treatment, and pay attention to how large a sample you will need. If you want to detect a small change, or to be more confident in the conclusion, plan for a larger sample and a lower p-value threshold. If small changes do not matter in practice, a smaller sample, just large enough to detect the practical significance boundary, will do (a sample-size sketch also follows this list).
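
    For the traffic split, a common way to make the randomization reproducible is to hash a stable user identifier. Below is a minimal sketch in Python; the experiment name and the 50/50 split are illustrative assumptions, not fixed requirements.

    ```python
    import hashlib

    def assign_variant(user_id: str, experiment: str = "homepage_test",
                       treatment_share: float = 0.5) -> str:
        """Deterministically assign a user to control or treatment.

        Hashing the user_id together with the experiment name keeps the
        assignment stable for a given user while decorrelating it across
        experiments.
        """
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
        return "treatment" if bucket < treatment_share else "control"

    print(assign_variant("user-42"))  # the same user always gets the same variant
    ```

    For the sample-size question, a power analysis gives a principled answer. Here is a sketch using statsmodels; the baseline rate, the minimum detectable effect, and the 80% power target are illustrative assumptions.

    ```python
    import math

    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    # Users needed per group to detect a lift from a 10% to a 12% conversion
    # rate (illustrative numbers) at alpha = 0.05 with 80% power.
    effect = proportion_effectsize(0.12, 0.10)  # Cohen's h for the two rates
    n_per_group = NormalIndPower().solve_power(
        effect_size=effect, alpha=0.05, power=0.80, alternative="two-sided"
    )
    print(f"~{math.ceil(n_per_group)} users per group")
    ```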

    After you run the experiment, plan how you will do the following:

  • Analyze the results. Once the test is complete, analyze the results to see which variant performed better. You can use a variety of statistical tests to do this, such as a two-proportion z-test for conversion rates (see the sketch after this list).
  • Implement the winning variant. Once you have identified the winning variant, you need to implement it on your live site or email.
  • Continue to test. A/B testing is an ongoing process. Once you have implemented the winning variant, you need to continue to test to see if there are other ways to improve your results.
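
    As a concrete example of the analysis step, here is a sketch of a two-proportion z-test using statsmodels; the conversion counts are illustrative.

    ```python
    import numpy as np
    from statsmodels.stats.proportion import proportions_ztest

    # Conversions and sample sizes for control and treatment (illustrative data)
    conversions = np.array([200, 235])
    samples = np.array([10_000, 10_000])

    z_stat, p_value = proportions_ztest(conversions, samples)
    print(f"z = {z_stat:.2f}, p-value = {p_value:.4f}")
    if p_value < 0.05:
        print("Statistically significant at the 0.05 level")
    else:
        print("Not significant: collect more data or accept the status quo")
    ```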

  • Data collection for A/B testing

    There are a few different ways to collect data for A/B testing. The most common way is to use a tool like Google Analytics or Optimizely. These tools allow you to track the number of visitors to your site, the pages they visit, and the actions they take. You can then use this data to compare the performance of different versions of your site or email.
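
    Whatever tool you use, the raw data usually boils down to exposure and conversion events per user, aggregated into per-variant counts. The sketch below shows this shape; the field names are assumptions for illustration, not a fixed schema.

    ```python
    from collections import Counter

    # Each event records which user saw which variant and what they did.
    events = [
        {"user_id": "u1", "variant": "control",   "event": "exposure"},
        {"user_id": "u1", "variant": "control",   "event": "conversion"},
        {"user_id": "u2", "variant": "treatment", "event": "exposure"},
        {"user_id": "u3", "variant": "treatment", "event": "exposure"},
        {"user_id": "u3", "variant": "treatment", "event": "conversion"},
    ]

    # Aggregate into the per-variant counts the statistical tests need.
    counts = Counter((e["variant"], e["event"]) for e in events)
    for variant in ("control", "treatment"):
        exposures = counts[(variant, "exposure")]
        conversions = counts[(variant, "conversion")]
        print(f"{variant}: {conversions}/{exposures} converted")
    ```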

    Another way to collect data for A/B testing is to use surveys or interviews. This can be a good way to get feedback on specific elements of your site or email, such as the design, the content, or the call to action.

    Once you have collected the data, you need to analyze it to see which variant performed better. For conversion rates, common choices are the two-proportion z-test and the chi-squared test, both of which compare the conversion rates of the two variants.

    Once you have identified the winning variant, you need to implement it on your live site or email. Then, you can continue to test to see if there are other ways to improve your results.

    Here are some tips on the quantity and quality of the data you collect for A/B testing:

  • Choose the right metrics to track. Not all metrics are created equal. When you're running an A/B test, you need to choose metrics that are directly related to your goal. For example, if you're trying to increase sales, you might track the number of purchases or the average order value.

  • Collect enough data. You need to collect enough data to get statistically significant results. This means that you need to have a large enough sample size and that you need to run the test for long enough.
  • Avoid bias. It's important to avoid bias when collecting data: make sure the data you collect is representative of your target audience and that the traffic split you observe matches the split you intended (a concrete check is sketched after this list).
  • Test multiple variants. It's a good idea to test multiple variants at the same time. This will allow you to compare the performance of different versions of your site or email.
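
    One concrete bias check is the sample ratio mismatch (SRM) test: verify that the traffic split you observed matches the split you configured. Here is a sketch using a chi-squared goodness-of-fit test; the counts and the intended 50/50 split are illustrative.

    ```python
    from scipy.stats import chisquare

    observed = [10_142, 9_858]             # users actually seen per variant
    total = sum(observed)
    expected = [total * 0.5, total * 0.5]  # the intended 50/50 split

    stat, p_value = chisquare(f_obs=observed, f_exp=expected)
    print(f"SRM check p-value = {p_value:.4f}")
    if p_value < 0.001:  # a strict threshold is common for SRM alarms
        print("Possible sample ratio mismatch: check the assignment pipeline")
    ```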

  • Precautions for A/B experiments

  • Make sure your test is statistically significant. This means that you have enough data to be confident that the results of your test are not due to chance.

  • Avoid confounding variables. These are variables that could affect the results of your test, but are not related to the changes you are testing. For example, if you are testing a new headline, make sure that you don't also change the body copy or the call to action at the same time.
  • Test multiple variants. This will allow you to compare the performance of different versions of your site or email, but note that every extra comparison increases the chance of a false positive (a correction is sketched at the end of this section).
  • Analyze your results carefully. Make sure that you understand the results of your test before making any changes to your site or email.
  • Be patient. A/B testing takes time to be effective. Don't expect to see results overnight.
  • Here are some additional tips for running a successful A/B test:

  • Plan your test carefully. Before you start, take the time to think about what you want to test and how you will measure success.
  • Use a reliable A/B testing tool. There are many different A/B testing tools available, so choose one that is right for you.
  • Set a budget and timeline. A/B testing can be time-consuming and expensive, so make sure you have a plan in place before you start.
  • Get buy-in from stakeholders. Make sure that everyone involved in your business understands the importance of A/B testing and is on board with the process.
  • Track your results. Once your test is complete, be sure to track your results and measure your success.
  • Iterate and improve. A/B testing is an ongoing process. Once you have identified a winning variant, continue to test to see if you can improve your results even further.
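
    When you compare several variants against the control at once, each extra comparison raises the chance of a false positive, so adjust the p-values before declaring winners. Here is a sketch using the Holm correction from statsmodels; the raw p-values are illustrative.

    ```python
    from statsmodels.stats.multitest import multipletests

    # Raw p-values from comparing three variants against the control
    p_values = [0.04, 0.012, 0.20]

    reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="holm")
    for i, (r, p) in enumerate(zip(reject, p_adjusted), start=1):
        print(f"variant {i}: adjusted p = {p:.3f}, significant = {r}")
    ```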


  • Interpreting A/B testing results

  • Here are some tips for interpreting A/B testing results correctly:

  • Consider your target audience. When interpreting A/B testing results, keep your audience's needs, pain points, and motivations in mind; understanding them helps you interpret the results and make informed decisions about your website or email.
  • Look at the big picture. Don't get too caught up in the details of your A/B testing results. Instead, focus on the big picture. Are you seeing an overall improvement in your results? If so, then you're on the right track.
  • Be willing to make changes. If your A/B testing results show that you need to make changes to your website or email, be willing to make them. Don't be afraid to experiment and try new things. The only way to improve your results is to keep testing and learning.
  • Iterate and improve. A/B testing is an ongoing process. Once you have identified a winning variant, continue to test to see if you can improve your results even further.

  • Statistical and practical significance thresholds in A/B testing

    In A/B testing, a significance threshold is the bar a result must clear before you conclude that the change you made is actually making a difference. Statistical and practical significance set different bars, and each needs its own threshold.

    The statistical significance threshold is typically set at 0.05, which means you accept at most a 5% chance of seeing a difference this large when there is no real effect. The practical significance threshold is not a p-value: it is the minimum effect size worth acting on, for example a lift of at least 0.2 percentage points in conversion rate. A result can be statistically significant yet fall short of practical significance if the measured lift is too small to justify the cost of making the change.

    The choice of thresholds will depend on a number of factors, such as the cost of making the change, the potential impact of the change, and the level of risk you are willing to take.

    It is important to note that the statistical significance threshold is just a starting point. You may need to adjust the threshold based on other factors, such as the amount of data you have collected and the confidence level you are comfortable with.
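
    Putting the two thresholds together, a launch decision can require both statistical significance and a lift whose confidence interval clears the practical significance boundary. The sketch below shows this logic; all numbers are illustrative.

    ```python
    # Observed lift and its 95% confidence interval, e.g. from the earlier z-test
    observed_lift = 0.0035                  # +0.35 percentage points
    ci_low, ci_high = -0.0005, 0.0075
    p_value = 0.089

    alpha = 0.05                            # statistical significance threshold
    practical_boundary = 0.0020             # smallest lift worth shipping

    statistically_significant = p_value < alpha
    practically_significant = ci_low >= practical_boundary  # low end clears the bar

    print(f"observed lift = {observed_lift:.4f}")
    if statistically_significant and practically_significant:
        print("Launch: the lift is real and large enough to matter")
    elif statistically_significant:
        print("Real but small: weigh the lift against the cost of the change")
    else:
        print("Inconclusive: collect more data or accept the status quo")
    ```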