Planning your A/B testing

When you create an A/B test, you need to know how long to run it. Although you cannot know in advance exactly how long the process will take, you can use several attributes and thresholds to estimate the test's running timeline.


This page contains information only for A/B testing. Although you can apply some of this information to other types of tests, the estimates and guidelines the page provides may not be accurate for them.

Collecting website data

As an example, suppose you want to test whether a new banner design on your homepage improves conversion compared to the existing banner design. Before you create your test, examine your website's analytics to retrieve the following values for an average day:

  • Website visitors to your homepage

  • Website visitors clicking the existing banner

For our example, if your website averages 1,000 visitors per day and 50 of them click the banner, you have a 5% conversion rate.

To determine if the new banner has better results, you must decide on an appropriate threshold for success. Is the new banner better than the existing one if conversion increases to 6%, or does the banner succeed only if conversion exceeds 10%? Raising conversion to 6% represents a 20% lift, or relative increase over the baseline conversion rate, while raising conversion to 10% represents a 100% lift.
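The conversion rate and lift calculations above can be sketched in a few lines of Python. The visitor and click counts are the hypothetical figures from the example, not values from any real analytics tool:

```python
# Hypothetical figures from the example: 1,000 daily visitors,
# 50 of whom click the existing banner.
visitors = 1000
clicks = 50
baseline = clicks / visitors  # 0.05, i.e. a 5% conversion rate

def lift(baseline_rate, target_rate):
    """Relative increase of a target conversion rate over the baseline."""
    return (target_rate - baseline_rate) / baseline_rate

lift(0.05, 0.06)  # ~0.20, a 20% lift
lift(0.05, 0.10)  # ~1.00, a 100% lift
```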

Calculating required website visitors

After you collect your website’s statistics, use an online calculator to determine the number of visitors needed to ensure your A/B test results are statistically significant. Based on the previous example, to detect a 20% lift (a conversion increase from 5% to 6%), your website requires around 7,663 visitors per variation (or 15,326 total visitors across the two banners). At a rate of around 1,000 visitors per day, you can expect the test to run for about two weeks before you can confirm whether you have met the success threshold.
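If you want to reproduce this kind of estimate without an online calculator, a common approximation is the two-proportion sample-size formula shown below. Note that different calculators use slightly different formulas and assumptions, so this sketch produces a figure in the same ballpark as, but not identical to, the 7,663 quoted above:

```python
from math import ceil, sqrt

def visitors_per_variation(p1, p2, z_alpha=1.96, z_beta=0.8416):
    """Approximate visitors needed per variation to detect a change
    from conversion rate p1 to p2, using a two-sided two-proportion
    z-test at 95% significance (z_alpha) and 80% power (z_beta)."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

n = visitors_per_variation(0.05, 0.06)   # roughly 8,000 per variation
days = ceil(2 * n / 1000)                # two variations, ~1,000 visitors/day
```

With these defaults the estimate lands around two weeks of testing, consistent with the guidance above.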

Although you can stop your A/B test before the required number of website visitors view your banners, doing so can affect the reliability of your results. Ensuring your results are statistically significant indicates that they reflect a real effect rather than random chance.

The most common threshold for statistical significance is 95%, which is also the default in the linked online calculator. An α (alpha) of 5% represents 95% statistical significance.


The α (alpha) of 5% is the complement of the 95% significance shown in the Personalization interface; both represent the same data.

The other percentage on the calculator page is the statistical power of the test, which indicates how often the test detects the minimum effect size, assuming that effect exists. In this calculation, the minimum effect you want to detect is the change from a 5% conversion rate to a 6% conversion rate. You can adjust the percentages to trade off statistical power against statistical significance if you desire.
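To see how raising statistical power affects the required traffic, you can compare sample sizes at 80% and 90% power using the same kind of two-proportion approximation that sample-size calculators use. This is an illustrative sketch, not the exact formula behind any particular calculator; the z-values are the standard normal quantiles for each power level:

```python
from math import ceil, sqrt

def visitors_per_variation(p1, p2, z_alpha, z_beta):
    """Approximate visitors per variation to detect a change from
    conversion rate p1 to p2 (two-sided two-proportion z-test)."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# 95% significance (z = 1.96); 80% power (z = 0.8416) vs. 90% power (z = 1.2816)
n_80 = visitors_per_variation(0.05, 0.06, 1.96, 0.8416)
n_90 = visitors_per_variation(0.05, 0.06, 1.96, 1.2816)
# n_90 > n_80: more power means more visitors, and therefore a longer test.
```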