Implementing AB testing is a key element of any CRO strategy. However, there are many common pitfalls that can hinder your efforts and reduce the ROI of your actions. Poorly designed, poorly interpreted or simply unreliable tests can all hold back your conversion performance. In this article, we’ll explore the 10 most common mistakes you need to avoid to ensure your AB tests succeed and give your site the boost it needs to outperform the competition.
1° – Running AB tests with too little traffic
This is one of the first things to check before embarking on AB testing: your traffic. With too little traffic, your tests will never reach significance, and the data collected will not allow you to interpret the results with confidence.
The more data there is, the more reliable the results are likely to be (just as with a survey). This is why you must take into account the volume of the site, or of the page concerned, to get trustworthy results. There are online calculators that let you anticipate the duration of your test: enter your conversion rate, your daily sessions and the minimum uplift you are targeting, and you get a minimum duration.
As you will have understood, AB testing is not suited to low-volume sites. A tip for small traffic: test significant modifications in order to get a strong impact, even if it means not knowing exactly which element produced the results.
With around 100 visitors per day, and for an improvement of at least 10% in the conversion rate, the optimal duration recommended by such a calculator reaches 1,568 days, or more than 4 years… This is why fairly large traffic is needed to avoid never-ending AB tests.
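If you want to reproduce what these calculators do, the standard approach is a two-proportion sample-size formula. A minimal sketch, assuming a two-sided z-test at α = 0.05 with 80% power and a 50/50 traffic split (actual calculators may use slightly different defaults, which is why the figure below lands near, not exactly on, 1,568 days):

```python
from math import ceil
from statistics import NormalDist

def ab_test_duration(baseline_rate, relative_lift, daily_sessions,
                     alpha=0.05, power=0.80):
    """Estimate the minimum number of days an AB test needs to run.

    Standard two-proportion sample-size formula; assumes traffic is
    split 50/50 between version A and version B.
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    # Visitors needed per variant to detect the lift reliably
    n_per_variant = ((z_alpha + z_beta) ** 2
                     * (p1 * (1 - p1) + p2 * (1 - p2))
                     / (p2 - p1) ** 2)
    return ceil(2 * n_per_variant / daily_sessions)

# ~100 visitors/day, 2% baseline conversion, +10% relative uplift:
print(ab_test_duration(0.02, 0.10, 100))  # roughly 1600 days
```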
2° – Creating a test without basing it on a hypothesis
Don’t shoot blindly!
Each test must start from a problem identified through data analysis. This analysis can be done by observing your site’s performance and statistics, by analyzing the behavior of your users (user tests, surveys or even heat maps), or by auditing the page concerned against heuristic and ergonomic criteria.
Don’t hesitate to put yourself in the user’s shoes: step back and look at the page or element in question with an outside perspective. Formulate thoughtful, pragmatic hypotheses; that way you will be able to confirm or refute them instead of relying on chance.
3° – Not prioritizing your test hypotheses
Put a test prioritization model in place before starting implementation. Think about the impact of each of your test hypotheses and ask yourself the right questions:
- Is my test likely to have a noticeable impact?
- Is it on a page far or close to the final conversion?
- Is it rather easy or rather difficult to set up?
The PIE method takes into account the Potential, Importance and Ease of implementation of each test. Score each of your test hypotheses on these three criteria, take the average, and rank them in order of priority, from best score to worst. Not prioritizing your tests can waste a lot of time and resources.
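As an illustration, here is a minimal sketch of how a PIE scoring model might be coded; the hypotheses and scores are entirely hypothetical:

```python
# Score each hypothesis from 1-10 on Potential, Importance, Ease,
# then rank by the average (the PIE score).
hypotheses = [
    # (name, potential, importance, ease) -- illustrative values only
    ("Shorten checkout form", 8, 9, 4),
    ("Reword homepage headline", 6, 5, 9),
    ("Add reviews to product page", 7, 8, 6),
]

scored = [(name, (p + i + e) / 3) for name, p, i, e in hypotheses]
for name, pie in sorted(scored, key=lambda x: x[1], reverse=True):
    print(f"{pie:.1f}  {name}")
```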
4° – Launching tests without validating them upstream
One step that should not be overlooked in the launch procedure: the QA (verification) of your test.
You are in a hurry to launch your AB test to confirm your hypothesis, fine, but don’t forget to check the implementation of your test before launching it. Check your version B on different devices, and interact with it to verify that it behaves as intended. This can save you from wasting time or having to throw away a test because of a flawed setup.
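To make this check systematic, you can automate it across screen sizes. A minimal sketch using Playwright, where the `?variant=B` URL parameter and the `#new-cta-button` selector are hypothetical (how you force version B to display depends on your testing tool):

```python
# pip install playwright && playwright install chromium
from playwright.sync_api import sync_playwright

VIEWPORTS = {"desktop": (1440, 900), "tablet": (768, 1024), "mobile": (375, 812)}

with sync_playwright() as pw:
    browser = pw.chromium.launch()
    for device, (w, h) in VIEWPORTS.items():
        page = browser.new_page(viewport={"width": w, "height": h})
        page.goto("https://example.com/?variant=B")  # hypothetical variant switch
        # Check that the element being tested actually rendered in version B
        assert page.locator("#new-cta-button").is_visible(), device
        page.screenshot(path=f"variant_b_{device}.png")
        page.close()
    browser.close()
```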
5° – Performing an AB test over too short a period
Regardless of your site’s traffic and the conversion performance observed, an AB test must run for at least two weeks! It is risky to stop a test early, because the random distribution of traffic over a short period biases the analysis of your results.
Don’t forget that a conversion rate can change suddenly depending on the day of the week, seasonality, or the context of the moment (see error no. 8). In addition, we recommend that you not trust the testing platform too quickly, as it sometimes provides conclusions that are hasty and unreliable.
For example, a platform may display +99% reliability after only 7 days of testing and, on top of that, with an insufficient number of sessions. In any case, do not stop your test before at least 14 days of activity, so as not to obtain biased results.
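If you want a sanity check that is independent of your platform’s reliability score, a two-proportion z-test is a reasonable approximation. A minimal sketch, with illustrative numbers:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Illustrative numbers: 7 days in, the lift looks big but isn't proven
print(two_proportion_p_value(30, 1400, 41, 1400))  # p ≈ 0.19, not significant
```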
6° – Launching several tests simultaneously on the same audience or the same page
Another misstep is to launch too many tests simultaneously on the same audience. Not to be confused with the previous error: here we are talking about testing several pages targeting the same audience and the same conversion objective.
The main disadvantage is that you will end up with a highly uncertain analysis. Indeed, how do you know whether it was your first test or your second that increased conversions? By running AB tests one after the other, or with separate audiences and indicators, you will avoid unreliable results.
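One common way to keep audiences separate is deterministic bucketing: hash each visitor ID once and assign that visitor to at most one experiment. A minimal sketch, with hypothetical experiment names:

```python
import hashlib

EXPERIMENTS = ["homepage_headline", "checkout_form"]  # hypothetical tests

def assign_experiment(visitor_id: str) -> str:
    """Deterministically assign each visitor to exactly one experiment,
    so two tests never share the same audience."""
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    bucket = int(digest, 16) % len(EXPERIMENTS)
    return EXPERIMENTS[bucket]

print(assign_experiment("visitor-42"))  # always the same experiment for this ID
```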
7° – Modifying the variant (version B) during the test
What better way to distort the results of a test?
Do not, under any circumstances, modify your variant during the test: this is the surest way to completely bias the performance of the AB test. If you notice a change or an error once your test is live, stop it and start a new test period with fresh data. QA (error no. 4) is precisely what helps you avoid having to restart an AB test, or worse, modifying the variant of a test in progress.
8° – Forgetting seasonality and other ongoing marketing actions
Pay attention to special periods and to any campaigns or other levers currently in progress!
It is important to take these factors into account during your testing periods. A seasonal shopping event, like Christmas, can greatly change user behavior, as can any campaigns your brand is currently running (online acquisition, media). Think about it before launching a test and your results will be all the more reliable.
9° – Considering inconclusive results as a failure
It is entirely possible that some of your tests will fail. Rest assured, this is not a real failure: the hypothesis you formulated has in any case been put to the test. You can therefore learn from these results and take them into account for your next tests.
Take advantage of results, even negative ones, and use the numbers to learn more about your users and to test another hypothesis. Iterating on tests, validating or refuting hypotheses one after another, is the best way to optimize your site and improve its conversion rate.
10° – Neglecting “small” performance gains
“Only 2% more conversion for version B? It’s too weak, we’re not going to implement it.”
Make no mistake: an increase, even a very slight one, is still an increase. And as long as reliability is verified, even small gains can be greatly beneficial. Remember that optimization builds over time, adding up with each test carried out. By chaining AB tests together, the accumulation of small optimizations has a big impact.
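A quick bit of arithmetic makes the point: five successive winning tests at “only” +2% each compound to more than +10% overall.

```python
# Five chained tests, each with a modest +2% lift, compound multiplicatively
lift_per_test = 0.02
total_gain = (1 + lift_per_test) ** 5 - 1
print(f"{total_gain:.1%}")  # 10.4% overall conversion gain
```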
11° – [Bonus] Not testing continuously
Every day without a test is a day wasted!
Testing allows you to learn more about your site’s performance, user journeys, and what is and isn’t working, all on an ongoing basis. It is not about testing elements at random, but about verifying, one by one, each of the hypotheses that could make your site more effective: continuous optimization. Remember that, according to a recent barometer, 75% of players with more than 1 million monthly visitors do A/B testing.
AB testing is to date the most effective CRO lever; it would be a shame not to use it as well as possible to improve your website’s conversion. From the test hypothesis to performance analysis, by way of prioritization and QA, every step matters. Ready, set, test!