You need to know these terms in order to talk about conversion rate optimization in a meaningful way.
The visitor's action that you intend to improve with the campaign (e.g., signing up for a webinar, adding a product to the cart, etc.). You need to determine your conversion actions up front: clearly define what you are testing and your target, because the conversion action is the most important statistic for evaluating your results.
The page in the test that you leave unchanged. In conversion tests, the control is the page that is currently converting best, and each new variation is tested against it.
So in an A/B test, A is the control. The modified version in your test, or variation (see below), is therefore B.
The version of the page that carries the change you are testing. For example, the variation page might use a shortened form while the control keeps the full one.
Insider's tip: name your variants after the key element being tested so each one is easy to identify.
Here's what it should look like:
- Control - full form
- Variant 1 - shortened form
- Variant 2 - email only
- Variant 3 - form + analysis
This is data that can be quantified numerically, the "objects of calculation", such as:
- Unique visits
- Order value
This is descriptive data. People's opinions are difficult to study directly, but most of the time they point you toward quantitative data worth collecting. This includes:
- Activity mapping (heatmaps)
- Session recording
- Analysis forms
The life of an optimizer is all about numbers. So what are the most important numbers when you are doing a test?
The conversion rate is calculated by dividing the number of conversions by the total number of visitors to your test page.
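That division can be sketched in a couple of lines; the figures below (336 conversions from 1,000 visitors) are hypothetical, chosen only for illustration:

```python
def conversion_rate(conversions, visitors):
    # Conversions divided by total visitors to the test page.
    return conversions / visitors

# Hypothetical test page: 336 conversions from 1,000 visitors.
print(f"{conversion_rate(336, 1000):.2%}")  # prints 33.60%
```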
The rate of change between two variants (it is not the absolute difference between the two numbers). To calculate the percentage increase, here is the formula to use: ((variant rate − control rate) / control rate) × 100. For example, if the control converts at 10% and the variant at 11%, the difference is 1 percentage point, but the rate of increase is 10%.
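A minimal sketch of that formula, using the 10% vs. 11% example rates:

```python
def percentage_increase(control_rate, variant_rate):
    # Relative change between the two rates, not the
    # absolute difference in percentage points.
    return (variant_rate - control_rate) / control_rate * 100

# A 10% control beaten by an 11% variant: one point of
# difference, but a 10% rate of increase.
print(round(percentage_increase(0.10, 0.11), 1))  # prints 10.0
```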
Technically, it refers to: "the percentage of instances in which a set of identically run tests would capture the true (accurate) mean of the system under test within a stated range of values around each test's measured value."
Simply put, you are trying to prevent false positives: the confidence percentage shows how confident you can be that your test result is accurate.
For example, let's say your confidence level is 95%. This means that if you were to run your campaign 100 times, roughly 95 of those tests would reflect the true result.
It is a common mistake to interpret this as the "chance" of getting the same results, as if a 95% confidence level meant you have a 95% chance of getting the same results in another test.
Chance is not what is being measured here; we are calculating accuracy. You may notice small dissimilarities from test to test. The confidence percentage tells you that a difference exists, but not how large it is.
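The article does not name a calculation method, but one common way tools derive this confidence figure is a two-proportion z-test. A sketch under that assumption, with hypothetical conversion counts:

```python
from math import erf, sqrt

def confidence(conv_a, n_a, conv_b, n_b):
    # Two-proportion z-test (normal approximation).
    # Returns 1 - p-value, the "confidence" figure many
    # A/B testing tools report.
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = abs(p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))  # two-sided
    return 1 - p_value

# Hypothetical test: control 100/1000 vs. variant 130/1000.
print(f"{confidence(100, 1000, 130, 1000):.1%}")
```

With these numbers the confidence comes out above 95%, so the variant would usually be called a decisive winner; with identical rates it drops to zero.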
"Conversion rate" is a misnomer. It gives you the impression that your tests are giving you an exact number that you can call a "conversion rate".
In practical terms, be prepared to get conversions in a range that is not at all accurate. For a range of 30.86% to 36.38%, you will get an average of 33.59%.
Be aware that the ranges of two variants can overlap slightly. Your goal is to break the overlap so that you get a decisive variant.
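One common way to compute such a range is a normal-approximation confidence interval around the observed rate. A sketch, again with hypothetical counts (the article's 30.86%–36.38% range implies a sample size it does not state, so no attempt is made to reproduce it exactly):

```python
from math import sqrt

def conversion_interval(conversions, visitors, z=1.96):
    # 95% confidence interval (normal approximation)
    # around the observed conversion rate.
    p = conversions / visitors
    margin = z * sqrt(p * (1 - p) / visitors)
    return p - margin, p + margin

# Hypothetical: 336 conversions from 1,000 visitors.
low, high = conversion_interval(336, 1000)
print(f"{low:.2%} to {high:.2%}")
```

If the intervals of the control and the variant overlap, the test is not yet decisive; collecting more visitors narrows each interval until the overlap breaks.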