Review of Conversion Optimization Minidegree Program (Pt. 12)

Ivan Iñiguez
4 min read · Dec 5, 2021

If you’ve heard of A/B testing, you know it’s more complex than simply running a test for the sake of running it.

In fact, you may know that you need a good understanding of statistics in order to carry out a good split test.

Skipping that step will give you misleading results that can hurt your business (which is the opposite of what you want).

That’s why this week (the second-to-last week of these reviews) I will dive into some statistical analysis of A/B testing after going through 2 courses inside CXL Institute… which will set everything up for the last week.

Let’s begin.

Split testing is powered by data… we want winners, and a significant uplift from our current version (control).

Data leads to better business decisions, but…

It’s the interpretation of that data that determines whether your business keeps growing or falls flat.

That’s why it’s important to grasp the concept that “correlation does not imply causation”.

A good example is when you test something new and all of a sudden your conversions (sign-ups/sales) start decreasing.

Most people would reason that because we saw a decrease in conversions right after adding the change (the supposed cause), removing that change will bring the conversions back.

Even if it’s a valid hypothesis, it relies purely on observational data. That means there are other variables (like a confounding variable) that could be playing a role in this decrease in conversions, yet…

we are misattributing it to this change we made.

To avoid this type of error, we need statistical models. That means using a mathematical description and a set of assumptions to account for the chance regularity in the data, so we can tell whether the change we made is really responsible for the effect.

But keep in mind that even with A/B testing we can conclude things that are different from what is really going on.

We could decide that a variant we set up was a failure when it’s actually a winner (a false negative), or the other way around (a false positive).

Moving on, you have probably also heard that in most cases A/B tests should be run at 95% statistical significance with 80% statistical power.

However, what do these terms really mean?

Let’s look first at statistical power (I’ll get to statistical significance in a moment).

Statistical power is the probability of observing a statistically significant p-value at a certain threshold (alpha) when a true effect of a certain magnitude (μ1) is in fact present.
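The course doesn’t walk through the math here, but to make the idea concrete, here’s a minimal sketch (my own, not CXL’s) of how alpha, power, and effect size determine the traffic you need, using the standard two-proportion approximation and made-up conversion rates:

```python
# A minimal sketch (my own, not from the course): how alpha, power, and the
# size of the true effect determine the sample you need per variant.
# The 3% -> 3.3% conversion rates below are made up for illustration.
from scipy.stats import norm

def sample_size_per_variant(p_control, p_variant, alpha=0.05, power=0.80):
    """Approximate visitors per variant for a two-proportion z-test."""
    z_alpha = norm.ppf(1 - alpha / 2)   # threshold for 95% significance (two-sided)
    z_power = norm.ppf(power)           # threshold for 80% power
    variance = p_control * (1 - p_control) + p_variant * (1 - p_variant)
    effect = p_variant - p_control      # the true effect we want to be able to detect
    return (z_alpha + z_power) ** 2 * variance / effect ** 2

# Detecting a lift from 3.0% to 3.3% needs roughly 53,000 visitors per variant
print(round(sample_size_per_variant(0.03, 0.033)))
```

Notice how a small true effect already demands tens of thousands of visitors per variant; since the sample grows with the inverse square of the effect, halving the effect roughly quadruples the traffic you need.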

Now, this usually brings the option of multivariate testing into the game.

What’s the important takeaway from it?

That power is significantly reduced when the same sample size is spread across more variations… which is exactly the case in multivariate testing.

So to preserve power, you need to increase sample size.

This involves something called the Bonferroni correction, as well as Dunnett’s correction.
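To give you a rough idea of the simpler of the two (this is my own illustration, not material from the course), the Bonferroni correction just splits your alpha across the number of comparisons you make:

```python
# My own illustration (not course material): the Bonferroni correction simply
# divides alpha by the number of comparisons made against the control.
# Dunnett's correction is less conservative but needs dedicated software,
# so it's left out of this sketch.
def bonferroni_alpha(alpha=0.05, num_comparisons=3):
    """Per-comparison significance threshold when making several comparisons."""
    return alpha / num_comparisons

# Three variants tested against one control: each comparison now has to
# clear p < 0.0167 instead of p < 0.05 to be called significant.
print(bonferroni_alpha(0.05, 3))
```

A stricter per-comparison alpha means a larger z-value in the sample-size formula above, which is exactly why testing more variants demands more traffic.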

And what about concurrent tests?

Well, A/B tests are designed to solve the attribution problem by establishing causal links… so we don’t have to credit or deny a conversion to a test based on which traffic source a user came from.

The way to completely solve the issues with concurrent tests is to run one test after another.

Yes, it has its disadvantages, such as the high cost of much slower testing, yet…

you want results that don’t come from a bad decision caused by interference between variants from different tests.

But once you have some results, what do you do with them?

That’s where you first want to use percentage change to express a result.

Not just because “a 3% uplift” sounds better, but because a percentage change can easily be translated into business results.
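As a quick sketch of what that looks like in practice (with completely made-up numbers), here’s how a raw test result becomes a relative uplift and a projected revenue figure:

```python
# A sketch with made-up numbers: turning raw test counts into a relative
# uplift, then translating that uplift into a business figure.
control_visitors, control_conversions = 50_000, 1_500   # 3.00% conversion rate
variant_visitors, variant_conversions = 50_000, 1_590   # 3.18% conversion rate

control_rate = control_conversions / control_visitors
variant_rate = variant_conversions / variant_visitors

absolute_change = variant_rate - control_rate            # 0.18 percentage points
relative_uplift = absolute_change / control_rate * 100   # 6% relative uplift

# Hypothetical business inputs: average order value and monthly traffic
average_order_value = 80
monthly_visitors = 200_000
extra_revenue = monthly_visitors * absolute_change * average_order_value

print(f"Relative uplift: {relative_uplift:.1f}%")
print(f"Projected extra revenue per month: ${extra_revenue:,.0f}")
```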

And that sums up this second to last review from CXL Institute.

Learning A/B testing is a huge advantage for business owners and optimizers.

But why stop at just the basics of split testing when you could learn how to properly run tests?

By properly, I mean knowing what 95% statistical significance means, or understanding the errors we make when running concurrent tests (or multivariate tests).

Because there is more to it.

You see, in split testing there are 3 different goals we can aim for.

  1. For deployment — checking whether adding something changes anything at all, i.e., that it doesn’t hurt the main KPI you’re tracking.
  2. For research — learning whether there are elements that could be removed, and what is actually doing the work inside the website.
  3. For optimization — this is the most common (and sometimes only) purpose we run split tests for. We want to increase conversions and grow… and this is what will help you do it.

And there’s more we could cover inside A/B testing, but I wanted to give you a quick idea of some details involved in split testing that go beyond just thinking about doing a test.

Now, if you were to ask me how great these 2 courses about split testing were, I’d say…

they are definitely a masterpiece.

You will know how split testing can fit into your company and have a clear idea of when (and when NOT) to use it. Plus, it gives you more confidence: it not only helps you interpret the results… it shows you what to do in the most common scenarios you’ll find.

You’ll also clearly understand how to use A/B testing inside your company and why you need to see this as a pillar strategy to the growth of a business.

If you want to eliminate all the questions you have around split testing, then you’re going to want to check CXL courses.

Even if you’re totally new and inexperienced, you’ll leave the course with a more detailed roadmap on how to use split testing…

This sums up this week’s review.

I have a lot of material to cover, yet… next week will be my last contribution about this journey with the CXL Conversion Optimization Minidegree Program.

So I’ll compress 2 modules into the last article and give you my 2 cents on the whole Minidegree program.

See you next week, which is the last one.

Ivan
