Don’t A/B test tiny changes (unless you’re huuuuuge)

Case studies of A/B tests in which a website changed its order button color from green to orange and saw a 30% increase in revenue have been popular around the web.

A huge improvement as a result of a tiny change.

And while this sometimes does happen in the real world, it's a rare exception. But it makes for a fascinating story, so people love to read and retell it. Kind of like someone winning millions in the lottery: yes, it happens, but no one would argue that playing the lottery is a feasible way of becoming a millionaire.

GrooveHQ, for example, wrote about six A/B tests they conducted that did absolutely nothing for them.

Now the truth is that most A/B tests won’t yield clear wins. If you decide to start running A/B tests, you need to make a long-term commitment and realize that a lot of A/B tests will be inconclusive.

I’ve conducted my own fair share of meaningless A/B tests.

Do you know what happened next?

Other people on the team started questioning whether it was worth running A/B tests at all.

Why keep doing these? What have we learned from them? If running these A/B tests doesn't yield any meaningful insights, why are we wasting time on them?

You don’t want to end up in that place.

So the first takeaway is to test big things that will most likely make a real difference.

Err on the side of testing the extreme version of your hypothesis. Subtle changes don’t usually reach significance as quickly as bigger changes. For the A/B test you want to know if your hypothesis is correct. Once you know that, you can fine tune the implementation of the hypothesis.

– Kyle Rush, What Do You Do With Inconclusive A/B Test Results?
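To see why subtle changes take so much longer to reach significance, you can run the numbers yourself. Below is a minimal sketch using the standard two-proportion sample-size approximation; the 5% baseline conversion rate and the two hypothetical lifts are made-up numbers for illustration, not from any of the tests mentioned above.

```python
from math import ceil
from statistics import NormalDist


def sample_size_per_arm(p1, p2, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant to detect a change
    from conversion rate p1 to p2 with a two-sided z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha=0.05
    z_beta = NormalDist().inv_cdf(power)           # e.g. 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)


baseline = 0.05  # hypothetical 5% conversion rate

# A bold change you hope lifts conversions by 30% (relative):
big = sample_size_per_arm(baseline, baseline * 1.30)

# A subtle tweak you hope lifts conversions by 3% (relative):
small = sample_size_per_arm(baseline, baseline * 1.03)

print(f"30% lift: ~{big:,} visitors per variant")
print(f" 3% lift: ~{small:,} visitors per variant")
```

With these illustrative numbers, the bold change needs a few thousand visitors per variant, while the subtle one needs hundreds of thousands: roughly a hundred times more traffic for a tenth of the effect. That is the math behind testing the extreme version of your hypothesis first.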

What’s more, to avoid having to deal with a lot of inconclusive A/B tests in the first place, Alex Birkett recommends implementing a research and prioritization framework for testing.
