
Mistakes are what nudge you closer to the right direction. When I first began as an intern at Oursky, I hadn’t even heard of A/B testing, but I managed to get a 37% conversion rate on my third A/B test. In this post, I’ll share the main mistakes I made and my learnings from that experience so you can avoid the pitfalls!

 

Mistake #1: I did not set a hypothesis in trial 1

Test result screenshot

Having two challengers means it takes a long time to reach the significance level.

A hypothesis is a product’s guiding compass. It is your rough map to treasure island. Having a clear hypothesis gives you a clearer direction on what you are testing for, measuring, and analyzing.

I started with two variations to test. I wanted to optimize conversion rates. Having two tests essentially changed my question from: ‘Is the old or new version better?’ to ‘Is either new version A or new version B better than the old version, and if yes, which one is better?’

I made my problem more complicated than it needed to be. The result was not only wasted time creating two versions, but also a lot of extra, unnecessary work collecting and analyzing the data. In the end, I didn’t even know what I was analyzing or how it could help me find a better direction.

Learnings:

  1. Have a clear thought process:
    • Decide on the question you have or the assumption you want to prove/disprove
    • Figure out how to measure it effectively
    • Set up an environment to measure it
    • Gather the data in a set time frame
    • Analyze it
    • Repeat.
  2. Set a short time frame to quickly collect data and move on. If a new feature is taking too long to reach the significance level, then it’s a ‘no’: the improvement is too minor.

Even though it’s not necessary to have a hypothesis when brainstorming a test idea, it becomes necessary once testing begins and you are tracking the experiments. If a hypothesis is not clearly rejected, then you can continue to make small changes to improve conversion.
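If you like code, here is a minimal, hypothetical sketch of what writing an experiment down before launching it could look like. The field names, numbers, and dates are my own illustration, not a tool we actually used at Oursky:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Experiment:
    """One question, one test: write the hypothesis down before launching."""
    hypothesis: str               # the assumption you want to prove or disprove
    metric: str                   # how you will measure it
    min_samples_per_variant: int  # target sample size (see Mistake #2)
    end_date: date                # the set time frame for gathering data

    def should_stop(self, samples_collected: int, today: date) -> bool:
        # Stop when you have enough data, or when the time frame runs out.
        # If significance still isn't reached by the deadline, treat it as a 'no'.
        return samples_collected >= self.min_samples_per_variant or today >= self.end_date

test = Experiment(
    hypothesis="Adding a user testimonial to the landing page increases sign-ups",
    metric="sign-up conversion rate",
    min_samples_per_variant=2000,
    end_date=date(2025, 7, 1),
)
print(test.should_stop(samples_collected=850, today=date(2025, 6, 15)))  # False: keep collecting
```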

Mistake #2: I did not set a target sample size

Shotbot landing page A/B test screenshot

I merely changed the text here.

I was unaware of the sample size needed for the test. As a result, I did not know when to close it and draw a conclusion. The sample size is like the size of the window you want to look out of to get a clear picture – different views for different things. Usually, however, the larger the sample size, the more statistically significant the result, because more people are confirming the same thing.

I’ll give you an example. Imagine you have two sets of test data in hand, both giving the result that product A is 20% better than product B in terms of conversion. One uses a sample size of 10 people while the other has a sample size of 100,000. Which test would you be more confident in when saying ‘most people’ prefer this?

You probably felt that 10 samples are less trustworthy. This is the concept of statistical significance. For a test result to be statistically significant, it has to reach a certain sample size; this ensures the result is not due to random error or a biased sample.
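To put rough numbers on that intuition, here is a minimal sketch of a two-proportion z-test in plain Python (the conversion rates are made up for illustration, not real test data): the same 20% relative lift that is hopeless to confirm with 10 visitors per version becomes overwhelmingly significant with 100,000.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(p_a, p_b, n_per_group):
    """Two-sided p-value for the difference between two conversion rates
    observed on equally sized groups (pooled two-proportion z-test)."""
    pooled = (p_a + p_b) / 2                             # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * 2 / n_per_group)   # standard error of the difference
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))            # two-sided p-value

# Product A converts at 24%, product B at 20% -- a 20% relative lift.
for n in (10, 100_000):
    print(n, round(two_proportion_p_value(0.24, 0.20, n), 4))
# With 10 visitors per version the p-value is far above 0.05 (could easily be noise);
# with 100,000 it is effectively 0 (the lift is almost certainly real).
```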

The sample size for observation should be set before starting each test, and from it, we can set a timeframe for the test. During the first trial, I did not keep this in mind. As a result, I had no clue when to stop the test, so the result I got was neither objective nor reliable.

Learnings:

  1. Use a sample size calculator to estimate how many visitors you need per day to reach your significance level (a rough do-it-yourself version is sketched after this list). It’s important to choose a timeframe that is realistic, and a confidence target that is meaningful.
  2. One question, one test. It’s better to iterate faster with a smaller sample size than to wait for a large enough sample size with two tests at the same time.
  3. If you are into statistics, this post by Minitab will guide you through the concept of statistical significance.
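As promised above, here is a rough back-of-the-envelope sample size estimate for a two-proportion test. It uses the standard power-analysis formula with illustrative numbers; a dedicated online calculator is still the safer choice.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(baseline, expected, alpha=0.05, power=0.8):
    """Rough visitors needed per variant to detect a lift from `baseline` to `expected`."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance level
    z_beta = NormalDist().inv_cdf(power)            # desired statistical power
    variance = baseline * (1 - baseline) + expected * (1 - expected)
    return ceil((z_alpha + z_beta) ** 2 * variance / (baseline - expected) ** 2)

# Hypothetical example: hoping to lift conversion from 5% to 7%
n = sample_size_per_variant(0.05, 0.07)
print(n)                                             # about 2,210 visitors per variant
print(f"~{ceil(n / 200)} days at 200 visitors per variant per day")  # turn it into a timeframe
```

Notice how the required sample size blows up as the expected lift shrinks – which is exactly why small changes (Mistake #3) take so long to confirm.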

Mistake #3: I made small changes early on in Trial 1

In my first trial, I made many small changes. Small changes have small impacts on conversion, and therefore take a while to become statistically significant. The problem was compounded by my two simultaneous tests.

In my second trial, I provided extensive information on functionality, but it didn’t improve the conversion. What I learned was that explanatory text was insignificant to our visitors. I thought we were just missing a detail, but I also found out that a video and an animated GIF were not interesting to visitors either.

In the end, what worked was also a text change: a user testimonial. However, there was a big change in my fundamental understanding behind it: users wanted to know what other users thought.

Learnings:

Think big changes. Removing your seemingly promising video or changing photos to GIFs may look like big changes, but if the idea behind them is similar to your original idea, then it’s just the same thing repackaged. Instead, think about a shift in perspective.

Mistake #4: Thinking I had the perfect test

After I’d pored over the data and spent hours thinking about the problem, of course I felt whatever solution I had was the best idea possible. Not only did I not have a perfect first A/B test, I also made two significant errors that should have been setbacks for conversion. In the end, I still got my 37% conversion rate within a reasonable amount of time for my first A/B test. In other words, I got a bit lucky.

Learning:

I learned a lot about improving the process, but it still always feels like winning a jackpot when I hit a conversion target. When I think about it, that’s probably true: I was lucky that I systematically ‘discovered’ the answer. Luck will always be a factor.

Never lose hope that you could end up in the right place, even if you went in the wrong direction!

That’s it! If you want to know exactly what I did for my first A/B test, check out my post here.


By the way, we do weekly posts on product development. Subscribe to us so you get your product out sooner!

Have any other A/B testing tips you’d like to share? Leave us a comment below.

If you find this post interesting, subscribe to our newsletter to get notified about our future posts!

 

Written by:

Dennis the intern. Doing all sorts of growth hacking, content marketing, and data-driven goodies at Oursky. He loves cats too.

Find him at dennistam@oursky.com | LinkedIn | Twitter