Improving on the cheap: How A/B testing makes and saves you money

You have a problem: submission rates are down for the third month in a row. You’re getting increased traffic on the site but no one is signing up to hear more about the product you’re selling. If this continues for a few more months, you’re going to have a real problem. Something has to change.

If you’ve ever faced this problem directly, or had your boss hand it to you to solve, finding a solution can seem like a shot in the dark.

Should you focus on fixing the user experience on the site? If so, what should you fix? Is there an error somewhere in the customer journey that’s causing a problem you can’t see? Are people not interested in how the product is presented?

A/B testing, or split testing, can help you find concrete answers to these questions and make your business better.

A/B testing is a way to test two different versions of your site quickly to decide what layout, design, copy, and other details perform better with your audience.
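Under the hood, the split is just a sticky random assignment: each visitor is bucketed into one version and kept there for the life of the test. Your testing platform handles this for you, but here’s a minimal sketch of the idea in TypeScript (the ab_variant cookie name and the 50/50 split are assumptions for illustration, not any particular platform’s mechanism):

```typescript
// Minimal sketch of a client-side 50/50 split with sticky assignment.
function getVariant(): "A" | "B" {
  // Re-use a previous assignment so returning visitors
  // always see the same version.
  const existing = document.cookie
    .split("; ")
    .find((c) => c.startsWith("ab_variant="));
  if (existing) {
    return existing.split("=")[1] as "A" | "B";
  }

  // First visit: flip a coin and remember the result for 30 days.
  const variant = Math.random() < 0.5 ? "A" : "B";
  document.cookie = `ab_variant=${variant}; max-age=${60 * 60 * 24 * 30}; path=/`;
  return variant;
}

// Show the matching experience.
if (getVariant() === "B") {
  document.body.classList.add("variant-b"); // e.g. styles for the new layout
}
```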

You can use A/B tests to make your site convert better, or to trial a potential change before putting it into production. That often saves money: you find out that a change doesn’t resonate with your audience before you’ve spent a bunch of money fully implementing it.

With iterative A/B testing, my team has made our clients millions of dollars from a single test, and saved them millions more by testing potential changes that tanked before they ever reached the live site.

There’s no better tool for businesses to quickly find out what works and what doesn’t. But how do you get started?

What’s your goal?

First, identify your business goal. Testing without having a goal you’re working toward won’t help you optimise your site, and will instead pull you in so many different directions that you won’t be able to focus and make real improvements.

So what is your goal? What main key performance indicators (KPIs) or metrics are you trying to improve?

If we’re trying to get more people to sign up for a product by completing a form, our main KPI is form submission. This is the main thing we want to focus on increasing, so all of our tests should include this as the main metric to measure.

What problem are you trying to solve?

Next, you want to think about what problems keep coming up after people land on your site. What’s making submission difficult for them? You need to do this so that your testing area is focused. Otherwise you’ll just test everything at once and have no idea what’s working or not working.

If you’ve already got an analytics platform like Google Analytics, take a look at which pages have the highest bounce rate, or which step in your conversion funnel has the most drop-off.

If you don’t have analytics on your site, it’s time to reach out and do a bit of guerrilla user research. Find a community (Reddit, LinkedIn, Twitter), and ask people to go through the act of submitting a form.

If you’re going down this route, remember to ask these two key questions as they’re going through the site:

  • What do you expect to happen next when you do X? [click this button, fill out this form, etc.]
    • Then have them complete the action
  • Did what happened meet your expectations? Why or why not?

Anecdotal evidence or light analytical data is a great place to at least begin identifying problems. And as you go, make sure to write your problems down so you can come back to them later as you start doing more split testing.


Analytics dashboard that shows traffic, total users, and time on site (credit: Carlos Muza on Unsplash)

Get set up to test

So now you should have a goal:

  • Increase form submissions on your site

And you should have some problems you’ve identified that are focused around your goal. Here are some possible examples:

  • The form people have to fill out has too many steps and feels complicated
    • Guerrilla research finding
  • People aren’t actually reading all of the information on the form before they close it
    • Combination of guerrilla research finding and site analytics
  • Bounce rate on certain pages is very high so people aren’t even seeing the form
    • Site analytics

Analytics

Next you need to set up the tools so that you can actually test the solution to the problem you’ve identified.

Google Analytics is a great analytics platform to start with that will show you the main things you want to know about your site, like total traffic, traffic segments, site conversions, bounce rate and a lot of other data that will help you determine gaps in your site.

But why do I need analytics?

If it’s not tracked, it can’t be measured. If it can’t be measured, it can’t be improved. That’s the main reason you need analytics on your site. You’ll never be able to speak with every person who comes to your site and leaves without signing up, but you can see when they came, how long they stayed, where they went, and where they left. Knowing this gives you the information you need to make better changes to your site.
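For example, if you’re using Google Analytics’ gtag.js, you can record an event each time someone submits your form, so your main KPI shows up directly in your reports. A quick sketch (the #signup-form selector and the sign_up_form_submit event name are made up for this example):

```typescript
// Assumes the standard gtag.js snippet is already installed on the page.
declare function gtag(...args: unknown[]): void;

const form = document.querySelector<HTMLFormElement>("#signup-form");

form?.addEventListener("submit", () => {
  // Record the KPI: one event per completed form submission.
  gtag("event", "sign_up_form_submit", { form_id: "signup-form" });
});
```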

Testing platform

Next, you’ll need a place to actually run the tests you’re thinking about creating. Google Optimize is a great free solution. For paid options, you can look at ClickFunnels, Monetate, or Optimizely. These vary in price quite a bit, so if you’re just starting out, I’d suggest going with the free option first.

Picking a test

Now that you’re set up with an analytics platform and a testing platform, you’ll need to decide what alternative experience you want to design and code so that you can test it against your current site.

Earlier, we put together a list of problems that sounded worth investigating:

  • The form people have to fill out has too many steps and feels complicated
  • People aren’t actually reading all of the information on the form before they close it
  • Bounce rate on certain pages is very high so people aren’t even seeing the form

The easiest way to go forward from here is simply to pick one of the problems you’ve found and start creating a solution. Let’s use this one:

  • The form people have to fill out has too many steps and feels complicated

We can create a new form that should be easier for people to complete, and will get them through the submission process faster.

But what if I still don’t know what to test?

If you’ve gotten this far and you’re thinking, “Okay, but I really want to get started testing now, and I don’t have an answer to all of these questions,” then there are a few tests that tend to work well on landing page sites.

  • Change button placement and colour
    • Try moving the button above the fold, or nearer to the copy that prompts the action
    • Colours that are bright and jump off the page tend to do better, because they catch the eye
  • Change copy
    • Rewrite your copy and make it more compelling
  • Move copy
    • Try moving the most important information above the fold
  • Remove copy
    • Or shorten the amount of information people have to read before they decide to complete an action

You can start with these as intro tests to learn how the platform works and see how your tests perform, while you work on your goal setting and problem identification in the background.

Things to think about before you test

Winning and losing

It’s important to realise, though, that testing is not one-size-fits-all, and some (or all) of these tests might not “win” on your site. When a test wins, that means it increased the KPI (form submissions) you identified earlier. When it “loses” or is “flat”, your KPI went down or stayed roughly the same.

So why don’t all tests that improve copy or UX win? Well, what works on one site may do terribly on another because the audience demographic or company type is different. Another reason is that the thing you’re testing might not actually be a problem for your visitors. If you’re not solving real problems for the visitors on your site, you will rarely find tests that win or improve your business in the long run.

But a test that loses or is flat is also a good thing. You can learn just as much, and sometimes more, from a test that lost as you can from tests that win.

How? When a test loses or comes back flat, you’re immediately crossing off potential problem areas on your site that you now know are not problems, because you’ve tested them! That means you can move your focus to other areas, elements, or interactions on the site and continue to home in on what’s really standing in the way of helping your visitors convert.

Testing and audience size

In order to do very focused and pinpointed tests, you’ll need to have quite a bit of traffic (50,000+ unique visitors per month).

If you don’t have a lot of traffic on your site you’ll want to take the “biggest swing” possible by changing several things at once. So, if you have three problems that you’ve identified, go ahead and solve for all of them and run it as a test.

You won’t know which specific change is doing the work, but you will know which experience won, and that it’s focused around the goal you’re trying to reach.

Running your test

Alright, so let’s say we’re running a test that’s going to have two variations.

First we have variation A, which is the current form on the site.

Then we have variation B, which is our new, improved, shorter, sleeker form.

So now you have the test designed and coded, you have the platform to run it on, and the analytics in place to track your site. You’re ready to press the big green button. You are ready for launch.

But once it’s out there, how long should you let it run for? One week? One month? One year?? Forever??? How will you know when your test is done running?

This is where you’ll want something else to do the hard work for you. An A/B testing calculator will tell you how many visitors you need to run through your test before you’ll know which variation won or lost.

Optimizely’s sample size calculator is a good one to start with. You’ll need some info before you begin:

Baseline Conversion Rate: How many people are converting on your site now?

Conversion rate is the number of conversions (form submissions, or whatever main metric you’re testing) divided by the total number of visitors. For example, 162 submissions from 5,400 visitors is a 3% conversion rate.

Minimum Detectable Effect (MDE): How big or small of a lift (change from your baseline) do you want to be able to detect? You’ll need less traffic to detect big changes and more traffic to detect smaller changes.

Statistical Significance / Confidence: How sure do you want to be that the change in conversion rate is not due to chance? Start with 95% significance and work backward from there, based on your tolerance for risk. I’d suggest never going lower than 80%.

After inputting this information, you’ll see the sample size of visitors that will need to run through each variation before you can call the test a winner or a loser.

From there you can extrapolate how many days you think the test will need to run.
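If you’re curious what such a calculator is doing, it’s broadly the standard two-proportion sample size formula. Here’s a simplified sketch at 95% significance and 80% power (real calculators use their own statistical models, so treat the output as a ballpark figure):

```typescript
// Approximate visitors needed per variation, using the classic
// two-proportion sample size formula.
const Z_ALPHA = 1.96; // two-sided 95% significance
const Z_BETA = 0.84;  // 80% power

function sampleSizePerVariation(
  baselineRate: number, // e.g. 0.03 for a 3% conversion rate
  relativeMde: number   // e.g. 0.2 to detect a 20% relative lift
): number {
  const p1 = baselineRate;
  const p2 = baselineRate * (1 + relativeMde);
  const pBar = (p1 + p2) / 2;

  const numerator =
    Z_ALPHA * Math.sqrt(2 * pBar * (1 - pBar)) +
    Z_BETA * Math.sqrt(p1 * (1 - p1) + p2 * (1 - p2));

  return Math.ceil((numerator / (p2 - p1)) ** 2);
}

console.log(sampleSizePerVariation(0.03, 0.2)); // ≈ 13,900 visitors per variation
```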

For example, here’s the calculator in action:

Sample size calculator showing baseline conversion rate, minimum detectable effect, and the resulting sample size per variation (credit: www.optimizely.com)

Estimating your test duration

Now, it’s time for maths! Let’s say I have, on average, 180 visitors per day. That means I’m getting roughly 5,400 people on my site per month.

When I run the A/B test, this is how my traffic split will look:

Variation A: 50% of total traffic

Variation B: 50% of total traffic

This means that each variation of my test will see about 2,700 visitors per month.

So to reach my sample size per variation, I’d have to run my test for almost 10 months.

Whoa. That’s a long time! In fact, it’s too long. I want to make these informed changes to my site faster.

That means I have to decide: what risks am I willing to accept?

In this next instance, I’ve made some tough decisions.

I’ve decided to increase the MDE, meaning the test will only be able to detect the new variation as a winner if it produces quite a large change.

Additionally, I’ve lowered the confidence quite a bit, to 89%. This means that whichever variation wins, I’ll be less confident than before that the result isn’t down to chance.

However, this gets me down to a much more reasonable three-month testing duration.
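Here’s the arithmetic from both scenarios as a quick sketch. The sample sizes (26,000 and 8,000 per variation) are made-up figures consistent with the ten-month and three-month estimates above, and the 28-day floor reflects the four-week minimum discussed below:

```typescript
// Rough test duration from daily traffic and required sample size.
function estimateDurationDays(
  dailyVisitors: number,      // total site traffic per day
  samplePerVariation: number, // output of the sample size calculator
  variations = 2              // A and B
): number {
  const dailyPerVariation = dailyVisitors / variations;
  // Never run a test for less than four weeks, whatever the maths says.
  return Math.max(Math.ceil(samplePerVariation / dailyPerVariation), 28);
}

console.log(estimateDurationDays(180, 26_000)); // 289 days: almost 10 months
console.log(estimateDurationDays(180, 8_000));  // 89 days: about 3 months
```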

Do not cut a test short just because it has reached statistical significance. Early in a test, significance can appear (and disappear) by chance, so if you turn off a test after a week because it has hit significance, you’re deciding that it won or lost without enough data. That makes it extremely likely you’ll ship a change to your site that may be detrimental.

At the very least, every single test should run for a minimum of four weeks.

Understanding the results

Now that you’re in test mode and know how long you should let it run, it’s a waiting game. Platforms like Google Optimize and Optimizely will declare a winner for you and report the findings so you know which variation won.

As you get further into testing, there’s more nuance and subtlety to how these statistics work, and you’ll start seeing phrases like “confidence intervals” and “primary KPI lifts”, but don’t worry about that just yet. If you’re looking to do a deep dive on this, check out this post.

But the best course of action if you’re just starting to test is — just get started!

I’ve run a test, now what?

So you’ve run your first test, found a winner with Variation B — what should you do next? Well, first of all, implement that new form you tested and start enjoying your improved conversion rate.

After that, it’s back to the drawing board. Take a look at the other problems you’ve identified, create a new solution, and start testing.

As you test more on the site, you will hopefully start seeing a decrease in bounce rate and an increase in your primary metric.

Feel free to jump around and start identifying other goals you want to meet and problems on your site. The best way to optimise your site is by creating tests focused on solving a problem for visitors while working towards your goal — so keep hunting for problems. It’s all in the iteration.
