Understanding A/B Testing: The Basics

Why do some online shops instantly convince us to click ‘buy’ while others make us bounce in seconds? Of course, quality brands help – but most things online now have multiple alternatives. So, it often comes down to clever design and a smooth path to conversion. A/B testing is a smart way for businesses and creators to try out different elements, designs, and features to see what appeals most to their audience.
This quick process compares two versions of a website, email, ad copy or landing page that are identical in every way except for one key feature. For example, if you want to test different headlines on your homepage and see which attracts more clicks, you can randomly display one headline to half your visitors (A) and another variant (B) to the remaining half. You can then measure the number of clicks each version got over time and choose the one that worked better for your business goals.
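To make the mechanics concrete, here is a rough Python sketch of that 50/50 split. Everything in it is hypothetical - the assign_variant helper, the headline copy and the click log are made up purely to illustrate random assignment and per-variant tallies, not lifted from any real tool.

```python
import hashlib

# Hypothetical helper: assign each visitor to "A" or "B" based on a hash of their
# visitor ID, so the same person always sees the same headline on repeat visits.
def assign_variant(visitor_id: str) -> str:
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

HEADLINES = {
    "A": "Free shipping on every order",        # control headline (example copy)
    "B": "Delivered to your door in 48 hours",  # challenger headline (example copy)
}

# Made-up click log: (visitor_id, clicked_headline) pairs.
events = [("u101", True), ("u102", False), ("u103", True), ("u104", False), ("u105", True)]

shown = {"A": 0, "B": 0}
clicks = {"A": 0, "B": 0}
for visitor_id, clicked in events:
    variant = assign_variant(visitor_id)
    shown[variant] += 1
    clicks[variant] += int(clicked)

for variant in ("A", "B"):
    rate = clicks[variant] / shown[variant] if shown[variant] else 0.0
    print(f"Headline {variant}: {clicks[variant]}/{shown[variant]} clicks ({rate:.0%})")
```

Hashing the visitor ID rather than flipping a coin on every page load keeps the experience consistent for returning visitors, which is one common way of keeping the split fair.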
There are hundreds of things you can split test - from a single call-to-action button on your page, web design elements like colour schemes and menu bars, email subject lines and copy content, product placement or even different images on your site. And there aren’t any limits. You can make as many incremental changes as you want until you’re satisfied with the results. A/B tests are fairly common now, but not many businesses use them well despite how easily they can transform conversions.
In my experience working with high-growth brands at my creative agency, ongoing A/B testing has consistently supported revenue growth quarter after quarter. There’s no reason not to do it properly (if you have the time).
Identifying Key Metrics for Success

Ever wondered how online stores decide whether a change to their website is a big hit or just a miss? If you think it's magic, you’re not alone. But online fashion retailers know that understanding what makes customers click ‘add to cart’ and eventually buy is all about tracking the right numbers.
From what I can tell, everyone talks about conversion rate like it’s the only number that matters in the world - the percentage of site visitors who actually buy something. That’s because, well, it kind of is (at least for profit-obsessed e-commerce brands). Retailers often start there.
But seasoned business owners usually look at other data - things like average order value (how much people spend), bounce rate (how quickly people leave the site), and cart abandonment rate (how many leave their basket without buying). These numbers offer clues into what might be broken or surprisingly effective in your sales pipeline. There are some slightly more niche metrics too.
For example, fashion brands selling to young men might want to track their Return on Ad Spend (ROAS) for TikTok shopping ads. If most customers arrive through Google search instead, then cost per acquisition (CPA) for search ads and revenue per visit might matter more. There’s no one-size-fits-all formula - each business has its own key figures. But if you’re new to A/B testing or e-commerce, it’s probably best not to get too bogged down by lesser-known metrics - at least in the beginning.
Tracking simple things like conversion rate and AOV should suffice for most businesses starting out with website tests.
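If it helps to see those numbers side by side, here is a small, purely illustrative Python sketch that turns made-up totals from an analytics export into the metrics mentioned above - conversion rate, AOV, bounce rate, cart abandonment, ROAS, CPA and revenue per visit.

```python
# Illustrative figures only - swap in your own analytics export.
sessions = 12_000              # total site visits in the test window
orders = 360                   # completed purchases
revenue = 19_800.00            # total revenue from those orders
single_page_sessions = 5_400   # visits that left after one page
carts_created = 900            # sessions that added something to the basket
ad_spend = 4_500.00            # what you paid for the traffic (e.g. TikTok or search ads)

conversion_rate = orders / sessions              # share of visitors who bought
average_order_value = revenue / orders           # AOV: how much people spend per order
bounce_rate = single_page_sessions / sessions    # how quickly people leave the site
cart_abandonment = 1 - orders / carts_created    # baskets started but never bought
roas = revenue / ad_spend                        # return on ad spend
cpa = ad_spend / orders                          # cost per acquisition
revenue_per_visit = revenue / sessions

print(f"Conversion rate:   {conversion_rate:.2%}")
print(f"AOV:               {average_order_value:.2f}")
print(f"Bounce rate:       {bounce_rate:.1%}")
print(f"Cart abandonment:  {cart_abandonment:.1%}")
print(f"ROAS:              {roas:.2f}x")
print(f"CPA:               {cpa:.2f}")
print(f"Revenue per visit: {revenue_per_visit:.2f}")
```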
Crafting Effective Variations for Your Tests

How do you figure out what your customers want, when they don't even seem to know what they want? I've been in this business long enough to realise that people can be absolutely certain about their shopping preferences - and have them all go out the window the moment they see a sale. It seems that even the most seasoned marketers struggle to pin down what resonates with their demographic.
The best way to approach this, then, seems to be by testing a range of options. I don't mean put a thousand choices on your site and hope for the best - no one wants that. What I do mean is to try different things, measure results, and go with what works - with a healthy dose of creativity thrown in.
Let's say you're running an email campaign and want to keep up with the times. Try 2-3 different versions of the same email - one with a video or GIF, one with audio, and one with simple text.
Check which one performed best and stick to it for future emails. You might be surprised by what your customers respond to - sometimes it's as simple as a reminder email on Thursday morning rather than Sunday night. The trick lies in testing thoroughly and recording data consistently so you can stay on top of customer preferences. Another solid tip would be to make your variations as distinct as possible so it's easier to track which performed best.
This approach works equally well for product descriptions, ad creatives, discount offers, and more.
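As a rough illustration of recording data consistently, here is a tiny Python sketch that tallies opens and clicks for three hypothetical email variants like the ones above and ranks them by click-through rate. The send_log entries and variant names are made up for the example.

```python
from collections import defaultdict

# Made-up send log: one (variant, opened, clicked) entry per recipient.
send_log = [
    ("video", True, True), ("video", True, False), ("video", False, False),
    ("audio", True, False), ("audio", False, False), ("audio", True, True),
    ("plain_text", True, True), ("plain_text", True, True), ("plain_text", False, False),
]

stats = defaultdict(lambda: {"sent": 0, "opened": 0, "clicked": 0})
for variant, opened, clicked in send_log:
    stats[variant]["sent"] += 1
    stats[variant]["opened"] += int(opened)
    stats[variant]["clicked"] += int(clicked)

# Rank variants by click-through rate so the winner for the next send is obvious.
for variant, s in sorted(stats.items(), key=lambda kv: kv[1]["clicked"] / kv[1]["sent"], reverse=True):
    print(f"{variant:>10}: open rate {s['opened'] / s['sent']:.0%}, click rate {s['clicked'] / s['sent']:.0%}")
```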
Analyzing Results: What the Data Tells You

Have you ever stared at a spreadsheet full of numbers and felt your mind turn to soup? You’re not alone - data is a tricky business, especially if you’re not someone who’s been elbow-deep in Google Analytics since the mid-2000s. But all these numbers are trying to tell us something. There’s a story in there somewhere - it just takes a little curiosity and patience to figure out what.
If you’ve set up an A/B test, you’ve probably got two or more groups with different versions of whatever you’re testing. Say, an email campaign with two subject lines, or an ad with two images. The way I see it, the key thing to remember is that if the test was set up correctly, each group should be similar enough that any difference in performance between them is probably down to that one little change.
The first thing most people look for is the conversion rate - how many people did what we wanted them to do. That could be buying something, signing up for something, or any other action we want. If Group A has more people converting than Group B, then it might look like that’s the winner.
But sometimes those differences are down to chance or even due to some funky factors outside our control - like if Group B had more women than men and the product was related to makeup. There are also other data points to consider, like time spent on page, bounce rate, open rates, or number of pages viewed per session - depending on what you’re testing for, of course.
It helps put things into context because sometimes what we see as the ‘right’ outcome isn’t always so cut and dried. Sometimes there’s a trade-off between bounce rate and sales - and figuring out which one is better for your business long term can help you make better decisions down the line. But the best way I’ve found to untangle all this is by getting comfortable with things like confidence intervals and statistical significance calculators (there are lots of free ones online). Some argue this isn’t totally necessary, but I personally find it reassuring when my bias tries to sneak in through the back door and nudge me toward the option I like better.
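For the curious, this is roughly what those free calculators do under the hood - a two-proportion z-test on the conversion counts. The sketch below uses only Python's standard library and made-up numbers; treat it as an illustration of the idea rather than a drop-in analytics tool.

```python
from math import sqrt, erf

def two_proportion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Return (z score, two-sided p-value) for the gap between two conversion rates."""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    std_error = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / std_error
    # Two-sided p-value from the normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Made-up numbers: variant B converts better, but is the gap more than noise?
z, p = two_proportion_z_test(conversions_a=180, visitors_a=6000,
                             conversions_b=228, visitors_b=6000)
print(f"z = {z:.2f}, p = {p:.3f}")
print("Likely a real difference" if p < 0.05 else "Could easily be chance - keep testing")
```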
And finally - don’t get too attached to whatever comes out as ‘the winner’. There’ll always be another test tomorrow.
Implementing Changes Based on Findings

Have you ever wondered what to do once you’ve finally got the data from your split test? If you’re like most people, you might be quite anxious to get the numbers and finally start taking action. While this may seem like a straightforward decision, it's easy to get carried away and start making a bunch of changes before evaluating the numbers properly. Implementing changes based on findings from A/B testing is rarely as simple as picking a winner and ditching the other version.
It often requires looking at your data through a variety of different lenses to see if there are any nuances or unexpected results that weren’t initially obvious. Looking at your data this way can help you get more value from your test and avoid mistakes caused by misinterpreting it. The way I see it, the main thing to keep in mind when implementing changes based on split tests is the impact they have on your conversion rates and sales numbers.
It's easy to start focusing on less relevant metrics like page views and email opens, but it is almost always best to look at conversion rates first and foremost. By starting with your most important metrics, it’s easier to stay focused on what matters. Implementing changes based on findings from A/B tests is fairly straightforward when there is a clear winner, but often there isn’t. It’s important in these situations not to simply disregard the entire test, but instead to try to understand why the result came back inconclusive and what new information you can gain from that.
Continuous Testing: The Path to Ongoing Improvement

Has your online store hit a bit of a plateau with sales? Maybe your conversion rate is at an all-time low and you’re thinking to yourself, “why is this happening?” Or maybe you’re seeing strong sales growth and want to keep the momentum going. That’s where continuous A/B testing comes in. The purpose of this marketing technique is to identify what has helped improve your sales and conversions, and what hasn’t.
But the trick to making it work is to not just stop testing as soon as you see improvement. The opposite actually - it’s crucial that you keep testing because that’s how you continue to grow and maintain those improved numbers. Marketing trends are constantly evolving, so conducting one or two tests and assuming they’ll be effective for the next few years may seem like a good idea now, but it can lead to more harm than good. Testing needs to be continuous because customers are always changing, whether in buying behaviour or audience preferences.
It’s all about finding patterns that will help you make better business decisions in the long run. Continuous testing is ongoing improvement - and who doesn’t want that?
Your brand will flourish if you keep up with marketing trends by listening to your customers through these tests - their actions speak louder than words sometimes. Although it may seem tedious to run tests on a regular basis, once they become part of your routine, business growth will feel more natural and less forced because your customers will be part of your journey every step of the way.