Driving Email Marketing Growth Through A/B Tests

Email marketing can either be a channel that you set and forget, or one that drives significant business growth through evolving A/B tests.

Often a large driver of site traffic and user conversion, email marketing is a channel like no other. Its consent-based entry gives businesses a direct line into subscriber inboxes, offering the opportunity to educate, convert, and build relationships through one-off campaigns and marketing automation.

But once the basic emails are set up to send on your behalf, how do you, as a business, pinpoint the next areas of opportunity for growth?

Developing scalable A/B tests

A/B tests give email marketers the opportunity to compare apples to apples. Whether you're trying to understand if a different call to action results in higher purchase conversion, or whether a plain-text email drives a higher click-through rate than its image-only counterpart, the possibilities are endless. However, A/B tests can be time-consuming: you must not only strategize and implement each test, but also wait for statistical significance before making an inference from the results.

And if you're a marketer wearing 17 other hats, you will want to make the right decisions when A/B testing so that each test has the potential to deliver scalable results.
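How long you wait for significance is largely a question of sample size. As a rough illustration (my sketch, not from the article), the following Python snippet estimates how many recipients each variant needs before a given lift in click-through rate could plausibly reach significance; the baseline rate and expected lift are placeholder assumptions you would replace with your own numbers.

import math

def sample_size_per_variant(p_base, lift, ):
    """Rough per-variant sample size for a two-proportion test
    (normal approximation, two-sided 5% significance, 80% power)."""
    p_test = p_base * (1 + lift)
    z_alpha = 1.96  # two-sided 5% significance level
    z_beta = 0.84   # 80% power
    variance = p_base * (1 - p_base) + p_test * (1 - p_test)
    n = ((z_alpha + z_beta) ** 2 * variance) / (p_base - p_test) ** 2
    return math.ceil(n)

# Example: baseline click-through rate of 3%, detecting a 20% relative lift.
print(sample_size_per_variant(0.03, 0.20))

With a 3% baseline click-through rate and a 20% relative lift, this works out to roughly 13,900 recipients per variant, which is why small lists can take several sends to reach a verdict.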

Defining a goal 

Why are you A/B testing? Are you trying to increase the click-through rate? Are you trying to increase conversion-to-purchase? Both are desirable outcomes, but by selecting a single north-star metric to aim for, you can kick off ideating your test with that goal in mind!
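For concreteness, both metrics are simple ratios. A minimal sketch with placeholder counts (definitions vary by team; these are common ones):

delivered, unique_clicks, purchases = 10_000, 300, 45  # placeholder counts

click_through_rate = unique_clicks / delivered  # 0.03, i.e. 3.0%
conversion_rate = purchases / delivered         # 0.0045, i.e. 0.45%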

Choosing a variable

It's crucial not to over-commit when A/B testing. To reduce complications, choose one variable to test, so that if your test comes back with statistical significance you can pinpoint that specific variable as the likely cause of the result, instead of attempting to compare apples to oranges. Here are some variables I'd recommend selecting from (a sketch of a single-variable split follows the list):

  • Subject line

  • From Name

  • Call to action

  • Design 

  • Body copy

  • Email delay
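To make the single-variable principle concrete, here is a minimal sketch (my illustration, not from the article) of a random 50/50 split in Python, where the only thing that differs between the two groups is the subject line:

import random

subscribers = [f"user{i}@example.com" for i in range(10_000)]  # placeholder list
random.shuffle(subscribers)

half = len(subscribers) // 2
variant_a = subscribers[:half]  # receives subject line A
variant_b = subscribers[half:]  # receives subject line B

# Everything else (send time, design, body copy) stays identical,
# so any difference in results can be attributed to the subject line.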

Creating a hypothesis

Now it's time to tie your goal and your variable together. Write out a hypothesis outlining why you think testing your variable will lead to an improvement in your goal metric.

I encourage all marketers to document their A/B tests; I keep an Experimentation Kickoff Document that I frequently lean on.
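As an illustration only (the field names are my own assumptions, not the actual document), a kickoff record might capture fields like these:

from dataclasses import dataclass

@dataclass
class ExperimentKickoff:
    goal_metric: str  # e.g. "click-through rate"
    variable: str     # e.g. "subject line"
    hypothesis: str   # why you expect the variant to win
    next_steps: str   # what you will do with a significant result

test = ExperimentKickoff(
    goal_metric="click-through rate",
    variable="subject line",
    hypothesis="A long-form subject line will lift click-through rate",
    next_steps="Roll out the winning style to future campaigns",
)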

Outlining the next steps

Before you dive into the creation of your A/B test, you need to define your next steps. If you test Variant A against Variant B and B comes back with a statistically significant win, what are you going to do with the results? Who on your team should be made aware of them? If you don't plan to do anything with your learnings, you might want to revisit the drawing board and ask why you're running the test in the first place.

I frequently speak to marketers who are A/B testing subject lines, and often the subject lines being tested are so similar in nature that no strategy can be scaled from the results. For example:

Variant A: Beat the heat with everything new this week. 

Variant B: New things are in, check them out. 

In this example, there is no action item to take from the test, as there is little meaningful difference between the two variants. The test has no next steps and is thus a waste of time.

Making inferences

Let's say you are running a subject line test in which you are testing the use of long-form text against short-form. Your hypothesis is that Variant A, the long-form version, will result in a higher click-through rate and conversion than Variant B, because it offers more information up front instead of requiring the user to click through to read more.

Your results come back statistically significant, and you can now scale this strategy to future email campaigns outlining new products and features. Your users love information up front! 
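To show what a statistically significant result looks like in practice, here is a minimal two-proportion z-test sketch in Python (standard library only; the send and click counts are invented for illustration):

import math

def two_proportion_z_test(clicks_a, sent_a, clicks_b, sent_b):
    """Two-sided z-test on the difference between two click-through rates."""
    p_a, p_b = clicks_a / sent_a, clicks_b / sent_b
    pooled = (clicks_a + clicks_b) / (sent_a + sent_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Invented numbers: Variant A (long-form) vs. Variant B (short-form).
z, p = two_proportion_z_test(540, 10_000, 460, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # here p is roughly 0.009, below 0.05

A p-value below your chosen threshold (commonly 0.05) is what justifies scaling the winning variant; above it, treat the difference as noise.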

Recommendations

You won't always see statistical significance, and that's okay. Half or more of my tests come back without statistical significance, and it's something I have to accept. I try to run at least 2-3 tests a quarter in order to keep evolving my email marketing automations and one-off campaigns to be the best-performing they can be. What worked for me years ago might not work anymore, and I'm grateful that A/B testing allows me to uncover that!


Written by Naomi West, Product Marketing Manager at Parcel

Bio: Naomi is a Lifecycle Marketing expert and email marketing consultant. She primarily focuses on SaaS and e-commerce businesses and has worked on marketing strategy in over 25 ESPs and MAPs. She currently works as a Product Marketing Manager at Parcel, an email coding tool.
