Stephen Pratley
3 min read · Mar 5, 2018


Ask a question in any digital marketing group and somewhere in the list of somewhat unhelpful answers you’ll get this one:

“Test it”

Now, for a long time I considered testing in marketing to be a fairly high-level skill, one that wasn’t worth troubling yourself with until you had a significant level of traffic, usually from your own email list.

I considered myself lucky. I got into the email game early, and I’d built up several lists of a few thousand subscribers back through the scrappy little affiliate sites where I learnt most of my first lessons about converting traffic.

I was heavily influenced by the testing techniques of email marketing which, until relatively recently, were built on big campaign “broadcast” emails.

Testing the campaign way

In this model, let’s say you have 1,000 emails to send a campaign to.

Imagine I’m going to email my list about this blog post to drive a bit of engagement.

I might take 250 emails and send them an email with subject line A:
“When should I start testing?”

Then I take another 250 emails and send them an email with subject line B:
“The best way you can start email testing today, and one you’re not ready for.”

I send the two campaigns to randomly selected names from my list, and they go out at the same time.
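If you like seeing the mechanics spelled out, here’s a minimal Python sketch of that random split. The list and segment sizes are just the example numbers from above; in practice your email platform does this for you.

```python
import random

# Hypothetical subscriber list; in reality this comes from your email platform.
subscribers = [f"person{i}@example.com" for i in range(1000)]

random.shuffle(subscribers)       # randomise so neither segment is biased
segment_a = subscribers[:250]     # gets subject line A
segment_b = subscribers[250:500]  # gets subject line B
rollout = subscribers[500:]       # held back for the winning subject line
```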

Let’s say I get a reasonable, but not exceptional, open rate of 30%. That means 75 people open each email.

I’m trying to drive clicks to my site though, so let’s say I also get a 30% click-through rate on those opens, which would be a great result.

That’s 22.5 people, call it 25, clicking through to my blog from each test segment.
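Spelled out as a quick calculation, with the same assumed rates as above:

```python
segment_size = 250   # emails per test segment
open_rate = 0.30     # reasonable-but-not-exceptional open rate
click_rate = 0.30    # clicks as a share of opens; a great result

opens = segment_size * open_rate   # 75 opens per segment
clicks = opens * click_rate        # 22.5 clicks; call it 25
print(opens, clicks)               # 75.0 22.5
```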

Now, that’s if the results were exactly equal for both subject lines, which almost never happens.

Instead, let’s imagine that test A does well and gets 30 clicks, whilst test B is a bit of a damp squib and gets just 20.

Insignificant = unreliable

That looks like a solid difference, but actually, because of the low numbers, the chance of this being a fluke is over 10%. In other words, if you ran ten similar tests, at least one would give you a result that wouldn’t stand up if you tested it on more data.

If you’re interested in the maths, free online significance calculators will do the sums for you.
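And if you’d rather see the calculation itself, here’s a minimal pure-Python sketch of a standard two-proportion z-test, which is one common way those calculators compare two click rates (some use a chi-squared or Fisher’s exact test instead):

```python
from math import sqrt, erfc

def two_proportion_pvalue(clicks_a, n_a, clicks_b, n_b):
    """Two-sided p-value for the difference between two click rates."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = abs(p_a - p_b) / se
    return erfc(z / sqrt(2))  # two-sided tail probability

# Our example: 30 clicks vs 20 clicks from 250 emails each
print(two_proportion_pvalue(30, 250, 20, 250))  # ~0.14, i.e. over 10%
```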

So, we’ve burned through half our list already (250 + 250 in our tests, leaving us 500 to roll out to with our “winner”), but have no clear result.

This is where the stats geeks will tell you not to bother testing until you have more data. But I think there’s another answer now, and it lies in the automation tools built into any modern email marketing platform worth its name.

Testing over time

The place I say you should start is right at the beginning of the relationship: in your welcome emails.

Welcome emails aren’t usually sent out in one big batch, but dripped out one-by-one as each person signs up to your list, and this is where you can set up your first test.

Testing automated emails works a little differently: the tool simply alternates which version it sends to each new subscriber until it has enough data to call a result.
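Conceptually, the rotation looks something like this sketch (your email platform handles it for you inside the automation):

```python
import itertools

# Round-robin assignment: each new sign-up gets the next variant in turn,
# so the test fills up evenly as subscribers trickle onto the list.
variants = itertools.cycle(["Subject line A", "Subject line B"])

def on_signup(email):
    subject = next(variants)
    print(f"send welcome email with {subject!r} to {email}")

for email in ["first@example.com", "second@example.com", "third@example.com"]:
    on_signup(email)
```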

It can be fascinating to watch: first you think you have a winner, then they swap places, then over a longer period one wins out as the best solution and you can scrap the others.
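You can get a feel for why the leader flips around by simulating a slow drip of subscribers. A rough sketch, with made-up click rates:

```python
import random

rate_a, rate_b = 0.12, 0.08  # assumed "true" click rates for each variant
clicks = {"A": 0, "B": 0}
sent = 0

for batch in [10, 10, 10, 50, 100, 300]:  # subscribers per variant per batch
    for _ in range(batch):
        clicks["A"] += random.random() < rate_a
        clicks["B"] += random.random() < rate_b
    sent += batch
    leader = "A" if clicks["A"] >= clicks["B"] else "B"
    print(f"{sent} sent per variant: A={clicks['A']} B={clicks['B']} leader={leader}")

# Run this a few times: the early leader often flips before A,
# the genuinely better variant, pulls ahead for good.
```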

It might take a few days, it might take you all year, but you’ll get there.

There is a great Chinese proverb I have pinned to my wall that says,

“The best time to plant a tree is twenty years ago. The second best time is today.”

All you need is a couple of subject-line ideas to get started today, and in a few months you’ll look back knowing, not guessing, that you’re using a winner.

If you want to find out more about putting this into practice, as well as other conversion tips based on real-world test results, start by getting my best conversion marketing tips at http://theconversionacademy.com
