How to Use A/B Testing to Improve Campaign Performance

What is A/B Campaign Testing?

A/B testing is a method of comparing two or more versions of an email, with a single variable changed between them, to determine which version leads to better conversion rates and overall performance.

What are the Benefits of A/B Testing your Campaigns?

A/B testing and analysis can help you uncover how minor adjustments to your campaigns, and the way you send them, can lead to high-impact improvements in open, click, and response rates. This knowledge can then be used to create even more effective templates or future campaigns.

What Variables Can be Tested?

Subject Lines and Preview Text
The subject lines you choose have a huge impact on your overall campaign performance. They are the part of the email that grabs your recipients’ attention immediately and convinces them to open it and read more, so you really want to get them right. Try testing the tone of the message, using personalization, asking questions, using emojis, and more. For more tips on writing effective subject lines, click here

Call to Action 
What is more effective: asking candidates to reply to your email and let you know their availability, or to click on a Meeting Scheduler link and book a slot directly in your diary? Are candidates more likely to click on a text link or a button link to opt in? What text is most effective in persuading candidates to sign up for a talent network? Should the call to action go at the beginning or end of an email? Does it matter what color the call-to-action text or button is? The answers to all of these questions, and many more, can be found by A/B testing this variable.

For example, you might want to run an A/B test comparing two different versions of your call-to-action text. Alternatively, you might keep the call-to-action text the same but test whether a button or linked text gets better results.
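
To make this concrete, here is a minimal sketch of how you could tally click-through rates for two call-to-action variants once you have an export of who received and clicked each version. The file name and column names ("variant" and "clicked") are hypothetical; your own export will look different.

    # Minimal sketch (hypothetical export): compare click-through rates for two
    # call-to-action variants from a CSV with "variant" and "clicked" columns.
    import csv

    def click_rates(path):
        counts = {}  # variant -> [clicks, recipients]
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                stats = counts.setdefault(row["variant"], [0, 0])
                stats[0] += int(row["clicked"])  # 1 if the recipient clicked, else 0
                stats[1] += 1
        return {variant: clicks / total for variant, (clicks, total) in counts.items()}

    print(click_rates("cta_test_export.csv"))
    # e.g. {'A_button': 0.18, 'B_text_link': 0.12}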

Email Style and Layout

Setting aside the actual content of the message, the way a message is presented can have a significant bearing on how it performs. Test the font or font size, the color of the text, whether to use bold or italic text, how many paragraphs or how much spacing is optimal, and whether to include pictures, or where to place them.

Timing 

What time of day should you send your emails if you want more responses? There is a whole heap of data out there, but the ultimate answer is that you need to test different times to find out what works best with your target audience. Try sending emails at different times during the morning and afternoon to see which performs better for your company. Don’t forget to try scheduling messages for weekends - you’d be surprised how effective that can be!

Additional Touchpoints
How many follow-up touchpoints are optimal? How many days should you allow between touchpoints? You may, for example, wish to run two versions of the same campaign with three identical touchpoints, setting the touchpoints in one version to trigger every 3 days and in the other every 7 days, then compare the analytics.

How to Run an A/B Test

  1. Decide which single variable you want to test. Make sure to keep all other variables equal; otherwise, results may be skewed by something that you aren't tracking.

  2. Decide which results matter and ensure that you are able to track them effectively. For example, if your goal is to get as many people as possible to sign up for and attend a careers event via a form, then only comparing open and reply rates won’t be particularly insightful. Instead, you may want to focus more on click rates, or to analyze what percentage of all the people who signed up for the event received campaign A versus campaign B.

  3. Build two recipient lists and put these into pools. In order to achieve conclusive results, you need to test with two or more audiences that are roughly equal. Sending one campaign to Marketing professionals and the amended campaign to Software Engineers might produce different results for reasons that have more to do with how likely individuals in those respective markets are to reply than with the amended campaign variable. Likewise, differences in geographic region, seniority level, how long a contact has been in your database, the source of the contact, and so on might all have a bearing on campaign response. A randomized list is most effective for this. Try, for example, sorting by name and then splitting your list in half from there (see the random-split sketch after this list).

  4. Create both campaigns. The easiest way to do this is to build your control campaign, then duplicate and edit it. Remember to include A/B, or a reference to the element being tested, in the campaign title so it is easy to track. Send campaign A to one talent pool and campaign B to the other.

  5. Analyze the analytics for both campaigns to see which performed better (the sketch after this list shows one simple way to check whether a difference is more than noise).

  6. Confirm the results of the test. No single A/B test is conclusive forever: the result may only hold for a particular demographic, or it may be a short-term novelty effect. It is therefore worth repeating the experiment with other campaigns and different audiences.

  7. Share best practice. Once you have found practices that work in your markets, make sure these are communicated across your recruiting team, and try creating some templates that incorporate these best practices as standard. These can be evolved and improved as more A/B tests are performed and more data comes to light.
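
To illustrate the mechanics behind steps 3 and 5-6, the sketch below shows one way to randomly split a contact list into two equal pools and to gauge whether an observed difference in response rates is likely to be more than noise, using a simple two-proportion z-test. The contact names and counts are made up for illustration; this is not a built-in feature of any particular platform.

    # Minimal sketch: randomly split a contact list into two equal pools (step 3)
    # and check whether an observed difference in response rates looks real
    # rather than random noise (steps 5-6). All data here is hypothetical.
    import random
    from math import sqrt

    def split_into_pools(contacts, seed=42):
        shuffled = contacts[:]              # copy so the original list is untouched
        random.Random(seed).shuffle(shuffled)
        mid = len(shuffled) // 2
        return shuffled[:mid], shuffled[mid:]

    def two_proportion_z(successes_a, n_a, successes_b, n_b):
        p_a, p_b = successes_a / n_a, successes_b / n_b
        p_pool = (successes_a + successes_b) / (n_a + n_b)
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        return (p_a - p_b) / se

    pool_a, pool_b = split_into_pools([f"contact_{i}" for i in range(1000)])
    z = two_proportion_z(successes_a=90, n_a=500, successes_b=60, n_b=500)
    print(len(pool_a), len(pool_b), round(z, 2))   # 500 500 2.66

A z value of roughly 1.96 or more (in either direction) corresponds to the conventional 5% significance threshold; with small pools, a bigger gap in rates is needed to clear it.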

Top Tip

Ideally, you want to measure the effectiveness of your campaigns and candidate engagement activity as far down the funnel as possible and over the long term. In other words, don’t just track open, click, or response rates on an individual campaign; instead, examine how different communication styles affect whether candidates end up applying for, or even accepting, roles.

To do this, you may wish to decide on testing 2 or 3 communication approaches over the course of a year and create a corresponding global tag for each one. Candidates can then be randomly segmented and allocated Tag A, B, or C at the point their profile is created, and sent that form of communication throughout the year.

Data Explorer can then be used to compare how many candidates within each communication testing group have made it to interview or offer status or stage. This can be further refined and filtered to show only results that meet certain additional criteria, e.g. within a particular region, pool, or department, or for a particular gender.
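
If you ever want to reproduce that comparison outside the product, a minimal sketch is shown below. It assumes a hypothetical candidate export with "comm_tag" and "stage" columns and simply calculates, for each communication test group, the share of candidates who reached interview, offer, or hired stage; adapt the column names and stage values to whatever your own export contains.

    # Minimal sketch, assuming a hypothetical candidate export with "comm_tag"
    # and "stage" columns: for each communication test group, calculate the share
    # of candidates who made it to interview, offer, or hired stage.
    import csv

    DOWN_FUNNEL = {"interview", "offer", "hired"}

    def funnel_rates(path):
        totals, reached = {}, {}
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                tag = row["comm_tag"]                  # e.g. "A", "B" or "C"
                totals[tag] = totals.get(tag, 0) + 1
                if row["stage"].lower() in DOWN_FUNNEL:
                    reached[tag] = reached.get(tag, 0) + 1
        return {tag: reached.get(tag, 0) / total for tag, total in totals.items()}

    print(funnel_rates("candidates_export.csv"))
    # e.g. {'A': 0.07, 'B': 0.11, 'C': 0.06}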