Get Closer to Your Audiences: Pro Tips for A/B-Split Testing

L-Soft <[log in to unmask]>
Tue, 8 Mar 2022 19:28:33 -0500

One of the true power tools we have as email marketers and communicators is A/B-split testing. LISTSERV® Maestro 10.1 provides A/B-split testing with sampling and auto-repeat delivery to help you deliver the most effective messages and personalized email content to your audiences.

In short, you send two or more different versions of a mailing to random splits of your subscribers and compare the response rates to see which version performs better. Voila! Now you have insights to optimize future messages and campaigns. That said, to get the most meaningful data from A/B-split testing, you'll need a bit more thought and planning.

These five tips will help you make the most of your A/B-split testing.


1. Make Sure That You Have Enough Subscribers

A/B-split testing is based on statistics, probability and sampling. The smaller your sample size, the larger the likelihood that random statistical noise can influence the data, making it difficult to reach statistically significant conclusions. So first, make sure that you have enough subscribers to obtain meaningful data.

Case in point: If you only have 100 subscribers, send two variants to 50 subscribers each, and get a 30 percent open-up rate from the first variant and a 20 percent open-up rate from the second, you might conclude that the first variant was far more successful. However, those percentages correspond to just 15 and 10 open-ups respectively -- a difference of only five. That makes it impossible to say with any confidence whether the results are attributable to the differences between the two variants or simply to statistical noise. So focus on growing your list of subscribers before you spend serious time on A/B-split testing.
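
To see just how loud that noise is, here is a quick simulation sketch in Python. The scenario and numbers (a true open-up rate of 25 percent for both variants) are our own illustration, not figures from LISTSERV Maestro:

    import random

    random.seed(42)
    TRUE_RATE = 0.25     # assume both variants truly perform identically
    GROUP_SIZE = 50      # 100 subscribers split into two groups of 50
    TRIALS = 10_000

    big_gaps = 0
    for _ in range(TRIALS):
        opens_a = sum(random.random() < TRUE_RATE for _ in range(GROUP_SIZE))
        opens_b = sum(random.random() < TRUE_RATE for _ in range(GROUP_SIZE))
        # Count trials where the observed open-up rates differ by
        # 10 percentage points or more (5 open-ups out of 50)
        if abs(opens_a - opens_b) >= 5:
            big_gaps += 1

    print(f"{big_gaps / TRIALS:.0%} of trials showed a 10+ point gap")

With groups of 50, typically around 30 percent of the simulated trials show a gap of 10 points or more -- even though the two "variants" are identical. Pure chance routinely produces differences as large as the 30-versus-20 split described above.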


2. Decide What You Want to Test

Once your subscriber database is large enough for A/B-split testing, the next step is to decide exactly what you want to evaluate. Test one aspect of your message at a time, or it'll be impossible to identify which change triggered the improvement in performance among variants. For example, subject lines are commonly tested because they determine whether subscribers will even open the email message. If you don't pull your subscribers in, any compelling content that you may offer is never seen. Time of delivery is another widely used test. Subscribers are more likely to open a message if it arrives on a day and time when they're interested and able to act on it. When it comes to content, you can try a different layout, image, copy or call-to-action. The possibilities are endless.


3. Decide How to Measure Success

When conducting A/B-split testing with an email marketing campaign, you're measuring campaign success in terms of open-ups, click-throughs or conversions. These three metrics are not the same and can give contradictory results. For example, let's say that you work with alumni relations at Goode University and send an email marketing message to 5000 subscribers on a Tuesday morning with these two subject line variants but otherwise identical content:

Variant A:
Not Too Late! Donate by April 1 to the Goode University Scholarship Fund

Variant B:
Make a Difference: Can Goode University Count on Your Support?

One week later, you check your data and find that Variant A had an open-up rate of 24 percent and a click-through rate of 8 percent, compared to an open-up rate of 28 percent and a click-through rate of 12 percent for Variant B. This would make it seem like Variant B was the clear winner. However, when you dig deeper, you find that Variant A had a conversion rate of 5 percent -- meaning that 5 percent of the recipients made a donation -- while Variant B had a conversion rate of only 3 percent. Why the discrepancy? Perhaps the subject line of Variant B was more effective at enticing recipients to open the email and click through to the donation website, but its lack of a clear end date for the campaign made an immediate donation seem less urgent.

So, as you can see, it's important to decide which metric to use for evaluating success. In this case, is Variant B more valuable because more recipients viewed the email and clicked through to the donation website, which can lead to a larger number of future donations? Or is Variant A more valuable because it led to a larger number of donations now, even if fewer total recipients bothered to open the email message?
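
The absolute numbers make the trade-off concrete. Assuming the 5,000 recipients were split evenly between the variants (2,500 each -- the split is our assumption, not stated above), the rates translate as follows:

    # Convert the example's rates into absolute counts per variant,
    # assuming an even 2,500/2,500 split of the 5,000 recipients.
    recipients_per_variant = 2500
    variants = [
        ("Variant A", 0.24, 0.08, 0.05),
        ("Variant B", 0.28, 0.12, 0.03),
    ]
    for name, open_rate, click_rate, conversion_rate in variants:
        print(name,
              "| opens:", round(recipients_per_variant * open_rate),
              "| clicks:", round(recipients_per_variant * click_rate),
              "| donations:", round(recipients_per_variant * conversion_rate))
    # Variant B wins on opens (700 vs. 600) and clicks (300 vs. 200),
    # yet Variant A brings in more donations (125 vs. 75).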


4. Evaluate Whether Your Results Are Statistically Significant

For data to be considered statistically significant, you want to achieve at least a 95 percent confidence level, which means that there is no more than a 5 percent probability that a difference of the observed size would have occurred by chance if the two variants actually performed the same. There are mathematical formulas for calculating whether a given set of results is statistically significant and at what confidence level. Many free A/B-split test significance calculators are also available online: you simply enter your sample sizes and conversion numbers in a form, and the calculator determines whether the results are statistically significant.

Let's look at an example:

Variant A was sent to 400 recipients and 40 of them clicked through to the website (10 percent rate).

Variant B was sent to 400 recipients and 52 of them clicked through to the website (13 percent rate).

Are these results statistically significant, meaning that we can conclude that the better results for Variant B are highly likely attributable to the differences between the two variants? No. With only 400 recipients per variant, a 3-percentage-point gap falls short of the 95 percent confidence level -- a difference of this size could too easily have occurred by chance.

On the other hand, let's look at another example:

Variant A was sent to 1600 recipients and 160 of them clicked through to the website (10 percent rate).

Variant B was sent to 1600 recipients and 208 of them clicked through to the website (13 percent rate).

As you can see, the click-through rates are the same as before, but because of the larger sample size, the results are now statistically significant not only at the 95 percent confidence level but also at the stricter 99 percent level.
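
If you'd rather check the math yourself than rely on an online calculator, a standard two-proportion z-test is enough. Here is a minimal Python sketch (the function name and structure are our own) that reproduces both examples above:

    from math import sqrt, erf

    def two_proportion_z_test(clicks_a, n_a, clicks_b, n_b):
        """Two-tailed z-test for the difference between two rates."""
        p_a, p_b = clicks_a / n_a, clicks_b / n_b
        pooled = (clicks_a + clicks_b) / (n_a + n_b)  # pooled rate under H0
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        # Two-tailed p-value from the standard normal CDF
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        return z, p_value

    # First example: 40/400 vs. 52/400 -> z about 1.33, p about 0.18,
    # i.e. only about an 82 percent confidence level: not significant.
    print(two_proportion_z_test(40, 400, 52, 400))

    # Second example: 160/1600 vs. 208/1600 -> z about 2.66, p about
    # 0.008: significant at both 95 and 99 percent confidence levels.
    print(two_proportion_z_test(160, 1600, 208, 1600))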


5. Be Realistic with Your Expectations

Remember, there are no miracles or magic bullets. Your testing won't reveal a simple tweak that leads to an earth-shattering improvement in campaign performance. Frequently, the data will be inconclusive. Don't let this discourage you. Instead, take an incremental approach. A/B-split testing is a long-term commitment and an ongoing process dedicated to learning about and more closely connecting with your particular -- and unique -- target audience. Whenever you do find statistically significant results, implement what you learned in your next campaign, then move on to test something else, and build on it.


See how LISTSERV Maestro can help your organization:
https://www.lsoft.com/products/maestro.asp

Happy Testing and Emailing!


########################################################################

EmailRules - A LISTSERV Blog for Email Communicators

View Blog: http://www.lsoft.com/blog

Unsubscribe: http://community.emailogy.com/scripts/wa-community.exe?SUBED1=EMAILRULES

Contact the EmailRules list facilitators at: [log in to unmask]

