For your experiments to be as effective as possible, there are a few things to think about before you create them.
Consider your goal
Your goal for an experiment affects everything about its design: the campaign you send, the variants you create, and the way you interpret the results.
To get clarity about what you want to achieve, define an objective for the experiment and a hypothesis against which you can measure the benefit of a given variant.
For example, if you want to increase opens, your hypothesis might be, "Messages with emojis are opened more frequently than those without them." When the experiment ends, compare your hypothesis to the variant's performance to see if your hypothesis was correct.
TIPS
After defining an objective and hypothesis for your experiment, write them down. This way, you can refer to them when you review the experiment's performance.
If there's a minimum level of lift you expect to see, include it in your hypothesis so you have a metric against which to measure success. The previous example might read, "Messages with emojis are opened 25% more often than those without them." Variants that meet or exceed the stated metric might be candidates to replace the control campaign.
Select a campaign
The campaign you choose as a baseline should align with the objective of the experiment. For example, if you're experimenting with ways to increase first-time purchases, you could experiment with a variant of a welcome campaign that expands the existing message with a discount on a user's first purchase.
When choosing a campaign, consider whether you want to improve the performance of a campaign that's running, or test different approaches for a new campaign. Both are possible—we've got you covered!
Recurring campaigns
Creating an experiment from a recurring campaign does not create a recurring experiment. If you create an experiment for a recurring campaign, the experiment will run for that specific campaign, but not any of the subsequent ones.
Experiments for any campaign in a recurring series (parent or child) must be created before that campaign is sent, and they only apply to the first campaign sent. If you're setting up a recurring campaign that you intend to experiment with, schedule the campaign to be created well ahead of its send time so you have enough time to set up the experiment (and any related templates).
Recipient timezone-based campaigns
If you're creating a time-based experiment using a campaign that's set up to use a recipient's timezone, be sure to also select Send in user's time zone when you're setting up the experiment.
Decide what to experiment with
Depending on your objective, consider these guidelines when deciding what to experiment with. Keep in mind that it's easier to attribute conversion improvements to a specific change (for example, the inclusion of graphics) than to a broad change (for example, a change to the general tone of your messaging).
If you want to test large-scale changes, try breaking them into discrete experiments so you can more easily evaluate which change had the greatest impact.
NOTE
The options available when setting up an experiment depend on your campaign's message channel.
From name and sender variants
When you receive a message, are you more likely to open it if you recognize the sender by a company name, or if it's from someone you know? Does receiving a message that reflects a recent transaction you've had with a company motivate you to open it? To find out if the same's true for your users, test from name and sender fields. It's one of the easiest ways to ensure that the first field a user sees creates the right impression.
You might try using:
- Your company name, to trigger brand recognition
- A brand catch phrase
- The name of a person customers engage with at your company (maybe an agent assigned to them)
- A transactional value (like help@example.com)
Subject line variants
A great place to start, subject lines can have a big impact on email open rates. Try testing more than one subject line to see which one your users respond to most. It's generally best to test substantial changes, rather than small ones, when experimenting with subject lines. For example, you might:
- Lengthen or shorten the subject line
- Write a call to action
- Highlight a noteworthy new service, product, or feature
- Include a user-specific detail (maybe their first name)
- Add an emoji
Preheader variants
Experiment with preheader text (the text that follows your subject line) to extend a message, or to add something new. Consider using only one or two key words, or a quick and recognizable phrase.
Catchy text that extends, rather than repeats, your subject line has the best potential to impact open rates. Keep in mind that preheader text is often displayed in a smaller or fainter font than the subject line, and some email clients display only a short snippet of it, if any.
Send time variants
In addition to its impact on conversions, sending messages when users are able to engage with them is a great way to show that you respect their time (and sending them at inopportune times has the opposite effect).
Send time variants (available for email, SMS, and push blast campaigns) allow you to send a campaign to your entire user list at different send times so you can see which time gets the best response from your users.
If you're not already leveraging Send Time Optimization to maximize opens and clicks, try experimenting with a couple different times to see which performs best. When choosing times, consider whether your target audience is more likely to pay attention to messages on weekdays (in the morning, on a lunch break, or after an evening commute) or on weekends (probably not too early).
Send Time Optimization variants
Send Time Optimization variants allow you to test the effectiveness of campaigns that are sent using the STO feature. They are available only for blast email campaigns and require no setup.
STO experiments are automatically sent to half of the user list at the optimized time, and to the other half at the campaign's configured send time. You can't choose a winner for an STO experiment.
NOTES
Triggered email and push notification campaigns, and Send Email and Send Push journey tiles, can't use Send Time Optimization experiments.
If Send Time Optimization isn't visible, it's because your Iterable project doesn't have enough historical user engagement data to calculate optimized send times.
Message body variants
Create an experiment that tests variants of the message body, the part of a message that most influences user actions like clicks, purchases, and custom events (for example, visiting your website, signing up for a service, or even unsubscribing). It's a great place to start experimenting when you want to influence a specific conversion metric.
With message body experiments, you can test big changes, such as a new layout, text, or images, or more subtle changes, such as adjusting button or link positions.
Everything variants
You can choose Everything as a variant type if you want to create variants that test more than one of the preceding parts of a message. In most cases, it's best to choose a single part of the message for testing so you can more easily attribute user responses to a specific value.
Holdout groups
Set up one of the types of experiments mentioned in this article with a holdout group — a group of users who won't receive the campaign, but for whom Iterable still tracks conversions. Then, compare the holdout group's conversion rate to a campaign's conversion rate to evaluate the campaign's effectiveness.
See Configuring Experiments for details about creating holdout groups.
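To make the comparison concrete, here's a minimal sketch of the math (the numbers are made up for illustration; this isn't an Iterable feature, just the arithmetic you'd apply to the two conversion rates):

```python
# Illustrative only: comparing a campaign's conversion rate to its
# holdout group's rate to estimate the campaign's incremental effect.
campaign_conversions, campaign_size = 540, 9_000  # hypothetical numbers
holdout_conversions, holdout_size = 40, 1_000

campaign_rate = campaign_conversions / campaign_size  # 6.0%
holdout_rate = holdout_conversions / holdout_size     # 4.0%

# Relative lift attributable to the campaign
lift = (campaign_rate - holdout_rate) / holdout_rate
print(f"Incremental lift: {lift:.0%}")  # Incremental lift: 50%
```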
Pick start and stop times
Generally, send experimental campaigns during periods when user activity is average. Sending variants of an important campaign during a busy time of year might impact your business goals or skew experiment results. For example, you might not want to experiment with a revenue-generating campaign during a busy holiday shopping season.
Be sure to let an experiment run long enough that users have time to engage with it, and that you have enough days' worth of data to analyze the results.
Choose a sample size
In Iterable, you can choose to test campaigns with all of your users or with just a subset. The best sample size varies by experiment and by your baseline metric. In general, when testing with a subset of users, a larger sample size improves your ability to detect small improvements with statistical significance and higher confidence — we typically recommend at least 1,000 users per variant.
If you need additional help deciding on the best size for the test group, consider using a sample size calculator like the one reviewed in the Academy course Next Level Experimentation: From Ideation to Analysis.
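For a rough estimate without a calculator, here's a minimal sketch using the standard two-proportion sample size approximation (nothing Iterable-specific; the baseline rate and lift below are made-up example values):

```python
# Rough per-variant sample size for an A/B test on a rate metric
# (e.g. open rate), using the standard two-proportion approximation.
# Illustrative sketch only; a proper sample size calculator may differ.
from statistics import NormalDist

def sample_size_per_variant(baseline_rate, lift, alpha=0.05, power=0.8):
    """Approximate users needed per variant to detect a relative lift."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + lift)  # rate if the hypothesized lift is real
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_beta) ** 2 * variance) / (p2 - p1) ** 2
    return round(n)

# Detecting a 25% lift on a 20% open rate at 95% confidence, 80% power:
print(sample_size_per_variant(0.20, 0.25))  # roughly 1,100 users per variant
```

Note how this lines up with the general recommendation above: small lifts on low baseline rates require substantially larger samples.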
API considerations
When experimenting with a campaign that uses the POST /api/events/track and POST /api/commerce/trackPurchase endpoints, remember that:
- When possible, Iterable automatically attributes an incoming purchase or custom conversion event to a campaign, including experiment variants, when it has an active attribution period.
- When you call Iterable's API, be sure to provide the templateId in addition to the campaignId. Iterable uses the templateId field to attribute a purchase or custom event to the right campaign variant, because each variant in an experiment has a unique templateId.
- Iterable associates an incoming purchase or custom conversion event with an active holdout group based on the user and the holdout group's attribution period.
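For example, a purchase-tracking call might look something like this sketch. The endpoint and the campaignId and templateId fields come from this article; the other fields and all values are illustrative, so check Iterable's API reference for the authoritative request schema:

```python
# Illustrative sketch of tracking a purchase against an experiment variant.
# Field names other than campaignId/templateId, and all values, are
# examples; see Iterable's API reference for the full request schema.
import requests

ITERABLE_API_KEY = "YOUR_API_KEY"  # placeholder

payload = {
    "user": {"email": "user@example.com"},
    "items": [
        {"id": "sku-123", "name": "T-shirt", "price": 25.0, "quantity": 1}
    ],
    "total": 25.0,
    "campaignId": 123456,  # the experiment's campaign
    "templateId": 654321,  # identifies which variant the user received
}

response = requests.post(
    "https://api.iterable.com/api/commerce/trackPurchase",
    json=payload,
    headers={"Api-Key": ITERABLE_API_KEY},
)
response.raise_for_status()
```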
For more tips on tracking and attribution, read Tracking Conversions, Purchases, and Revenue.
NOTE
If you're using a purchase or custom conversion event, also be sure that your project settings allow and save that event. For more information, see Managing Custom Events.
Want to learn more?
For more information about some of the topics in this article, check out this Iterable Academy course. Iterable Academy is open to everyone — you don't need to be an Iterable customer!
Also check out Kath Pay's blog post, Successful Split Tests: 4 Steps to Effective Email Experiments, for pointers and best practices on setting up an effective A/B test in Iterable.