For your experiments to be as effective as possible, there are a few things to think about before you create them.
Consider your goal
Your goal for an experiment affects everything about its design: the campaign you send, the variants you create, and the way you interpret the results.
To get clarity about what you want to achieve, define an objective for the experiment and a hypothesis against which you can measure the benefit of a given variant.
For example, if you want to increase opens, your hypothesis might be, "Messages with emojis are opened more frequently than those without them." When the experiment ends, compare your hypothesis to the variant's performance to see if your hypothesis was correct.
After defining an objective and hypothesis for your experiment, write them down. This way, you can refer to them when you review the experiment's performance.
If there's a minimum level of lift you expect to see, include it in your hypothesis so you have a metric against which to measure success. The previous example might read, "Messages with emojis are opened 25% more often than those without them." Variants that meet or exceed the stated metric might be candidates to replace the control campaign.
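Checking a variant against a stated lift target is a simple calculation. In this sketch, the open rates and the 25% threshold are illustrative numbers, not data from a real experiment:

```python
def lift(control_rate: float, variant_rate: float) -> float:
    """Relative lift of a variant's rate over the control's rate."""
    return (variant_rate - control_rate) / control_rate

# Illustrative open rates: 20% for the control, 26% for the emoji variant.
control_open_rate = 0.20
variant_open_rate = 0.26

observed_lift = lift(control_open_rate, variant_open_rate)  # ~0.30, a 30% relative lift
meets_target = observed_lift >= 0.25  # the hypothesis stated a 25% minimum lift

print(f"Observed lift: {observed_lift:.0%}, meets 25% target: {meets_target}")
```

A variant that clears the threshold this way is a candidate to replace the control; one that misses it tells you the hypothesis, as stated, wasn't supported.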
Select a campaign
The campaign you choose as a baseline should align with the objective of the experiment. For example, if you're experimenting with ways to increase first-time purchases, you could experiment with a variant of a welcome campaign that expands the existing message with a discount on a user's first purchase.
When choosing a campaign, consider whether you want to improve the performance of a campaign that's already running or test different approaches for a new campaign. Both are possible.
Creating an experiment from a recurring campaign does not create a recurring experiment. The experiment applies only to the specific campaign you create it for, not to any subsequent campaigns in the series.
Experiments for any campaign in a recurring series (parent or child) must be created before that campaign is sent. If you're setting up a recurring campaign you intend to experiment with, schedule the campaign to be created well ahead of its scheduled send time so you have enough time to set up the experiment (and any related templates).
Recipient timezone-based campaigns
If you're creating a time-based experiment using a campaign that's set up to use a recipient's timezone, be sure to also select Send in user's time zone when you're setting up the experiment.
Decide what to experiment with
Depending on your objective, consider these guidelines when deciding what to experiment with.
The options available when setting up an experiment depend on your campaign's message medium. The following options are available for email campaigns.
To improve email opens and open rates...
Experiment with parts of an email including from name and sender, subject line, preheader text, and send times.
Choose from these email fields if you're testing opens and open rates:
From and sender name - Be sure that the first field a user sees creates the right impression.
Subject line - A great place to start. Try more than one subject line to see which one users respond to most.
Preheader text - Use the text that follows your subject line to extend a message, or to add something new.
Send time - Send your message at a date and time that's appropriate for the majority of your users. Try creating more than one variant to test send time.
See Campaign Variant Types for suggestions on creating effective variants.
To test improvements to clicks, purchases, or custom events...
Create an experiment that tests variants of the message's body, the part of your message that most influences user actions like clicks, purchases, and custom events (such as clicking through to your website, signing up for a service, or even unsubscribing). It's a great place to start experimenting when you want to influence a specific conversion metric.
Experimenting with send times is another useful test when assessing impact to clicks, purchases, and other user events.
See Campaign Variant Types for suggestions on the types of changes you might include in a variant.
To evaluate Send Time Optimization on open rates...
Create a Send Time Optimization experiment. STO experiments use STO-optimized times for half of an email or push notification blast campaign's recipients, and the campaign's configured send time for everyone else.
See Creating Variants and Launching an Experiment to learn about this type of experiment.
Triggered email and push notification campaigns, and Send Email and Send Push journey tiles, can't use Send Time Optimization experiments.
To assess the impact of sending a campaign...
Set up one of the types of experiments mentioned in this article with a holdout group—a group of users who won't receive the campaign, but for whom Iterable still tracks conversions. Then, compare the holdout group's conversion rate to a campaign's conversion rate to evaluate the campaign's effectiveness.
See Setting up an Experiment for some suggestions on creating holdout groups.
Holdout groups can only be used in experiments associated with blast campaigns. They can't be used with triggered or journey campaigns.
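The comparison itself is straightforward: the gap between the campaign's conversion rate and the holdout group's conversion rate estimates the conversions the campaign drove. All counts below are illustrative:

```python
def conversion_rate(conversions: int, audience_size: int) -> float:
    """Fraction of an audience that converted during the attribution period."""
    return conversions / audience_size

# Illustrative counts: 95,000 users received the campaign, 5,000 were held out.
campaign_rate = conversion_rate(conversions=4_200, audience_size=95_000)
holdout_rate = conversion_rate(conversions=150, audience_size=5_000)

# The holdout group's rate is the baseline that would have happened anyway;
# the difference estimates the campaign's incremental effect.
incremental_rate = campaign_rate - holdout_rate
print(f"Campaign: {campaign_rate:.2%}, holdout: {holdout_rate:.2%}, "
      f"incremental: {incremental_rate:.2%}")
```

If the two rates are nearly identical, the campaign may not be driving conversions at all, which is exactly the question a holdout group is designed to answer.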
Pick start and stop times
You should generally send experimental campaigns during periods when user activity is average. Sending variants of an important campaign during a busy time of year might impact your business goals or skew experiment results. For example, you might not want to experiment with a revenue-generating campaign during a busy holiday shopping season.
Be sure to let an experiment run long enough that users have time to engage with it and to ensure that you have enough days worth of data to analyze its results.
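As a rough guide for "long enough" (this is a standard statistical rule of thumb, not an Iterable feature), the two-proportion sample-size formula estimates how many recipients each variant needs before a difference of a given size becomes detectable. The rates and significance settings below are illustrative assumptions:

```python
import math

def sample_size_per_variant(p1: float, p2: float,
                            z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate recipients needed per variant to detect a change from
    rate p1 to rate p2 (defaults: two-sided alpha=0.05, power=0.80)."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_beta) ** 2) * variance / (p2 - p1) ** 2
    return math.ceil(n)

# E.g. detecting an open-rate change from 20% to 25%:
n = sample_size_per_variant(0.20, 0.25)
print(n)  # each variant needs roughly this many recipients
```

Dividing that per-variant number by your typical daily sends gives a ballpark minimum duration; smaller expected differences require substantially more recipients, and therefore more days of data.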
When possible, Iterable automatically attributes an incoming purchase or custom conversion event to a campaign (including an experiment variant) that has an active attribution period.
When you call Iterable's API, be sure to provide the templateId in addition to the campaignId. Iterable uses the templateId field to attribute a purchase or custom event to the right campaign variant, because each variant in an experiment has a unique templateId.
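For example, a purchase reported to Iterable's /api/commerce/trackPurchase endpoint can carry both IDs in its payload. This sketch only builds the request body; the email, item, and ID values are placeholders, and the HTTP call itself is omitted:

```python
import json

# Placeholder values; real IDs come from your Iterable campaign and variant.
payload = {
    "user": {"email": "user@example.com"},
    "items": [{"id": "sku-123", "name": "T-shirt", "price": 25.0, "quantity": 1}],
    "total": 25.0,
    "campaignId": 1234567,   # the campaign the purchase belongs to
    "templateId": 7654321,   # identifies the specific experiment variant
}

# POST this body to /api/commerce/trackPurchase with your Api-Key header
# (for example, with the requests library; omitted to keep the sketch offline).
body = json.dumps(payload)
print(body)
```

Without the templateId, Iterable can still attribute the event to the campaign, but it can't tell which variant the user received, so the variant-level results of your experiment would be incomplete.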
Iterable associates an incoming purchase or custom conversion event with an active holdout group based on the user and the holdout group's attribution period.
For more tips on tracking and attribution, read Tracking Conversions, Purchases, and Revenue.
If you're using a purchase or custom conversion event, also be sure that your project settings allow and save that event. For more information, see Managing Custom Events.
Want to learn more?
Check out a great blog, Successful Split Tests: 4 Steps to Effective Email Experiments, by Kath Pay, for some pointers and best practices on setting up an effective A/B test in Iterable.