Use an experiment to send multiple variations of a campaign's message (modifying elements such as subject line, message content, preheader text, from name, and send time), measure the effectiveness of each, select a winner, and use that winning variation for all remaining sends.
This guide describes how to set up and configure experiments in Iterable.
To learn how to create and configure an experiment for an Iterable campaign, read through the following sections.
Step 1: Create an experiment
Iterable allows you to create experiments for both new and existing campaigns.
New campaigns (standalone, or in a workflow)
To create an experiment for a new campaign:
After editing your campaign's content as needed, click Create A/B Experiment.
This button is also visible when editing a campaign in a workflow node.
Existing campaigns (from the Experiments page)
To create an experiment for an existing campaign (in the draft, running, or ready state):
Navigate to Messaging > Experiments and click Create New Experiment.
Choose Blast or Triggered and select the existing campaign for which you'd like to create an experiment.
Click Choose (in the upper-right corner).
Existing campaigns (from the Campaign Analytics page)
To create an experiment from an existing campaign's Campaign Analytics page:
Navigate to Messaging > Campaigns and open the campaign for which you'd like to create an experiment.
Click Create A/B Test.
Step 2: Configure the experiment
Next, configure your experiment's settings. When you're done, click Create (in the upper-right corner). These settings include:
- A name to help you find the experiment again later
- The component of your message on which you'd like to experiment (for example, subject or send time; available options depend on your campaign's message medium)
- How your campaign sends variations and selects a winner
Randomly split variations
With Randomly split variations enabled, Iterable sends the original message and its variations in roughly equal quantities. However, it does not automatically choose a winning variation.
This option behaves differently for blast and triggered campaigns:
- For a blast campaign, an experiment with this option enabled ends when Iterable finishes sending the campaign's messages.
- For a triggered campaign, an experiment continues to run until you manually select a winner (the control or one of its variations).
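Conceptually, a random split assigns each send to the control or one of the variations with roughly equal probability. A minimal sketch of that behavior (function and variation names are hypothetical, not Iterable's implementation):

```python
import random

def assign_variation(variations):
    """Pick the control or a variation with equal probability.
    Illustrative sketch only; Iterable's actual assignment logic
    is internal to the platform."""
    return random.choice(variations)

arms = ["control", "variation_a", "variation_b"]
counts = {arm: 0 for arm in arms}
for _ in range(9000):
    counts[assign_variation(arms)] += 1
# Each arm receives roughly 3000 of the 9000 sends.
```

Because no winner is chosen automatically under this option, the split simply continues (for triggered campaigns) until you pick a winner yourself.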
Specifying a list percentage (instead of randomly splitting variations)
If you don't enable Randomly split variations, the options in the Experiment type section depend on the type of campaign you're sending: blast or triggered.
For blast campaigns, you can specify:
- Percentage of list to include in experiment - The percentage of your send list Iterable should use to determine your campaign's winning variation.
- Select winning experiment variation after - The number of hours and minutes after which Iterable should select the winning variation. Allow enough time for your campaign's recipients to interact with it. After this period, Iterable selects a winner and uses it for the campaign's remaining messages.
If you're using this option, make sure not to schedule your campaign with respect to recipient time zones. Iterable needs to be able to select a winner at the end of the experiment's time period.
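As a rough arithmetic example, suppose a 50,000-person list, a 20% experiment percentage, and three arms (the control plus two variations). The test-phase and winner-phase send counts break down like this (a sketch only; Iterable's exact rounding and allocation may differ):

```python
def experiment_send_counts(list_size, experiment_pct, num_arms):
    """Back-of-envelope math for a blast experiment: how many messages
    go out during the test phase, roughly how many per arm, and how
    many remain for the winning variation. Illustrative only."""
    experiment_sends = int(list_size * experiment_pct / 100)
    per_arm = experiment_sends // num_arms
    winner_sends = list_size - experiment_sends
    return experiment_sends, per_arm, winner_sends

print(experiment_send_counts(50_000, 20, 3))  # (10000, 3333, 40000)
```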
For triggered (and workflow) campaigns, you can specify:
- Sends per variation - The minimum number of times Iterable should send each variation (round robin) before selecting a winner.
For experiments that don't use Randomly split variations, Iterable sends each variation at least the specified minimum number of times. Then, it uses a multi-armed bandit algorithm to pick a winning variation and uses that variation for 90% of your campaign's future sends (and the other variations for the remaining 10%). Iterable continues to monitor your variations and selects a new winner if (and when) it makes sense.
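The 90/10 behavior described above resembles an epsilon-greedy bandit strategy. The sketch below illustrates the general idea (it is not Iterable's actual algorithm; all names and the `stats` shape are assumptions for illustration):

```python
import random

def choose_variation(stats, epsilon=0.1):
    """Epsilon-greedy sketch: 90% of sends use the current
    best-performing variation; the remaining 10% explore the others,
    so a new winner can emerge if performance shifts.
    `stats` maps variation name -> (conversions, sends)."""
    rates = {v: (c / s if s else 0.0) for v, (c, s) in stats.items()}
    best = max(rates, key=rates.get)
    if random.random() < epsilon:
        others = [v for v in stats if v != best]
        return random.choice(others) if others else best
    return best

stats = {"control": (50, 1000), "variation_a": (80, 1000)}
# Most calls return "variation_a", the current leader.
```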
To add a holdout group to a blast (not triggered) campaign, enable this option. For more information, read Holdout Groups.
A holdout group is a portion of your send list to whom Iterable doesn't send your blast campaign. However, Iterable still tracks the purchase or conversion rates for these people, allowing you to compare baseline conversion performance against campaign conversion performance.
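The baseline comparison a holdout group enables amounts to simple lift arithmetic, sketched below (hypothetical numbers; the function name is an assumption, not an Iterable metric):

```python
def conversion_lift(campaign_conversions, campaign_size,
                    holdout_conversions, holdout_size):
    """Difference between the campaign's conversion rate and the
    holdout group's baseline rate. Illustrative arithmetic only."""
    campaign_rate = campaign_conversions / campaign_size
    holdout_rate = holdout_conversions / holdout_size
    return campaign_rate - holdout_rate

# A 4% campaign conversion rate vs. a 2.5% holdout baseline
# yields a lift of 1.5 percentage points (0.015).
lift = conversion_lift(400, 10_000, 25, 1_000)
```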
Select winner based on
The metric Iterable should use when selecting a winning experiment variation: opens, clicks, custom conversion events, or purchases.
- It's not yet possible to use SMS clicks to determine the winner of an experiment.
- An Iterable project's Show revenue setting determines whether it displays revenue metrics or custom conversion metrics—and which option you'll see listed in the Select winner based on section while setting up an experiment.
Step 3: Create variations
Next, create variations for your campaign. Use the left-hand column to inspect your control template and each variation that you've added.
To create a new variation, click Add New Variation. For each one that you create, modify the message template as needed.
When you're done, click Save & Review.
Step 4: Review and launch your experiment
Now, review your variations and edit them as needed.
Then, launch the experiment, as described below.
Launching an experiment for an active triggered or workflow campaign
If your experiment is associated with a triggered or workflow campaign that has already been activated, just click Launch Experiment.
Launching an experiment for a triggered or workflow campaign that is not yet active
If your experiment is associated with a triggered or workflow campaign that has not yet been activated:
Click Save & Continue, which takes you to the campaign's Review & Launch page.
When you're ready to start the campaign, click Activate Campaign. For a workflow campaign, you can alternatively let the campaign activate automatically when a user reaches its node in the workflow (after you enable the workflow).
Navigate to Messaging > Experiments and open your experiment.
Click Launch Experiment.
Launching an experiment for a blast campaign
If your experiment is associated with a blast campaign:
Click Save & Continue, which takes you to the campaign's Review & Launch page.
If the campaign has been scheduled for the future, the experiment will launch at the same time.
Otherwise, send or schedule the campaign by clicking Send Campaign Right Now or Schedule Campaign For Later, and the experiment will launch with it.
Step 5: Select a winner (or edit the experiment)
After launching an experiment, you can check on the performance of its variations on the Experiment Analytics page. To open this view, navigate to Messaging > Experiments and click on an experiment.
From this page:
- To end the experiment and use the control for all remaining sends (after the experiment's test period), click End Experiment.
- To end the experiment and use a particular variation for all remaining sends, click Use Variation.
- To delete a poorly performing variation, click Edit and then delete variations as needed.
- To add a variation, click Edit and then Add New Variation.
Send time experiments
Send time experiments allow you to experiment with the date and time associated with a blast campaign. This can help you find a send time that maximizes conversions.
Send time experiments are not available for triggered campaigns.
For a send time experiment, you'll first configure a start date and time (including timezone) for the control variation. This is the earliest send time for your campaign, and must be in the future.
Then, you'll specify a send time for each variation (they must be at least one hour apart).
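The spacing rules above (send times in the future, at least one hour apart) can be sketched as a small validation check. This is an illustrative helper, not part of Iterable's product or API:

```python
from datetime import datetime, timedelta

def valid_send_times(times, min_gap=timedelta(hours=1)):
    """Check that all send-time variations are in the future and that
    consecutive send times are at least one hour apart.
    Illustrative validation sketch only."""
    ordered = sorted(times)
    if ordered[0] <= datetime.now():
        return False  # the earliest (control) send time must be in the future
    return all(later - earlier >= min_gap
               for earlier, later in zip(ordered, ordered[1:]))

base = datetime.now() + timedelta(days=1)
ok = valid_send_times([base, base + timedelta(hours=1), base + timedelta(hours=3)])
```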
If your campaign's send lists have highly dynamic membership, launch the send time experiment as close to your campaign's first send as possible. This way, the lists' membership at the experiment's launch will be similar to their membership when the campaign sends.