Use an experiment to send multiple variations of a campaign's message (modifying things such as subject line, message content, preheader text, from name, and send time), measure the effectiveness of each, select a winner, and use that winning variation for all remaining sends.
This guide describes how to set up and configure experiments in Iterable.
Instructions
To learn how to create and configure an experiment for an Iterable campaign, read through the following sections.
Step 1: Create an experiment
Iterable allows you to create experiments for new and for existing campaigns.
New campaigns (standalone, or in a journey)
To create an experiment for a new campaign:
- Follow the instructions in Creating Blast Campaigns or Creating Triggered Campaigns to create a campaign.
- After editing your campaign's content as needed, click Create A/B Experiment.
TIP
This button is also visible when editing a campaign in a journey tile.
Existing campaigns (from the Experiments page)
To create an experiment for an existing campaign (in the draft, running, or ready state):
- Navigate to Messaging > Experiments and click Create New Experiment.
- Choose Blast or Triggered and select the existing campaign for which you'd like to create an experiment.
- Click Choose (in the upper-right corner).
Existing campaigns (from the Campaign Analytics page)
To create an experiment from an existing campaign's Campaign Analytics page:
- Navigate to Messaging > Campaigns and open the campaign for which you'd like to create an experiment.
- Click Create A/B Test.
Step 2: Configure the experiment
Next, configure various settings for your experiment. When you're done, click Create (in the upper-right corner).
- Experiment name - A name to help you find the experiment again later.
- Experiment with - The component of your message on which you'd like to experiment (for example, subject or send time; available options depend on your campaign's message medium).
- Experiment type - How your campaign sends variations and selects a winner.
Randomly split variations
With Randomly split variations enabled, Iterable sends the original message and its variations in roughly equal quantities. However, it does not automatically choose a winning variation.
NOTE
This option behaves differently for blast and triggered campaigns:
- For a blast campaign, an experiment with this option enabled ends when Iterable finishes sending the campaign's messages.
- For a triggered campaign, an experiment continues to run until you manually select a winner (the control or one of its variations).
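Iterable doesn't document the internal mechanics of the split, but conceptually it behaves like a uniform random assignment of recipients to variations. The following Python sketch (illustrative only, with hypothetical names; not Iterable's implementation) shows how uniform random assignment yields roughly equal send counts:

```python
import random

def assign_variation(variations: list[str]) -> str:
    """Pick a variation uniformly at random, so that over many sends
    each variation receives a roughly equal share of the traffic."""
    return random.choice(variations)

# Example: a control plus two variations, split across 9,000 recipients.
variations = ["control", "variation_1", "variation_2"]
counts = {v: 0 for v in variations}
for _ in range(9000):
    counts[assign_variation(variations)] += 1

print(counts)  # e.g. {'control': 3012, 'variation_1': 2986, 'variation_2': 3002}
```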
Specifying a list percentage (instead of randomly splitting variations)
If you don't enable Randomly split variations, the options in the Experiment type section depend on the type of campaign you're sending: blast or triggered.
For blast campaigns, you can specify:
- Percentage of list to include in experiment - The percentage of your send list to which Iterable sends messages in order to find your campaign's winning variation (the sketch below illustrates the arithmetic).
- Select winning experiment variation after - The number of hours and minutes after which Iterable should select your winning variation. Allow enough time for your campaign's recipients to interact with your messages. After this period, Iterable selects a winner and uses it for the campaign's remaining messages.
IMPORTANT
If you're using this option, make sure not to schedule your campaign with respect to recipient time zones. Iterable needs to be able to select a winner at the end of the experiment's time period.
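To make the arithmetic concrete, here's a small illustrative sketch (not Iterable's internal logic; the function and field names are hypothetical) of how a list-percentage experiment divides a send list among the control, the variations, and the eventual winner:

```python
def experiment_split(list_size: int, test_percentage: float,
                     num_variations: int) -> dict:
    """Illustrative arithmetic: how many recipients fall into the test
    pool, roughly how many see each variation, and how many receive the
    eventual winner. The control counts as one of the variations."""
    test_pool = int(list_size * test_percentage / 100)
    per_variation = test_pool // num_variations
    winner_sends = list_size - test_pool  # sent after winner selection
    return {"test_pool": test_pool,
            "per_variation": per_variation,
            "winner_sends": winner_sends}

# Example: 50,000 recipients, 20% in the experiment, control + 3 variations.
print(experiment_split(50_000, 20, 4))
# {'test_pool': 10000, 'per_variation': 2500, 'winner_sends': 40000}
```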
For triggered (and journey) campaigns, you can specify:
- Sends per variation - The minimum number of times Iterable should send each variation (round robin) before selecting a winner.
For experiments that don't use Randomly split variations, Iterable sends each variation at least the specified minimum number of times. Then, it uses a multi-armed bandit algorithm to pick a winning variation and uses that variation for 90% of your campaign's future sends (and the other variations for the remaining 10%). However, Iterable continues to monitor your variations and selects a new winner if (and when) it makes sense.
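Iterable doesn't publish the details of its algorithm, but the 90/10 behavior described above resembles an epsilon-greedy bandit policy with a 10% exploration rate. The following sketch (hypothetical, for illustration only; not Iterable's implementation) shows such a policy:

```python
import random

def choose_variation(stats: dict[str, tuple[int, int]],
                     epsilon: float = 0.1) -> str:
    """Epsilon-greedy selection: send the current best variation 90% of
    the time and explore the other variations 10% of the time. `stats`
    maps each variation name to (conversions, sends)."""
    def rate(v: str) -> float:
        conversions, sends = stats[v]
        return conversions / sends if sends else 0.0

    best = max(stats, key=rate)
    if random.random() < epsilon:
        others = [v for v in stats if v != best]
        return random.choice(others) if others else best
    return best

stats = {"control": (120, 1000), "variation_1": (155, 1000)}
print(choose_variation(stats))  # usually 'variation_1', occasionally 'control'
```

Because the conversion rates are re-read on every call, a variation that starts performing better later on can overtake the current best, which mirrors how Iterable can select a new winner over time.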
Holdout Groups
A holdout group is a portion of your send list to whom Iterable doesn't send your blast campaign. However, Iterable still tracks the purchase or conversion rates for these people, allowing you to compare baseline conversion performance against campaign conversion performance.
To add a holdout group to a blast (not triggered) campaign, enable this option. For more information, read Holdout Groups.
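As an illustration of the comparison a holdout group enables, the following sketch (hypothetical names; not an Iterable API) computes a campaign's relative conversion lift over its holdout baseline:

```python
def conversion_lift(campaign_conversions: int, campaign_sends: int,
                    holdout_conversions: int, holdout_size: int) -> float:
    """Relative lift of the campaign's conversion rate over the holdout
    group's baseline rate. A positive value means the campaign converted
    better than sending nothing at all."""
    campaign_rate = campaign_conversions / campaign_sends
    holdout_rate = holdout_conversions / holdout_size
    return (campaign_rate - holdout_rate) / holdout_rate

# Example: 4% conversion with the campaign vs. a 2.5% holdout baseline.
print(f"{conversion_lift(400, 10_000, 25, 1_000):.0%}")  # 60%
```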
Select winner based on
The metric Iterable should use when selecting a winning experiment variation: opens, clicks, custom conversion events, or purchases. Note that it's not yet possible to use SMS clicks to determine the winner of an experiment.
These Iterable metrics determine an experiment's winner:
- Open events — Unique Email Opens (filtered) metric
- Click events — Unique Email Clicks metric
- Custom conversion events — Unique Custom Conversions metric
- Purchase events — Unique Purchases metric
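To illustrate how a unique-metric comparison works, here's a small sketch (the data structure and field names are hypothetical, not Iterable's API) that picks the variation with the best rate for a chosen metric:

```python
def pick_winner(metrics: dict[str, dict[str, int]],
                metric: str = "unique_opens") -> str:
    """Compare variations on the rate of the chosen unique metric
    (unique events divided by sends) and return the best performer."""
    return max(metrics, key=lambda v: metrics[v][metric] / metrics[v]["sends"])

metrics = {
    "control":     {"sends": 5000, "unique_opens": 1100, "unique_clicks": 240},
    "variation_1": {"sends": 5000, "unique_opens": 1250, "unique_clicks": 210},
}
print(pick_winner(metrics, "unique_opens"))   # variation_1 (25% vs. 22%)
print(pick_winner(metrics, "unique_clicks"))  # control (4.8% vs. 4.2%)
```

As the example shows, different metrics can crown different winners, so choose the metric that best reflects your campaign's goal.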
Step 3: Create variations
Next, create variations for your campaign. Use the left-hand column to inspect your control template and each variation that you've added.
To create a new variation, click Add New Variation. For each one that you create, modify the message template as needed.
When you're done, click Save & Review.
Step 4: Review and launch your experiment
Now, review your variations and edit them as needed.
Then, launch the experiment, as described below.
Launching an experiment for an active triggered or journey campaign
If your experiment is associated with a triggered or journey campaign that has already been activated, just click Launch Experiment.
Launching an experiment for a triggered or journey campaign that is not yet active
If your experiment is associated with a triggered or journey campaign that has not yet been activated:
- Click Save & Continue, which takes you to the campaign's Review & Launch page.
- When you're ready to start the campaign, click Activate Campaign. For a journey campaign, you can alternatively let the campaign activate automatically when a user reaches its journey tile (after you enable the journey).
- Navigate to Messaging > Experiments and open your experiment.
- Click Launch Experiment.
Launching an experiment for a blast campaign
If your experiment is associated with a blast campaign:
- Click Save & Continue, which takes you to the campaign's Review & Launch page.
- If the campaign has been scheduled for the future, the experiment will launch at the same time.
- Otherwise, send or schedule the campaign by clicking Send Campaign Right Now or Schedule Campaign For Later, and the experiment will launch with it.
Step 5: Select a winner (or edit the experiment)
After launching an experiment, you can check on the performance of its variations on the Experiment Analytics page. To open this view, navigate to Messaging > Experiments and click on an experiment.
From this page:
- To end the experiment and use the control for all remaining sends (after the experiment's test period), click End Experiment.
- To end the experiment and use a particular variation for all remaining sends, click Use Variation.
- To delete a poorly performing variation, click Edit and then delete variations as needed.
- To add a variation, click Edit and then Add New Variation.
Send time experiments
Send time experiments allow you to experiment with the date and time associated with a blast campaign. This can help you find a send time that maximizes conversions.
INFO
Send time experiments are not available for triggered campaigns.
For a send time experiment, you'll first configure a start date and time (including time zone) for the control variation. This is the earliest send time for your campaign, and it must be in the future.
Then, you'll specify a send time for each variation; send times must be at least one hour apart (see the sketch below).
IMPORTANT
If your campaign's send lists have highly dynamic membership, launch the send time experiment as close to your campaign's first send as possible. This way, the list's membership at the start of the experiment will be similar to its membership at the start of the campaign.
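As an illustration of the constraints above, the following sketch (hypothetical; not an Iterable API) validates a set of proposed send times, checking that the earliest (the control) is in the future and that all times are at least one hour apart:

```python
from datetime import datetime, timedelta, timezone

def validate_send_times(send_times: list[datetime]) -> None:
    """Check the constraints described above: the control's send time
    (the earliest) must be in the future, and each send time must be at
    least one hour from its neighbors."""
    now = datetime.now(timezone.utc)
    ordered = sorted(send_times)
    if ordered[0] <= now:
        raise ValueError("The earliest send time must be in the future.")
    for earlier, later in zip(ordered, ordered[1:]):
        if later - earlier < timedelta(hours=1):
            raise ValueError("Send times must be at least one hour apart.")

# Example: a control tomorrow, with variations two and three hours later.
base = datetime.now(timezone.utc) + timedelta(days=1)
validate_send_times([base, base + timedelta(hours=2), base + timedelta(hours=3)])
```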