Iterable allows you to A/B test any part of your email, push, and SMS campaigns to maximize opens, clicks, conversions, or purchases. This overview will walk through how to set up an experiment, edit an experiment, and select a winner.
This document covers the following topics:
- Setting up an experiment
- Editing an experiment and selecting a winner
Setting up an experiment
A/B tests can be launched through three different methods:
- By navigating to Messaging > Experiments
- During the third step of the campaign creation process
- From the campaign analytics page of a pre-existing campaign
For each of these approaches, our experiment creator will guide you through four steps.
Choose a campaign
Experiments can be applied to both blast campaigns and triggered campaigns. When you launch the experiment creator through the Experiments sidebar, choose between Blast and Triggered and select the campaign you would like to run an experiment on. Only campaigns with a status of draft, running, or ready may be chosen for an experiment. To run an experiment based on a finished campaign, create a new copy of that campaign by cloning it.
If you choose to launch the experiment through either of the Campaigns methods, this step will be skipped.
Set up an experiment
In this section, you can choose the parameters and conditions for your experiment.
- Specify the title of the experiment for future reference.
- Select the component of your campaign you will be creating variations for testing: subject line, preheader text, from name and sender, email body, send time (blast campaigns only), or everything.
Options in this step are limited for non-email message mediums.
- Decide which metric you will be optimizing: Opens, Clicks, Purchases, or Custom Conversion.
You will see either Purchases or Custom Conversion, depending on your project's Show Revenue settings.
- Opens will be affected by the from name, subject line, and preheader.
- Clicks, conversions, and purchases will be affected by the design of the template.
Randomly Split Variations
If enabled, messages for all incoming triggered events will be split randomly and evenly based on the number of variations you create. The sends per variation will be close to equal for large sample sizes. With this setting, a winner will not be chosen automatically. You can compare the performance of the variations and either keep the control or choose a variation as the winner for subsequent sends.
Toggling this option behaves differently depending on whether the experiment is based on a blast or a triggered campaign:
For blast campaign experiments, all sends go out at once and the experiment finishes soon after it is launched.
For triggered campaign experiments, the experiment will run indefinitely until you select a winner, either the control or one of the variations.
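The even random split described above can be sketched in a few lines of Python. The variation names and send counts here are hypothetical, chosen only to illustrate how a uniform random assignment converges to a near-equal split for large sample sizes:

```python
import random

def assign_variation(variations):
    """Randomly pick one of N variations with equal probability.

    For large sample sizes, the counts per variation converge
    toward an even split (law of large numbers).
    """
    return random.choice(variations)

# Hypothetical example: a control plus two variations.
variations = ["control", "variation_a", "variation_b"]
counts = {v: 0 for v in variations}
for _ in range(30000):
    counts[assign_variation(variations)] += 1

# Each variation receives roughly 10,000 of the 30,000 sends.
print(counts)
```

Because the assignment is random rather than strictly round-robin, the per-variation counts will differ slightly on any given run; they are only guaranteed to be close to equal, which matches the behavior described above.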
Blast campaigns only
This type of experiment is only available if Randomly split variations is not enabled. In this option, you can specify a Size of List to Use for Test (default is 20%); this portion of the list is randomly and evenly split across the variations you created and sent messages. For a specified Experiment Time Period (default is 2 hours), performance results will be captured and compared. At the end of the experiment time period, the remainder of the list will automatically be sent the highest-performing variation.
Timezone-based sends will override the message delay from timed experiments. Please uncheck timezone-based sends before starting any timed experiments (as a reminder, this setting is found in Schedule Campaigns).
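The blast test mechanics described above can be sketched as follows. This is a hypothetical helper, not Iterable's implementation: the test fraction of the list is split evenly across the variations, and the remainder is held back to receive the winner after the test period.

```python
import random

def split_blast_experiment(recipients, num_variations, test_fraction=0.20):
    """Split a blast list into a test group (sent the variations now)
    and a holdout (sent the winning variation after the test period).

    Illustrative sketch only; defaults mirror the 20% test size
    described in the documentation.
    """
    shuffled = recipients[:]
    random.shuffle(shuffled)
    test_size = int(len(shuffled) * test_fraction)
    test_group, holdout = shuffled[:test_size], shuffled[test_size:]
    # Round-robin over an already-shuffled list gives a random,
    # even assignment of the test group across variations.
    per_variation = [test_group[i::num_variations] for i in range(num_variations)]
    return per_variation, holdout

per_variation, holdout = split_blast_experiment(list(range(1000)), num_variations=4)
# 20% of 1,000 = 200 test sends, 50 per variation; 800 users held out.
print([len(g) for g in per_variation], len(holdout))
```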
Sends per variation
Triggered campaigns only (including workflow campaigns)
This type of experiment is only available if Randomly split variations is not enabled. For this option, you specify the minimum number of sends per message variation to be used in the exploratory phase of your experiment. The default is 1000.
When the experiment is later launched, it will run in the following progression:
In the exploratory phase, Iterable sends the specified minimum number of sends per variation and uses its multi-armed bandit algorithm to determine which variation performs best.
A winner is determined. 90% of sends go to the winning variation and 10% to the other variations. The experiment runs indefinitely, and Iterable continues to monitor the performance of each variation. If performance changes over time, the winner will be updated.
An experiment is finished when you end the experiment or manually select a winning variation for triggered messages moving forward.
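Iterable's multi-armed bandit algorithm is not public, but the 90/10 behavior described above resembles an epsilon-greedy policy with epsilon = 0.10: most traffic goes to the current best performer, while a small share continues to probe the other variations so the winner can be re-evaluated over time. The sketch below is illustrative, with hypothetical conversion stats:

```python
import random

def choose_variation(stats, epsilon=0.10):
    """Epsilon-greedy sketch of the post-exploration phase.

    `stats` maps variation name -> (conversions, sends).
    90% of picks go to the best-performing variation; 10% are
    spread across the others to keep monitoring performance.
    """
    def rate(v):
        conversions, sends = stats[v]
        return conversions / sends if sends else 0.0

    best = max(stats, key=rate)
    if random.random() < epsilon:
        others = [v for v in stats if v != best]
        return random.choice(others) if others else best
    return best

# Hypothetical stats after the exploratory phase:
# variation_a converts at 8%, the control at 5%.
stats = {"control": (50, 1000), "variation_a": (80, 1000)}
picks = [choose_variation(stats) for _ in range(10000)]
# Roughly 9,000 picks of "variation_a" and 1,000 of "control".
```

Because the conversion rates are recomputed on every pick, a variation whose performance improves later can overtake the current winner, matching the "winner will be updated" behavior described above.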
Send Time Experiments
Send time experiments allow you to send a blast campaign across different send times so you can determine which send time results in the best conversion rates. To do this, create a blast campaign as normal, but click Send Time when setting up your experiment.
Pick a start date and time (including timezone) for the control variation. This should be the earliest send time and must be set to a future date. You can also choose to send the experiment in each user's time zone, if desired.
When you create new variations in the next step, each variation needs to be spaced out by a minimum of 1 hour.
- Updating the control send time using the Update Experiment Details button in the next step will clear all variations.
If your blast campaign uses highly dynamic or frequently changing lists, launch the send time experiment as close to the first send as possible. This is because Iterable queues up all users who should receive the campaign, based on your lists, when you select Start Send Time Experiment... on the campaign launch page.
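The one-hour spacing rule above can be checked with a small helper before creating variations. This is a hypothetical validation function, not part of Iterable's product; the dates are placeholders:

```python
from datetime import datetime, timedelta

def validate_send_times(control_time, variation_times, min_gap=timedelta(hours=1)):
    """Return True if every send time in a send time experiment is
    spaced at least `min_gap` (one hour by default) from its
    nearest neighbor, per the rule described above.
    """
    times = sorted([control_time] + variation_times)
    return all(later - earlier >= min_gap for earlier, later in zip(times, times[1:]))

control = datetime(2024, 6, 1, 9, 0)
ok = validate_send_times(control, [datetime(2024, 6, 1, 10, 0),
                                   datetime(2024, 6, 1, 11, 30)])
bad = validate_send_times(control, [datetime(2024, 6, 1, 9, 30)])
print(ok, bad)  # True False
```

Sorting first means only adjacent pairs need checking: if every neighboring gap is at least an hour, every pair of send times is.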
Create variations
Create a new variation by clicking the New Variation option. Based on the Experiment With criteria you selected, edit the corresponding fields to make each variation unique.
To edit the template, scroll to the bottom of the template, click Save in the upper right-hand corner, and then click the Edit Variation in Template Editor option.
Make edits to your template and be sure to save your changes.
Return to the A/B experiment view to design another variation. Click Save & Review to proceed to the next stage.
Review and launch experiment
Here you can review all of your settings and variations. When you are ready, click Launch to begin your experiment.
If you are designing an experiment based on a campaign that is still in draft status, Iterable will guide you through activating the campaign before you can launch your experiment.
Editing an experiment and selecting a winner
Once an experiment is launched, you can easily check on the status of your experiment to see how each variation is performing in the Experiment Analytics view.
If you have selected an option where Iterable detects and implements the winner, you can add or delete variations at any time during the experiment. To do so, find the Edit option.
Delete poor performing variations by clicking the X. Add variations by clicking Add New Variation.
If you select a winner for the experiment at any time, the chosen variation will be sent to the remaining subscribers after the experiment test period.
If you use the End Experiment option, the remaining sends will get the control template after the experiment test period.