Iterable allows you to A/B test any part of your email, push, and SMS campaigns to optimize for opens, clicks, conversions, or purchases. This overview walks through how to set up an experiment, edit an experiment, and select a winner.
A/B tests can be launched through three different methods:
- In “Experiments” found on the side menu
- In “Campaigns” through Stage 3 of the campaign creation process
- In “Campaigns” by opening up a pre-existing campaign and choosing “Create A/B Test” in the Campaign Analytics page
For each of these approaches, our experiment creator will guide you through four steps.
Step One: Choose a Campaign
Experiments can be applied to both Blast Campaigns and Trigger Campaigns. When you launch the experiment creator through the “Experiments” sidebar, choose between “Blast” and “Triggered” and select the campaign you would like to run an experiment on. Only campaigns with a status of “draft”, “running”, or “ready” may be chosen for an experiment. If you want to run an experiment based on a “finished” campaign, you will need to create a new copy of that campaign by cloning the previous one.
Note: If you launch the experiment through either of the “Campaigns” methods, this step will be skipped.
Step Two: Set Up Your Experiment
In this section, you can choose the parameters and conditions for your experiment.
1. Experiment name:
Specify the title of the experiment for future reference.
2. Experiment with:
Select the component of your campaign for which you will create variations: subject line, preheader text, from name and sender, email body, or everything.
Note that options in this step are limited for non-email message mediums.
3. Optimize for:
Decide which metric you will be optimizing: Opens, Clicks, Purchases, or Custom Conversion.
You will see either Purchases or Custom Conversion, depending on your “Show Revenue” project settings.
- Opens will be affected by the from name, subject line and preheader.
- Clicks, conversions and purchases will be affected by the design of the template.
4. Randomly Split Variations: If toggled, messages for all incoming triggered events will be split randomly and evenly based on the number of variations you create. The sends per variation will be close to equal for large sample sizes. With this setting, a winner will not be automatically chosen. You can compare the performance of the variations and decide to either remain with the control or choose a variation as the winner for subsequent sends.
Note: This option behaves differently depending on whether the experiment is based on a Blast or a Triggered Campaign.
- For Blast Campaign experiments, all sends will go out at once and the experiment is finished soon after it is launched.
- For Triggered Campaign experiments, the experiment will run indefinitely until you select a winner, either the control or one of the variations.
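As an illustration only (not Iterable's actual implementation), the random-split behavior amounts to assigning each incoming send to one of the variations uniformly at random, which is why the sends per variation come out close to equal for large sample sizes:

```python
import random

def assign_variation(num_variations: int) -> int:
    """Pick a variation index uniformly at random for one incoming send."""
    return random.randrange(num_variations)

random.seed(42)

# Simulate 10,000 triggered sends split across 4 variations.
counts = [0] * 4
for _ in range(10_000):
    counts[assign_variation(4)] += 1

# Each variation receives roughly 2,500 sends; the split tightens
# toward exactly even as the sample size grows.
print(counts)
```

No winner is chosen here: you would compare the per-variation performance afterwards, just as the setting above describes.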
5a) Timed Experiment (Blast Campaigns Only)
This type of experiment is only available if “Randomly Split Variations” is not toggled. In this option, you can specify a Size of List to Use for Test (the default is 20%); that portion of the list will be randomly and evenly split across the variations you created and sent messages. For the specified Experiment Time Period (the default is 2 hours), performance results will be captured and compared. At the end of the experiment time period, the remainder of the list will automatically be sent the highest-performing variation.
IMPORTANT: Timezone-based sends will override the message delay from Timed Experiments. Please uncheck timezone-based sends before starting any timed experiment (as a reminder, this setting is found in “Schedule Campaigns”).
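The timed-experiment flow can be sketched as a short simulation. This is a simplified illustration under assumed open rates, not Iterable's implementation; the function and variable names are hypothetical:

```python
import random

def run_timed_experiment(recipients, variations, open_rates, test_fraction=0.20):
    """Sketch of a timed blast experiment: send each variation to an even
    slice of a test group (default 20% of the list), compare simulated open
    counts after the experiment time period, then the remainder of the list
    gets the winning variation. Illustration only."""
    test_size = int(len(recipients) * test_fraction)
    test_group, remainder = recipients[:test_size], recipients[test_size:]

    # Split the test group evenly across the variations.
    slices = {v: test_group[i::len(variations)] for i, v in enumerate(variations)}

    # Simulate opens captured during the experiment time period.
    opens = {
        v: sum(random.random() < open_rates[v] for _ in group)
        for v, group in slices.items()
    }
    winner = max(opens, key=opens.get)
    # In the real product, `remainder` is automatically sent the winner here.
    return winner, len(remainder)

random.seed(7)
recipients = list(range(10_000))
winner, remainder_size = run_timed_experiment(
    recipients, ["A", "B"], open_rates={"A": 0.10, "B": 0.25}
)
print(winner, remainder_size)
```

With a 20% test size, 2,000 of the 10,000 recipients are used for the test (1,000 per variation), and the remaining 8,000 receive the winner.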
5b) Sends per Variation (Triggered Campaigns Only)
This type of experiment is only available if “Randomly split variations” is not toggled. For this option, you specify the minimum number of sends per message variation to be used in the exploratory phase of your experiment. The default is 1,000.
Note: When the experiment is later launched, it will run in the following progression:
- In the exploratory phase, Iterable sends the specified minimum number of sends per variation and uses its multi-armed bandit algorithm to determine which variation performs best.
- Once a winner is determined, 90% of sends will use the winning variation and 10% will go to the other variations. The experiment runs indefinitely, and Iterable continues to monitor the performance of each variation. If performance changes over time, the winner will be updated.
- The experiment is finished when you end it or manually select a winning variation for triggered messages moving forward.
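The post-exploration allocation described above can be sketched as a simple 90/10 rule, in the spirit of an epsilon-greedy bandit. This is a simplification for intuition only; Iterable's actual multi-armed bandit algorithm is not published here, and the names below are illustrative:

```python
import random

def choose_variation(variations, current_winner, exploit_rate=0.90):
    """Sketch of the allocation after a winner is determined: 90% of sends
    use the current winner, 10% are spread over the other variations so
    their performance can still be monitored. Illustration only."""
    if random.random() < exploit_rate:
        return current_winner
    others = [v for v in variations if v != current_winner]
    return random.choice(others)

random.seed(0)
variations = ["control", "B", "C"]
sends = [choose_variation(variations, current_winner="B") for _ in range(10_000)]
print(sends.count("B") / len(sends))  # close to 0.90
```

Keeping 10% of traffic on the non-winning variations is what lets the winner be updated if relative performance shifts over time.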
Step Three: Create Variations
Create a new variation by clicking the ‘New Variation’ option. Based on the “Experiment With” criteria you selected, you can edit the corresponding fields to make each variation unique.
To edit the template, scroll to the bottom of the template, click “Save” in the upper right-hand corner, and then choose the “Edit Variation in Template Editor” option.
Make edits to your template and be sure to save your changes.
Return to the A/B experiment view to design another variation. Click “Save & Review” to proceed to the next stage.
Step Four: Review and Launch Experiment
Here you can review all of your settings and variations. When you are ready, click “Launch” to begin your experiment.
Note: If you are designing an experiment based on a campaign that is still in draft status, Iterable will guide you to activate the campaign before you can launch your experiment.
Once an experiment is launched, you can easily check on the status of your experiment to see how each variation is performing in the Experiment Analytics view.
If you have selected an option where Iterable detects and implements the winner, you can add or delete variations at any time during the experiment. To do so, find the “Edit” option.
Delete poor-performing variations by clicking the “X”. Add variations by clicking “Add New Variation”.
If you select a winner for the experiment at any time, the chosen variation will be sent to the remaining subscribers after the experiment test period.
If you use the “End Experiment” option, the remaining sends will get the control template after the experiment test period.