Communicating with your users is a tricky business. There are so many variables involved in creating the right message, at the right time, for the right audience. Fortunately, Iterable experiments can help you determine, with confidence, which of your ideas users respond to best and which versions of a campaign truly drive conversions.
This article explains how to experiment with Iterable campaigns so you can create experiences that users respond to favorably, while promoting the outcomes you seek.
Experimentation overview
Iterable experiments use A/B testing: you apply a specific change to a variation of an original (control) campaign, then run both campaigns to see which is more effective.
Let's consider a case where you've created a welcome campaign that new users really respond to. You don't have a lot of time to craft new messaging, but wonder whether simply adding emojis to the subject line might lead to even more opens.
To test whether users respond favorably to this change before rolling it out to all users who receive your welcome campaign, set up an experiment that sends the original campaign to one group of users and a variation with an emoji in the subject line to another. After the campaigns run for a while, evaluate the impact of the change by reviewing the performance metrics on the Experiment Analytics page. If the variation earns more opens than the original campaign, consider using it as your new welcome message.
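If you'd rather pull experiment results programmatically than read them on the Experiment Analytics page, Iterable's API exposes experiment metrics. The following is a minimal sketch, assuming a valid server-side API key passed in the `Api-Key` header and the `GET /api/experiments/metrics` endpoint from Iterable's API documentation; the campaign ID is a placeholder, and the response is printed raw since its exact format can vary.

```python
import requests

# Placeholder values: substitute your own API key and campaign ID.
API_KEY = "YOUR_ITERABLE_API_KEY"
CAMPAIGN_ID = 123456

# Fetch experiment metrics for a campaign (assumed endpoint; see
# Iterable's API documentation for the parameters your project supports).
response = requests.get(
    "https://api.iterable.com/api/experiments/metrics",
    headers={"Api-Key": API_KEY},
    params={"campaignId": CAMPAIGN_ID},
)
response.raise_for_status()
print(response.text)
```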
Use Iterable experiments to:
Test the effect of certain parts of a message (such as the from name, subject line, or preheader) on opens and open rates.
Find a send time that maximizes opens and conversions.
Evaluate the potential to improve conversions with updates to a message's body.
Assess the impact of Send Time Optimization on opens and conversions.
Compare conversion rates for a campaign to those of users who don't receive it (a holdout group; see the sketch after this list).
Create a variation of a campaign that you can use for future sends if it outperforms your original control campaign.
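Conceptually, an experiment with a holdout partitions your audience into buckets before sending. Iterable manages this split for you, but a minimal sketch of one common approach (deterministic hash-based bucketing; the group names and percentages here are illustrative, not Iterable's internals) looks like this:

```python
import hashlib

def assign_group(user_id: str, holdout_pct: float = 0.10) -> str:
    """Deterministically bucket a user into control, variation, or holdout.
    (Conceptual illustration only: Iterable handles assignment for you.)"""
    # Hash the user ID to a stable number in [0, 1].
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF

    if bucket < holdout_pct:
        return "holdout"  # excluded from the send entirely
    # Split the remaining users evenly between control and variation.
    midpoint = holdout_pct + (1 - holdout_pct) / 2
    return "control" if bucket < midpoint else "variation"

for uid in ("user-001", "user-002", "user-003"):
    print(uid, "->", assign_group(uid))
```

Hashing the user ID (rather than assigning randomly at send time) keeps each user in the same group across repeated sends, which is what makes before-and-after comparisons meaningful.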
Required permissions
When working with experiments, you must have the following permissions:
To view the settings for your Iterable project, you need the Project Configuration > Project Settings permission.
To create campaigns and experiments, you need the Workflows, Campaigns & Experiments > Draft permission.
To schedule or activate campaigns, you need the Workflows, Campaigns & Experiments > Activate & Manage permission.
Terms you should know
As you work with experiments in Iterable, understanding these terms will be helpful:
| Term | What it means |
| --- | --- |
| Confidence | The likelihood that the difference in conversion rates between a given variation and the control isn't due to chance. |
| Control campaign or template | The original campaign or template on which you base a variation. |
| Holdout group | A group of users you exclude from a campaign, used to compare the results of sending a campaign against not sending it. |
| Improvement score | The percentage improvement a campaign variation produced compared to the control campaign. |
| Improvement interval | Provided with the improvement score, this interval reflects the range of conversion rates you can expect (based on the selected metric) if a variation is sent to all users. For example, if an experiment has an open rate of 50% for a subset of users, ± 2 means you can expect an open rate of 48% to 52% if the selected variation is sent to all users. |
| Variation | A version of the campaign you want to use for A/B testing. |
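Iterable calculates confidence and the improvement score for you, but it can help to see the statistics these terms describe. Below is a simplified sketch using a standard two-proportion z-test; this is a common approach for A/B metrics, not a description of Iterable's exact method, and the conversion counts in the example are made up.

```python
from math import erf, sqrt

def ab_test_summary(control_conversions, control_sends,
                    variation_conversions, variation_sends):
    """Confidence and improvement score for a variation vs. a control,
    estimated with a standard two-proportion z-test."""
    p_control = control_conversions / control_sends
    p_variation = variation_conversions / variation_sends

    # Pooled rate and standard error of the difference in rates.
    pooled = ((control_conversions + variation_conversions)
              / (control_sends + variation_sends))
    se = sqrt(pooled * (1 - pooled)
              * (1 / control_sends + 1 / variation_sends))
    z = (p_variation - p_control) / se

    # Confidence: probability the observed difference isn't due to chance.
    confidence = erf(abs(z) / sqrt(2))

    # Improvement score: percentage change relative to the control.
    improvement = (p_variation - p_control) / p_control * 100
    return confidence, improvement

# Example: 5.0% control open rate vs. 5.6% for the variation.
confidence, improvement = ab_test_summary(500, 10_000, 560, 10_000)
print(f"confidence: {confidence:.1%}, improvement score: {improvement:+.1f}%")
```

In this example the variation shows a 12% improvement score with roughly 94% confidence, so you might let the experiment run longer before declaring a winner.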