For your experiments to be as effective as possible, there are a few things to think about before you create them.
Consider your goal
Your goal for an experiment affects everything about its design: the campaign you send, the variations you create, and the way you interpret the results.
To get clarity about what you want to achieve, define an objective for the experiment and a hypothesis against which you can measure the benefit of a given variation.
For example, if you want to increase opens, your hypothesis might be, "Messages with emojis are opened more frequently than those without them." When the experiment ends, compare the variation's performance against your hypothesis to see whether it held true.
TIPS
After defining an objective and hypothesis for your experiment, write them down. This way, you can refer to them when you review the experiment's performance.
If there's a minimum level of improvement you expect to see, it can be helpful to include it in your hypothesis, as it provides a metric against which you can measure success. The previous example might read, "Messages with emojis are opened 25% more often than those without them." Variations that meet or exceed the stated metric might be candidates to replace the control campaign.
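To make the target concrete, here's a minimal sketch (in Python) of how you might check a variation against that 25% threshold when the experiment ends. The open rates are made-up numbers for illustration:

# A minimal sketch of checking a hypothesis like "opened 25% more often."
# The open rates below are made-up numbers.
control_open_rate = 0.20    # 20% of the control group opened the message
variation_open_rate = 0.26  # 26% of the variation group opened it

# Relative lift: how much more often the variation was opened than the control.
lift = (variation_open_rate - control_open_rate) / control_open_rate
print(f"Relative lift: {lift:.0%}")  # prints "Relative lift: 30%", which meets the target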
Select a campaign
The campaign you choose as a baseline should align with the objective of the experiment. For example, if you're experimenting with ways to increase first-time purchases, you could experiment with a variation of a welcome campaign that expands the existing message with a discount on a user's first purchase.
When choosing a campaign, consider whether you want to improve the performance of a campaign that's running, or test different approaches for a new campaign. Both are possible—we've got you covered!
Recurring campaigns
Creating an experiment from a recurring campaign does not create a recurring experiment. If you create an experiment for a recurring campaign, the experiment runs for that specific campaign only, not for any subsequent campaigns in the series.
Experiments for any campaign in a recurring series (parent or child) must be created before that campaign is sent, and apply only to its first send. If you're setting up a recurring campaign that you intend to experiment with, give yourself enough time to set up the experiment (and any related templates) by scheduling the campaign to be created well ahead of its scheduled send time.
Recipient timezone-based campaigns
If you're creating a time-based experiment using a campaign that's set up to use a recipient's timezone, be sure to also select Send in user's time zone when you're setting up the experiment.
Decide what to experiment with
Depending on your objective, consider the following general guidelines when deciding what to experiment with.
NOTE
The options available when setting up an experiment depend on your campaign's message medium. The following options are available for email campaigns.
To improve email opens and open rates...
Experiment with parts of an email such as the from and sender name, subject line, preheader text, and send time.
Choose from these email fields if you're testing opens and open rates:
From and sender name - Usually the first field a user sees, so it's important that it creates the right impression.
Subject line - A primary point of focus that affects whether users open the message. If you want to influence opens, this is a good place to start.
Preheader text - Text that follows your subject line can be an easy place to add value. Use it to extend a message, or to add something new.
Send time - One of the most important contributors to opens. Be sure to send your message at a date and time that's appropriate for the majority of your users. It can be helpful to create more than one variation when testing send time.
See Campaign Variation Types for some suggestions on creating effective variations.
To test improvements to clicks, purchases, or custom events...
Create an experiment that tests variations of the message body, the part of your message that most influences user actions like clicks, purchases, and custom events (including things like clicking through to your website, signing up for a service, or even unsubscribing). It's a great place to start experimenting when you want to influence a specific conversion metric.
Experimenting with send times is another useful test when assessing impact to clicks, purchases, and other user events.
See Campaign Variation Types for some suggestions on the types of changes you might consider including in a variation.
To evaluate Send Time Optimization on open rates...
Create a Send Time Optimization experiment. STO experiments use STO-optimized times for half of an email or push notification blast campaign's recipients, and the campaign's configured send time for everyone else.
See Creating Campaign Variations and Launching Experiments for information about creating this type of experiment.
NOTE
Triggered email and push notification campaigns, and Send Email and Send Push journey tiles, can't use Send Time Optimization experiments.
To assess the impact of sending a campaign...
Set up one of the types of experiments mentioned in this article with a holdout group—a group of users who won't receive the campaign, but for whom Iterable still tracks conversions. Then, compare the holdout group's conversion rate to a campaign's conversion rate to evaluate the campaign's effectiveness.
See Setting up an Experiment for some suggestions on creating holdout groups.
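To illustrate the comparison, here's a minimal sketch in Python; all of the counts are made-up numbers:

# A minimal sketch of comparing a campaign's conversion rate to its
# holdout group's. All counts below are made-up numbers.
campaign_recipients = 9_000
campaign_conversions = 450
holdout_size = 1_000
holdout_conversions = 30

campaign_rate = campaign_conversions / campaign_recipients  # 0.05, or 5.0%
holdout_rate = holdout_conversions / holdout_size           # 0.03, or 3.0%

# If the campaign's rate clearly exceeds the holdout's, the send itself is
# driving conversions rather than riding on baseline user behavior.
print(f"Campaign: {campaign_rate:.1%} vs. holdout: {holdout_rate:.1%}")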
NOTE
Holdout groups can only be used in experiments associated with blast campaigns. They can't be used with triggered or journey campaigns.
Pick start and stop times
In general, send experimental campaigns during periods of average user activity. Sending variations of an important campaign during a busy time of year might impact your business goals or skew the experiment's results. For example, you might not want to experiment with a revenue-generating campaign during a busy holiday shopping season.
Be sure to let an experiment run long enough that users have time to engage with it and that you gather enough days' worth of data to analyze its results.
API considerations
When experimenting with a campaign that uses the POST /api/events/track and POST /api/commerce/trackPurchase endpoints, remember that:

When possible, Iterable automatically attributes an incoming purchase or custom conversion event to a campaign (including its experiment variations) while the campaign has an active attribution period.

When you call Iterable's API, be sure to provide the templateId in addition to the campaignId. Iterable uses the templateId field to attribute a purchase or custom event to the right campaign variation, because each variation in an experiment has a unique templateId. (For an example, see the sketch after this list.)

Iterable associates an incoming purchase or custom conversion event with an active holdout group based on the user and the holdout group's attribution period.
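For illustration, here's a minimal sketch (in Python) of a POST /api/events/track call that includes both IDs. The API key, event name, and ID values are placeholders; adjust them to match your project:

import requests

# A minimal sketch of tracking a custom conversion event against a specific
# experiment variation. All values below are placeholders.
API_KEY = "YOUR_API_KEY"  # placeholder: a server-side Iterable API key

response = requests.post(
    "https://api.iterable.com/api/events/track",
    headers={"Api-Key": API_KEY, "Content-Type": "application/json"},
    json={
        "email": "user@example.com",      # the user who converted
        "eventName": "signedUpForTrial",  # hypothetical custom event name
        "campaignId": 123456,             # placeholder: the experiment's campaign
        "templateId": 654321,             # placeholder: identifies the exact variation
    },
)
response.raise_for_status()

The same campaignId and templateId pair applies when you report purchases with POST /api/commerce/trackPurchase.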
For more tips on tracking and attribution, see our guides on Tracking Custom Conversions and Tracking Purchases and Revenue.
NOTE
If you're using a purchase or custom conversion event, also be sure that your project settings allow and save that event. For more information, see Managing Custom Events.
Want to learn more?
For pointers and best practices on setting up an effective A/B test in Iterable, check out Kath Pay's blog post, Successful Split Tests: 4 Steps to Effective Email Experiments.