After you create a new experiment, the Experiment Setup page appears, where you can define what you're experimenting with, the portion of your user list to include in an experiment, and how to pick a winner.
The options in this article are for an email experiment. The available options vary, depending on the campaign's channel type.
Experiment setup options
On the Experiment Setup page:
Experiment name
Provide a name that makes this experiment easy to search for and identify among others.
Experiment with
Select the option you chose when you planned your experiment (see Planning an Experiment). This option is used for all of the variants you create for a given experiment.
NOTES
- Send Time experiments let you test different send times and are available only for blast campaigns. They're sent to the entire user list, with all variants distributed equally.
- Send Time Optimization experiments let you test the effectiveness of campaigns that are sent using the STO feature. They're available only for blast campaigns and require no setup. STO experiments are automatically sent to half of the user list at the optimized time, and to the other half at the campaign's configured send time. You can't choose a winner for an STO experiment.
If Send Time Optimization isn't visible, your Iterable project doesn't have enough historical user engagement data to calculate optimized send times.
Experiment type (send time)
For Send Time experiments, specify when to send the control campaign for the experiment. Variants are sent to your entire user list, randomly and in equal proportions. You can't choose to send variants a minimum number of times or to a subset of users.
If you choose Send in user's time zone and a user doesn't have a time zone specified, the experiment is sent to that user based on the campaign's time zone (see the sketch after the tip below).
TIP
If your campaign's send lists have highly dynamic membership, use a send time that's as close as possible to your campaign's first send, so that the list's membership at the start of the experiment resembles its membership at the start of the campaign.
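To make the time zone fallback concrete, here's a minimal sketch of the rule (hypothetical function and parameter names; not Iterable's implementation):

```python
from zoneinfo import ZoneInfo

def effective_send_timezone(user_timezone: str | None, campaign_timezone: str) -> ZoneInfo:
    """Illustrative fallback: use the user's time zone when one is set;
    otherwise fall back to the campaign's configured time zone."""
    return ZoneInfo(user_timezone or campaign_timezone)

# A user with no time zone on file is scheduled in the campaign's time zone.
print(effective_send_timezone(None, "America/Los_Angeles"))
```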
Experiment type (not send time)
For all other experiment types, use Randomly split variants to specify what portion of your user list to include in the experiment. Whichever option you choose, the control campaign and variants are sent in roughly equal proportions.
- To send to an entire user list, set Randomly split variants to Yes.
With this setting, blast campaign experiments end when all campaign messages are sent, and triggered campaign experiments run until you select a winner. You must select a winner; Iterable doesn't pick one for you.
- To send to a subset of users before sending to the whole list, set Randomly split variants to No.
For triggered and journey campaigns:
- Sends per variant defines the minimum number of times Iterable sends each variant (in round-robin order) before selecting a winner. When a winner is picked, it's used for 90% of the campaign's future sends; the other variants share the remaining 10% equally (see the sketch at the end of this section). Iterable monitors variants on an ongoing basis, adjusting the distribution of sends until the experiment completes.
For blast campaigns:
- Percentage of list to include in experiment defines the percentage of a send list to which Iterable sends this experiment.
- Select winning experiment variant after specifies how long Iterable sends campaign messages before selecting a winning variant. After this period, Iterable selects a winner for the campaign's remaining messages. Be sure the time you specify gives recipients enough time to interact with the campaign.
IMPORTANT
If you're using this option, don't schedule your campaign to send with respect to recipients' time zones. Iterable needs to be able to select a winner at the end of the experiment's time period.
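To make the post-winner distribution concrete, here's a minimal sketch of a 90/10 weighted pick (hypothetical names; an illustration of the split described above, not Iterable's implementation):

```python
import random

def pick_variant_after_winner(winner: str, variants: list[str]) -> str:
    """Illustration of the split described above: the winner receives 90%
    of future sends; the remaining variants share the other 10% equally."""
    others = [v for v in variants if v != winner]
    weights = [0.9 if v == winner else 0.1 / len(others) for v in variants]
    return random.choices(variants, weights=weights, k=1)[0]

# With a control and two variants, the winner gets ~90% of sends and the
# other two get ~5% each.
print(pick_variant_after_winner("variant_b", ["control", "variant_a", "variant_b"]))
```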
Holdout Groups
Set Holdout Groups to Yes to set aside a portion of a send list whose users don't receive the blast campaign but whose conversions are still tracked. This lets you compare the conversion performance of users who don't receive a campaign with that of users who do.
- Percentage of the list specifies what percentage of users won't receive the campaign, including variants. Instead, each of these users receives a Send Skip event (sketched after the notes below) with:
  - eventName set to emailSendSkip, pushSendSkip, smsSendSkip, inAppSendSkip, or webPushSendSkip
  - campaignId set to your campaign's ID
  - reason set to ExperimentHoldOut
- Attribute all custom conversion events specifies for how many hours Iterable attributes custom conversions by users in the holdout group to this experiment. After this period, conversions by members of the holdout group no longer count toward the group's overall performance.
A holdout group's attribution window can differ from the campaign's attribution window.
NOTES
- To add a holdout group to an experiment, you must be experimenting with a campaign that includes a conversion event.
- Holdout groups aren't available when you experiment with Send Time or Send Time Optimization, and they're available only for blast campaigns (not triggered or journey campaigns).
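Here's a sketch of the Send Skip event described above. The eventName, campaignId, and reason values come from this article; the surrounding structure is an assumption, not Iterable's exact event schema:

```python
# Illustrative shape of a Send Skip event for a holdout-group user in an
# email experiment. The field values follow this article; the structure as
# a whole is an assumption, not Iterable's exact schema.
send_skip_event = {
    "eventName": "emailSendSkip",  # or pushSendSkip, smsSendSkip,
                                   # inAppSendSkip, or webPushSendSkip
    "campaignId": 1234567,         # hypothetical campaign ID
    "reason": "ExperimentHoldOut",
}
```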
Select winner based on
Under Select winner based on, choose the metric Iterable uses to determine the best-performing variant.
Iterable selects the winner by dividing the following metrics by the total number of messages sent:
- Opens: Unique Opens or Clicks / Total Sends
- Clicks: Unique Clicks / Total Sends
- Custom conversions: Unique Conversions (for the chosen custom event) / Total Sends. To optimize on purchases, select the iterable_purchase event.
See Metrics Definitions for information about these metrics.
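As a worked illustration of these ratios (hypothetical numbers and function names; not Iterable's implementation):

```python
def winner_by_rate(unique_events: dict[str, int], total_sends: dict[str, int]) -> str:
    """Pick the variant with the highest unique-events-per-send ratio,
    mirroring the metrics above (e.g., Unique Clicks / Total Sends)."""
    return max(unique_events, key=lambda v: unique_events[v] / total_sends[v])

# Hypothetical results: control drew 120 unique clicks from 1,000 sends
# (12%); variant_a drew 150 from 1,000 (15%), so variant_a wins on Clicks.
print(winner_by_rate(
    unique_events={"control": 120, "variant_a": 150},
    total_sends={"control": 1000, "variant_a": 1000},
))
```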
NOTES
Custom conversions appears only if the selected campaign has conversion events. If you're using a custom conversion event as a success metric for your experiment, be sure that you're passing campaignId and templateId to Iterable so the user action that the custom event represents is attributed to the correct campaign.
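For example, here's a minimal sketch of passing those IDs when reporting a custom conversion event through Iterable's events/track API (all values are placeholders; check Iterable's API documentation for the full request schema):

```python
import requests

# Minimal sketch: report a custom conversion event to Iterable with the
# campaignId and templateId needed to attribute it to the right campaign.
# All values are placeholders; see Iterable's API docs for the full schema.
response = requests.post(
    "https://api.iterable.com/api/events/track",
    headers={"Api-Key": "YOUR_API_KEY"},
    json={
        "email": "user@example.com",
        "eventName": "newSubscription",  # hypothetical custom conversion event
        "campaignId": 1234567,           # campaign to attribute the event to
        "templateId": 7654321,           # template (variant) to attribute to
    },
    timeout=10,
)
response.raise_for_status()
```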
Refer to Experiment Summary beneath your selection to see how winning variants and experiment variants will be distributed.
Next steps
When you're done, click Create (in the upper-right corner). You'll be taken to the Create variants page so you can finish defining your experiment.
See Creating Variants and Launching an Experiment for next steps.