After you create a new experiment and add variants to it, configure the settings that control what portion of your user list to include in an experiment and how to measure variant performance to pick a winning variant.
WARNING
If you're running an experiment with a campaign that also has STO enabled, ensure that the experiment duration is shorter than the STO window and that the STO window minus the experiment duration is at least six hours.
For example, you should not pair an STO window of eight hours with an experiment window of three hours, because the five-hour difference doesn't satisfy the six-hour requirement.
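To make the rule concrete, it reduces to a single inequality. Here's a minimal sketch (the helper name is hypothetical; Iterable enforces this rule itself):

```python
def sto_experiment_compatible(sto_window_hours: float, experiment_hours: float) -> bool:
    """The STO window must be longer than the experiment duration,
    with at least a six-hour difference between the two."""
    return sto_window_hours - experiment_hours >= 6

# The example above: an 8-hour STO window with a 3-hour experiment
# leaves only a 5-hour difference, so it fails the check.
assert not sto_experiment_compatible(8, 3)
assert sto_experiment_compatible(10, 3)
```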
# Configure experiments other than send time experiments
Configure these settings from the Variant setup page.
- (Optional, blast campaigns only) Choose Add holdout group to set aside a group of users who won't receive the campaign, so you can compare their conversion performance to that of users who do receive it.
  If you add a holdout group, specify these additional settings:
  - Set Holdout group size to the percentage of users who won't receive the campaign (including any of its variants).
  - Set Attribution window to the number of hours during which conversions by users in the holdout group are attributed to this experiment. After this period, conversions by members of the holdout group no longer count towards overall performance.
  See Holdout group considerations for more information.
- For Test strategy, choose one of the following:
  - Choose Test with all users to send the campaign to 100% of your user list, with all variants randomly and equally distributed within the test group.
  - Choose Test with a subset of users and optimize to test with a portion of the audience before selecting the best-performing variant, which Iterable then sends to all remaining users.
  See Test strategy considerations for information about the differences between blast and triggered campaigns.
- For Winning metric, choose the metric Iterable should use to measure variant performance and, thereby, select a winning variant. See Winning metric considerations for more information.
- For Test duration (blast campaigns only), specify how long Iterable should wait after sending to the test group before selecting a winning variant. After this period of time, Iterable selects a winner for the campaign's remaining messages. Be sure that the time you specify gives campaign recipients enough time to interact with the message.
IMPORTANT
If you're using this option, make sure not to schedule your campaign to send in recipients' local time zones. Iterable needs to be able to select a winner at the end of the experiment's test duration.
- Click Save and Exit. You'll return to the Campaign window.
- In the Campaign window, verify that the status listed under the campaign name at the top of the page is Draft. If the status is Scheduled, you're adding an experiment to a campaign that's already scheduled to send, and you don't need to complete the next step.
- Click Review, then click either Send Now or Schedule to pick a time to send the campaign.
Your experiment will launch when the campaign starts being sent.
# Configure send time experiments
You configured Start date and time when you set up your campaign variants. Now, configure the following settings for your send time experiment.
Complete these steps from the Variant setup page.
- For Winning metric, choose the metric Iterable should use to measure variant performance and, thereby, select a winning variant. See Winning metric considerations for more information.
- For Start timezone, provide the time zone to apply to the send time.
- Choose Send in user's timezone to send the experiment in each user's time zone. If a user doesn't have a time zone specified, the experiment is sent based on the start time zone. If the campaign associated with the experiment is set up to use recipients' time zones, be sure to select this setting.
- Click Save and Exit. You'll return to the Campaign window.
- In the Campaign window, verify that the status listed under the campaign name at the top of the page is Draft. If the status is Scheduled, you're adding an experiment to a campaign that's already scheduled to send, and you don't need to complete the next step.
- Click Review, then click Start Send Time Experiment and confirm that you want to Send Campaign.
Your campaign and experiment will be sent based on the send times you specified.
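For reference, the time zone fallback behind the Send in user's timezone option boils down to a couple of lines of logic. This is a minimal sketch with hypothetical names; Iterable resolves the time zone internally:

```python
def resolve_send_time_zone(user_time_zone: str | None,
                           start_time_zone: str,
                           send_in_user_time_zone: bool) -> str:
    """Pick the time zone applied to a user's send time: the user's own
    time zone when it's set and the option is enabled, otherwise the
    experiment's start time zone."""
    if send_in_user_time_zone and user_time_zone:
        return user_time_zone
    return start_time_zone
```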
# Additional configuration considerations
# Holdout group considerations
Keep these considerations in mind when using holdout groups in your experiments:
- A holdout group's attribution window can differ from a campaign's attribution window.
- Users in a holdout group receive a Send Skip event with `eventName` set to `SendSkip`, `campaignId` set to your campaign's ID, and `reason` set to `ExperimentHoldOut`.
- To add a holdout group to an experiment, you must be experimenting with a campaign that includes a custom conversion event.
- Holdout groups aren't available when you experiment with send time or Send Time Optimization.
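If you consume these events (for example, through a system webhook), you can identify holdout-group members by filtering on the fields above. A minimal sketch, assuming the event arrives as a JSON object with those field names; verify the payload shape against your webhook's actual output:

```python
def is_holdout_skip(event: dict, campaign_id: int) -> bool:
    """Return True for Send Skip events generated for holdout-group members
    of the given campaign. Field names follow the event described above;
    the exact payload shape is an assumption."""
    return (
        event.get("eventName") == "SendSkip"
        and event.get("campaignId") == campaign_id
        and event.get("reason") == "ExperimentHoldOut"
    )
```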
# Test strategy considerations
Testing with all users
- For triggered and journey campaigns, as users get triggered into the campaign, Iterable randomly assigns them to an experiment variant.
- For blast campaigns, test variants are used for 100% of the campaign send list, with sends of each variant distributed evenly among all users.
In either case, the experiment will run until you manually end the experiment or declare a winning variant.
Testing with a subset of users
- For triggered and journey campaigns, specify the Minimum sends per variant. As users get triggered into the campaign, Iterable randomly assigns them to an experiment variant. Once each variant has been sent the specified minimum number of times, Iterable determines the best-performing variant and declares it the current winner for the experiment. Iterable continues to monitor the experiment to determine whether user preferences change and a new winning variant should be declared. To learn more, see Triggered campaign experiments.
- For blast campaigns, specify the Test group size (default, 20%). With this strategy, Iterable runs the experiment on a subset of the audience for the amount of time you define (default, 2 hours) before automatically selecting a winning variant and sending that variant to the remaining users (the optimized group). For example, in an experiment with a control and a single variant and the default test group size, each variant is sent to 10% of the audience, and the winner is then sent to the remaining 80% (see the sketch below).
TIP
For help deciding the best size for the test group, read Planning an Experiment.
Alternatively, you can end the experiment or declare a winning variant yourself. See Manually selected experiment winners for more info.
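To make the split arithmetic concrete, here's a minimal sketch of how a blast audience divides under this strategy. The helper name is hypothetical; the even split across test variants follows from the description above:

```python
def blast_test_split(audience_size: int, num_variants: int,
                     test_group_pct: float = 0.20) -> dict:
    """Split a blast campaign's audience for a subset test: the test group
    is divided evenly across variants (control included), and everyone
    else waits in the optimized group for the winning variant."""
    test_group = int(audience_size * test_group_pct)
    per_variant = test_group // num_variants
    return {
        "per_variant": per_variant,
        "optimized_group": audience_size - per_variant * num_variants,
    }

# Control plus one variant, 100,000 recipients, default 20% test group:
# each variant goes to 10,000 users; 80,000 wait for the winner.
print(blast_test_split(100_000, num_variants=2))
```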
# Winning metric considerations
Iterable uses the metric you specify as the winning metric to measure variant performance and, thereby, select a winning variant. Iterable selects the winner by dividing the following metrics by the total number of messages sent:
- Opens: `Unique Opens or Clicks` / `Total Sends`
- Clicks: `Unique Clicks` / `Total Sends`
- Custom conversions: `Unique Conversions` (for the chosen custom event) / `Total Sends`. To optimize on purchases, select the `iterable_purchase` event.
See Metrics Definitions for information about these metrics.
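For illustration, each winning metric is a simple ratio, and the variant with the highest ratio wins. A minimal sketch with hypothetical names; Iterable computes these rates for you:

```python
def winning_variant(variants: dict[str, dict]) -> str:
    """Compute each variant's rate (unique events / total sends) and
    return the name of the variant with the highest rate."""
    rates = {
        name: stats["unique_events"] / stats["total_sends"]
        for name, stats in variants.items()
    }
    return max(rates, key=rates.get)

# Unique Clicks / Total Sends for a control and one variant:
winner = winning_variant({
    "control":   {"unique_events": 450, "total_sends": 10_000},  # 4.5% click rate
    "variant_a": {"unique_events": 520, "total_sends": 10_000},  # 5.2% click rate
})
print(winner)  # variant_a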
NOTES
Custom conversions appear only if the selected campaign has conversion events.
If you're using a custom conversion event as a success metric for your experiment and you've chosen to attribute conversions by `campaignId` in the campaign setup, be sure that you're passing `campaignId` and `templateId` to Iterable so that the user action the custom event represents is attributed to the correct campaign and variant.
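For example, Iterable's events/track API endpoint accepts `campaignId` and `templateId` fields on a custom event. Here's a minimal sketch (the API key and values are placeholders; confirm field names against the current API reference):

```python
import requests

ITERABLE_API_KEY = "YOUR_API_KEY"  # placeholder; use your project's API key

def track_conversion(email: str, event_name: str, campaign_id: int, template_id: int):
    """Track a custom conversion event, passing campaignId and templateId so
    Iterable can attribute it to the correct campaign and variant."""
    response = requests.post(
        "https://api.iterable.com/api/events/track",
        headers={"Api-Key": ITERABLE_API_KEY},
        json={
            "email": email,
            "eventName": event_name,
            "campaignId": campaign_id,   # the experiment's campaign ID
            "templateId": template_id,   # identifies the variant's template
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()
```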
# Send time considerations
Keep these considerations in mind for send time variants:
- You can't choose to send variants a minimum number of times or to a subset of users.
- The control variant must have the earliest send time.
- All send times must be in the future and must be at least an hour apart.
- If your campaign's send lists have highly dynamic membership, use a send time that's as close to your campaign's first send as possible, so that the list's membership at the experiment's start is similar to its membership at the start of the campaign.
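Taken together, these rules amount to a few checks. A minimal sketch with hypothetical names; Iterable enforces these constraints in the campaign setup UI:

```python
from datetime import datetime, timedelta

def validate_send_times(send_times: list[datetime], now: datetime) -> None:
    """Check the send time rules described above: the control (first listed)
    variant is earliest, every send time is in the future, and send times
    are at least an hour apart."""
    ordered = sorted(send_times)
    if send_times[0] != ordered[0]:
        raise ValueError("The control variant must have the earliest send time.")
    if ordered[0] <= now:
        raise ValueError("All send times must be in the future.")
    for earlier, later in zip(ordered, ordered[1:]):
        if later - earlier < timedelta(hours=1):
            raise ValueError("Send times must be at least an hour apart.")
```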
# Next steps
See Launching an Experiment if your experiment uses a triggered campaign and you're not sure whether it needs to be launched.