In some cases, Iterable chooses a winner for an experiment based on criteria you've defined. Other times, you might have to manually choose a winner yourself.
This article explains how a winning variant is selected for an experiment.
Metrics used to select a winning variant
When you're setting up an experiment, the value you choose for Select winner based on determines how lift is calculated for a test variant relative to the control. The resulting data points are calculated with the following metrics:
- Unique Opens or Clicks / Total Sends
- Custom conversions: Unique Conversions / Total Sends, where Unique Conversions represents a selected custom event (such as the iterable_purchase event, to optimize on purchases).
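The metrics above can be sketched as a simple rate calculation. This is an illustrative example only; the function name and inputs are assumptions for the sketch, not part of Iterable's API.

```python
# Illustrative sketch: a winner-selection metric is the count of unique
# events (opens, clicks, or custom conversions) divided by total sends.
# Names here are hypothetical, not Iterable's actual code.

def metric_rate(unique_events: int, total_sends: int) -> float:
    """Return the per-send rate for a variant, guarding against zero sends."""
    if total_sends == 0:
        return 0.0
    return unique_events / total_sends

# For example, a variant with 150 unique iterable_purchase conversions
# out of 1,000 total sends:
print(metric_rate(150, 1000))  # 0.15, i.e., a 15% conversion rate
```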
Iterable-selected experiment winners
When you set up an experiment to send variations to a subset of users rather than to a whole user list, the way that Iterable selects a winning variant and sends the campaign to remaining users varies, depending on whether the campaign is a blast campaign or a triggered campaign.
Whether an experiment includes blast or triggered campaigns:
- The lift value in Experiment Analytics shows by how many percentage points the winning variant outperformed the control campaign.
- Iterable doesn't consider the lift value when selecting a winning variant for an experiment.
Blast campaign experiments
Iterable sends the winning variant for all future sends of an active campaign after a user-specified amount of time has passed. This variant becomes the new template for the associated campaign.
Triggered campaign experiments
Iterable sends the currently winning variant throughout the duration of the experiment, using a multi-armed bandit approach that assigns users to the control campaign or to a test variant campaign, round-robin style. This assignment continues until the minimum number of sends per variant is met. Iterable then identifies the best-performing variant and optimizes the campaign by sending the currently winning variant in 90% of the campaign's future sends, with the other variants receiving the remaining 10% for continued testing.
Iterable continues to monitor variants for the duration of the experiment, selecting a new winner if new user preferences surface. If a new winning variant is found, it becomes the current winner, and 90% of new users will start to receive that variant.
During this phase of the experiment, a variant shows a "Winner found" status on the Experiment Index page and a "Currently winning" status on the Experiment Details page. This status doesn't necessarily mean that the variant will be the overall winner, only that it's the most recently selected winner in the multi-armed bandit flow. Once the experiment finishes and a winner is selected, the statuses update to "Finished" and "Winner," respectively.
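The assignment flow described above can be sketched roughly as follows. The class, thresholds, and exact mechanics (for example, how the remaining 10% is split among non-winning variants) are illustrative assumptions for the sketch, not Iterable's actual implementation.

```python
# Rough sketch of the triggered-campaign flow: round-robin assignment until
# each variant reaches a minimum number of sends, then 90% of sends go to
# the currently winning variant and 10% to the others for continued testing.
# This is a simplified illustration, not Iterable's code.
import itertools
import random

class TriggeredExperiment:
    def __init__(self, variants, min_sends_per_variant):
        self.variants = list(variants)
        self.min_sends = min_sends_per_variant
        self.sends = {v: 0 for v in self.variants}
        self.conversions = {v: 0 for v in self.variants}
        self._round_robin = itertools.cycle(self.variants)
        self.current_winner = None

    def assign(self):
        """Pick the variant to send to the next user."""
        if any(n < self.min_sends for n in self.sends.values()):
            # Phase 1: round-robin until every variant hits the minimum sends.
            variant = next(self._round_robin)
        else:
            # Phase 2: re-evaluate the currently winning variant by conversion
            # rate, so a new winner can surface as user preferences change.
            self.current_winner = max(
                self.variants, key=lambda v: self.conversions[v] / self.sends[v]
            )
            if random.random() < 0.9:
                variant = self.current_winner
            else:
                others = [v for v in self.variants if v != self.current_winner]
                variant = random.choice(others)
        self.sends[variant] += 1
        return variant

    def record_conversion(self, variant):
        self.conversions[variant] += 1

# Example usage: two variants, with a (hypothetical) minimum of 100 sends each.
exp = TriggeredExperiment(["Control", "Variant A"], min_sends_per_variant=100)
next_variant = exp.assign()  # round-robin while minimums are unmet
```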
Manually-selected experiment winners
You can manually select a winning variant only for experiments that use triggered campaigns, and in most cases it's better to let Iterable select the winner automatically.
Reasons that you might manually select a winner and end an experiment include:
If you set up an experiment to randomly send variations to an entire user list, you must manually select a winner.
A variant is significantly outperforming the others, and you want to start using it right away.
If you decide to manually select a winning variant, click Declare winner next to the variant on the Overview tab in Experiment Analytics. This action ends the experiment before the criteria defined during setup are satisfied.
If you don't declare a winner, but end an experiment early, the control is sent to all remaining users.
Now that you understand how a winning campaign is selected, learn about the metrics that contributed to the selection process and how to interpret an experiment's performance metrics in Experiment Analytics.