In some cases, Iterable chooses a winner for an experiment based on criteria you've defined. Other times, you must manually choose a winner. Once a winner is declared, it can't be changed.
This article explains how a winning variant is selected for an experiment.
Metrics used to select a winning variant
When you're setting up an experiment, the value you choose from Select winner based on is used to determine the lift for a test variant relative to the control. The resulting data points are calculated with the following metrics:
- Opens - Unique Opens or Clicks / Total Sends
- Clicks - Unique Clicks / Total Sends
- Custom conversions - Unique Conversions / Total Sends, where Unique Conversions represents a selected custom event (such as the iterable_purchase event to optimize on purchase).
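To make the arithmetic concrete, here's a minimal Python sketch. It's purely illustrative (the counts and the rate function are hypothetical, not Iterable APIs); it shows how a metric rate is derived from unique events and total sends, and how lift can be expressed in percentage points:

```python
# Hypothetical counts for a control and one test variant (not real Iterable data).
control = {"unique_events": 1_800, "total_sends": 10_000}   # e.g. unique opens
variant = {"unique_events": 2_100, "total_sends": 10_000}

def rate(stats: dict) -> float:
    """Metric rate: unique opens, clicks, or conversions divided by total sends."""
    return stats["unique_events"] / stats["total_sends"]

control_rate = rate(control)                   # 0.18 -> 18%
variant_rate = rate(variant)                   # 0.21 -> 21%
lift_pp = (variant_rate - control_rate) * 100  # lift in percentage points: 3.0

print(f"Control: {control_rate:.1%}  Variant: {variant_rate:.1%}  Lift: {lift_pp:.1f} pp")
```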
Iterable-selected experiment winners
When you set up an experiment to send variations to a subset of users rather than to a whole user list, the way that Iterable selects a winning variant and sends the campaign to remaining users varies depending on whether the campaign is a blast campaign or a triggered campaign.
Regardless of whether an experiment includes blast or triggered campaigns:

- The lift value in Experiment Analytics shows by how many percentage points the winning variant outperformed the control campaign.
- Iterable doesn't consider the confidence value when selecting a winning variant for an experiment.
Blast campaign experiments
Iterable sends the winning variant for all future sends of an active campaign after a user-specified amount of time has passed. This variant becomes the new template for the associated campaign.
Triggered campaign experiments
In the first phase of the experiment, Iterable randomly assigns users triggered into the campaign to either the control or a test variant. This assignment continues until the minimum number of sends per variant is met. Iterable then identifies the best performing variant and optimizes the campaign by sending the currently winning variant to 90% of your campaign’s future sends, with the other variants being sent to the remaining 10% for continued testing.
Iterable continues to monitor variants for the duration of the experiment, selecting a new winner if new user preferences surface. If a new winning variant is found, it becomes the current winner, and 90% of new users will start to receive that variant.
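The following Python sketch illustrates these two phases in simplified form. It is not Iterable's implementation (the variant names and the assign_variant function are hypothetical, and the actual multi-armed bandit logic is more sophisticated); it only shows the random-assignment phase followed by the 90/10 split described above:

```python
import random

# Hypothetical variant names; "B" stands in for whichever variant is currently winning.
VARIANTS = ["control", "A", "B"]
CURRENT_WINNER = "B"

def assign_variant(min_sends_met: bool) -> str:
    """Simplified illustration of the allocation described above.

    Before the minimum number of sends per variant is met, users are
    assigned to variants at random. After that, roughly 90% of sends go
    to the currently winning variant, and the remaining 10% is spread
    across the other variants for continued testing.
    """
    if not min_sends_met:
        return random.choice(VARIANTS)
    if random.random() < 0.9:
        return CURRENT_WINNER
    return random.choice([v for v in VARIANTS if v != CURRENT_WINNER])

# Example: assignments for 10 hypothetical users after the minimum sends are met.
print([assign_variant(min_sends_met=True) for _ in range(10)])
```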
During this phase of the experiment, a variant shows a "Winner found" status on the Experiment Index page and a "Currently winning" status on the Experiment Details page. This status doesn't necessarily mean the variant will be the overall winner, only that it is the most recently selected winner in the multi-armed bandit flow. Once the experiment finishes and a winner is selected, the statuses are updated to "Finished" and "Winner," respectively.
Manually-selected experiment winners
If you set up an experiment to randomly send variations to an entire user list, you must manually select a winner. In cases where Iterable would otherwise select a winner for you, you can still manually select a winner instead. For example, you might end an experiment early because a variant is significantly outperforming the others and you want to start using it right away.
If you decide to manually select a winning variant, click Declare winner next to the variant on the Overview tab in Experiment Analytics. This ends the experiment before the criteria defined during setup are satisfied.
If you end an experiment early without declaring a winner, the control is sent to all remaining users.
Next steps
Now that you understand how a winning variant is selected, learn more about the metrics that contribute to the selection process and how to interpret an experiment's performance metrics in Experiment Analytics.