A campaign that Iterable selects as the winner of an experiment is one that performed better than the control campaign, as measured by opens, clicks, or custom conversions (chosen when the experiment was set up). This improvement is reflected in the Improvement column in Experiment Analytics.
Here, you can also view a Confidence score, which indicates the likelihood that the improvement is not due to chance. This score tends to rise with larger send volumes (a bigger audience) and with higher rates of improvement.
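Iterable doesn't publish the exact statistics behind the Confidence score, but a two-proportion z-test is one common way to estimate how likely an observed lift is to be real. The sketch below (all names are illustrative, not part of Iterable's product) shows why confidence grows with both audience size and lift:

```python
from math import erf, sqrt

def confidence_of_improvement(control_conv, control_sends,
                              variant_conv, variant_sends):
    """Approximate confidence that a variant's lift is real, using a
    two-proportion z-test. Illustrative only: Iterable doesn't publish
    the exact formula behind its Confidence score."""
    p1 = control_conv / control_sends
    p2 = variant_conv / variant_sends
    # Pooled conversion rate under the null hypothesis (no difference).
    pooled = (control_conv + variant_conv) / (control_sends + variant_sends)
    se = sqrt(pooled * (1 - pooled) * (1 / control_sends + 1 / variant_sends))
    z = (p2 - p1) / se
    # One-sided probability that the variant truly outperforms the control.
    return 0.5 * (1 + erf(z / sqrt(2)))

# The same 20% lift yields higher confidence with a larger audience.
print(confidence_of_improvement(50, 1_000, 60, 1_000))      # ~0.84
print(confidence_of_improvement(500, 10_000, 600, 10_000))  # ~0.999
```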
Metrics used to select a winning variation
When you're setting up an experiment, the value you choose from Select winner based on is used to determine the Improvement score for a campaign variant. The resulting data points are calculated with the following metrics:
- Opens - Unique Opens or Clicks / Total Sends
- Clicks - Unique Clicks / Total Sends
- Custom conversions - Unique Conversions / Total Sends, where Unique Conversions represents a selected custom event (such as the iterable_purchase event, to optimize on purchases)
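As a quick illustration, here's how those data points could be computed from raw counts, with Improvement read as the variant's relative lift over the control. The helper below is hypothetical and not part of Iterable's API:

```python
def improvement(variant_unique_events, variant_sends,
                control_unique_events, control_sends):
    """Percentage improvement of a variant over the control, where each
    rate is unique events (opens, clicks, or conversions) / total sends.
    Hypothetical helper for illustration; not an Iterable API."""
    variant_rate = variant_unique_events / variant_sends
    control_rate = control_unique_events / control_sends
    return (variant_rate - control_rate) / control_rate * 100

# A variant with 180 unique clicks on 2,000 sends, vs. a control with
# 150 unique clicks on 2,000 sends, shows a 20% improvement.
print(improvement(180, 2_000, 150, 2_000))  # 20.0
```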
Choosing an experiment winner
In some cases, Iterable chooses a winner for an experiment based on criteria you've defined. In other cases, you must choose a winner manually.
Iterable-selected experiment winners
When you set up an experiment to send variations to a subset of users rather than to a whole user list, Iterable selects the winner after enough messages are sent to satisfy the criteria you defined. The winner will have an Improvement score in Experiment Analytics that shows by how much it outperformed the control campaign.
Once Iterable selects a winning variant, it uses that variant for all future sends of an active campaign. The winning variant also becomes the new template for the associated campaign, so it will be used if a completed campaign is relaunched in the future.
NOTE
When selecting a winning variant for an experiment, Iterable doesn't consider the Confidence score.
Manually-selected experiment winners
There are a couple of cases when you might manually select a winning variation for an experiment.
- When you set up an experiment to randomly send variations to an entire user list, you must select a winner manually.
- It's generally not a good idea to stop experiments early, but if you must do so, first consider whether you should select a winning variation.
In either case, you can manually select a winning variation on the Experiment Analytics page, based on the scores you see in the Improvement column.
Currently winning vs Winner status
When an experiment for a trigger campaign is configured to have Iterable select a winner, Iterable uses a multi-armed bandit approach that assigns users to the control campaign or to a test variant campaign, round-robin style. This assignment continues until the minimum number of sends per variant is met. Iterable then identifies the best-performing variant and optimizes the experiment by sending that currently winning variant to 90% of your campaign's future recipients, with the other variants splitting the remaining 10% for continued testing.
Iterable continues to monitor variants for the duration of the experiment, selecting a new winner if new user preferences surface. If a new winning variant is found, it becomes the current winner, and 90% of new users will start to receive that variant.
During this phase of the experiment, a variation shows a "Winner found" status on the Experiment Index page and a "Currently winning" status on the Experiment Details page. This status doesn't necessarily mean that the variant will be the overall winner, only that it's the most recently selected winner in the multi-armed bandit flow. Once the experiment finishes and a winner is selected, the statuses are updated to "Finished" and "Winner," respectively.
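As a rough sketch of the flow described above, the following illustrates round-robin assignment followed by a 90/10 split. The class, thresholds, and method names are all illustrative; Iterable's actual implementation isn't public:

```python
import random

class BanditAssigner:
    """Rough sketch of the send-assignment flow described above:
    round-robin until each variant reaches its minimum sends, then 90%
    of traffic to the currently winning variant and 10% spread across
    the rest. Illustrative only; not Iterable's implementation."""

    def __init__(self, variants, min_sends_per_variant):
        self.variants = variants              # e.g. ["control", "A", "B"]
        self.min_sends = min_sends_per_variant
        self.sends = {v: 0 for v in variants}
        self.current_winner = None            # set once metrics are compared
        self._rr_index = 0

    def assign(self):
        # Phase 1: round-robin until every variant hits the minimum.
        if any(n < self.min_sends for n in self.sends.values()):
            variant = self.variants[self._rr_index % len(self.variants)]
            self._rr_index += 1
        # Phase 2: 90% to the current winner, 10% to the others for
        # continued testing.
        elif self.current_winner and random.random() < 0.9:
            variant = self.current_winner
        else:
            others = [v for v in self.variants if v != self.current_winner]
            variant = random.choice(others or self.variants)
        self.sends[variant] += 1
        return variant

    def update_winner(self, rates):
        # Re-evaluate as metrics come in; a new best performer becomes
        # the "currently winning" variant for future sends.
        self.current_winner = max(rates, key=rates.get)
```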
Exporting experiment metrics
You can export metrics for a single experiment, for multiple experiments, or for all of your experiments.
- To export metrics for a single experiment to a CSV file, go to the Experiment Analytics page and click the Export Table to CSV button at the bottom right of the table.
- To pull historic experiment data across all experiments, or to export multiple experiments at once, use the GET /api/experiments/metrics API endpoint.
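For example, a minimal request might look like the sketch below. The API key, experiment ID, and date range are placeholders; confirm the parameter names against Iterable's API reference:

```python
import requests

# Placeholder values; substitute your own API key and experiment IDs.
API_KEY = "YOUR_API_KEY"

response = requests.get(
    "https://api.iterable.com/api/experiments/metrics",
    headers={"Api-Key": API_KEY},
    params={
        "experimentId": [12345],        # one or more experiment IDs
        "startDateTime": "2024-01-01",  # optional date-range filters
        "endDateTime": "2024-06-30",
    },
)
response.raise_for_status()
print(response.text)  # metrics are returned as CSV-formatted text
```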