When Iterable selects a variant as the winner of an experiment, it's because that variant performed better than the other variants in the experiment, as measured by opens, clicks, or custom conversions (defined during campaign setup).
This article explains how to evaluate and export the data that reflect the performance of variants that are part of an experiment. For information about the metrics that Iterable uses to select a winning variant, see Experiment Winner Selection.
To access experiment analytics, navigate to Messaging > Experiments and click the experiment you're interested in. From there, view high-level details about an experiment on the Overview tab, and more detailed performance-related information on the Performance tab.
Experiments Overview tab
The Overview tab provides a link to the campaign on which an experiment was based and summarizes certain details that might be helpful to share with senior stakeholders who want to understand how experiments are being used to improve messaging.
View details about the experiment and the associated campaign, including a contextualized banner that summarizes the state or outcome of the experiment.
View pre-selected analytics and details about each variant.
View details about the experiment's configuration.
End an experiment early without declaring a winner, perhaps because the experiment results were inconclusive, by clicking End Experiment. The control will be used for all future sends.
End an experiment early, perhaps because a variant was clearly performing better than others, by clicking Declare winner. The selected variant will be used for all future sends.
Evaluating experiment analytics (Overview tab)
In addition to the summary at the top of the page, this view shows the following metrics for each variant in the experiment:
Sends represents the total number of messages sent for each variant. If all sends for a campaign total an amount that's less than the list size, Iterable may have skipped sending some messages for one of the reasons described in Reasons for Send Skip Events.
The Unique conversion rate column reflects the success metric selected for this experiment. See Metrics Definitions for information about the metric you see.
Lift represents the difference in performance between a test variant and the control with respect to the success metric that was selected during campaign setup (see Experiment Winner Selection).
For example, if an experiment was set up to use Clicks as its success metric (measured as a unique conversion rate), a Lift value of 5.0pp indicates that the test variant's click rate (Unique Clicks / Total Sends) was 5 percentage points higher than the control variant's. A worked example follows this list.
NOTE
If you select Show confidence interval, the values under Lift report a range of the values you might expect if a variant is sent to the entire user group. For example, if an experiment reports a lift of 5 percentage points (pp) ± 2 for a variant, you can expect to see a lift of 3pp to 7pp if that variant is sent to all users. These values can help you account for the natural variability in response rates you can expect to see across different user groups.
Confidence represents the likelihood that a variant's message will increase (if lift is positive) or decrease (if lift is negative) the conversion rate compared to the control, based on the selected success metric. 95% is a typical industry standard for confidence, meaning there's only a 5% chance that the observed lift is due to random chance. If your organization is less risk-averse, a 90% threshold may be acceptable.
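To make the Lift and confidence interval figures above concrete, here's a minimal sketch in Python showing how a lift in percentage points and an approximate 95% confidence interval could be computed from raw sends and unique conversions. It uses a standard two-proportion normal approximation; the function name and the statistical method are illustrative assumptions, not Iterable's documented calculation.

# Illustrative sketch only: computes lift in percentage points (pp) and an
# approximate 95% confidence interval for the difference between two
# conversion rates, using a two-proportion normal approximation. This is
# not necessarily the exact methodology Iterable uses internally.
import math

def lift_with_confidence_interval(control_sends, control_conversions,
                                  variant_sends, variant_conversions,
                                  z=1.96):  # z = 1.96 corresponds to ~95% confidence
    p_control = control_conversions / control_sends
    p_variant = variant_conversions / variant_sends

    # Lift in percentage points: difference of the two rates, times 100.
    lift_pp = (p_variant - p_control) * 100

    # Standard error of the difference between two independent proportions.
    se = math.sqrt(p_control * (1 - p_control) / control_sends +
                   p_variant * (1 - p_variant) / variant_sends)
    margin_pp = z * se * 100
    return lift_pp, (lift_pp - margin_pp, lift_pp + margin_pp)

# Example: control converts 200/10,000 (2.0%), variant converts 700/10,000 (7.0%).
lift, (low, high) = lift_with_confidence_interval(10_000, 200, 10_000, 700)
print(f"Lift: {lift:.1f}pp (95% CI: {low:.1f}pp to {high:.1f}pp)")

With a 2.0% control rate and a 7.0% variant rate, this prints a lift of about 5.0pp with a margin of roughly ±1.8pp, in line with the "5pp ± 2" example in the note above.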
Experiments Performance tab
The Performance tab allows you to dig deeper into your experiment results, providing details that may be helpful to share with a BI team or a more experienced data analytics team.
Filter variant performance by time (Experiment Period, Experiment Start to Now, or Custom Timeframe). The experiment period is selected by default, but you can expand the view to a period of your choosing. For example, if you want to see how a campaign performed after the winning variant was selected (when it was sent to the users who didn't receive it during the experiment), extend the Viewing metrics recorded value past the end of the experiment.
Review each variant's performance using a default set of funnel metrics or metrics you select.
Export experiment metrics to share with others or to analyze further. You can export all available metrics or just those you select.
Review variant performance over time for any available metric.
Customizing experiment analytics (Performance tab)
In addition to the default metrics that are displayed on the Performance tab, other metrics may provide additional helpful insights. For example, if a variant that tested a new subject line was selected as the winner of an experiment, the metrics displayed by default tell you by how much it outperformed the control in terms of conversions. But what if you also want to verify that the variant didn't lead users to unsubscribe? In that case, adding Total Unsubscribes to the metrics you review on the Performance tab would be helpful.
Exporting experiment metrics
The Performance tab also provides ways to export data so you can share it with other stakeholders or tools. You can export all metrics, or just the metrics that are selected in your current view.
To export the metrics for the selected experiment to a CSV file, go to the Performance tab in Experiment Analytics and click Export current view or Export all metrics.
To pull historical experiment data across all experiments, or to export multiple experiments at once, use the GET /api/experiments/metrics API endpoint.
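As a reference point, here's a minimal sketch in Python (using the requests library) of a call to this endpoint. The Api-Key header and base URL follow Iterable's usual API conventions, but the query parameter names, date format, and response handling shown here are assumptions; consult Iterable's API documentation for the authoritative request format.

# Minimal sketch of pulling experiment metrics via Iterable's API with the
# Python requests library. The query parameter names (experimentId,
# startDateTime, endDateTime) and the date format are assumptions here --
# confirm them against Iterable's API reference before relying on this.
import requests

API_KEY = "YOUR_ITERABLE_API_KEY"  # placeholder; use a server-side key

response = requests.get(
    "https://api.iterable.com/api/experiments/metrics",
    headers={"Api-Key": API_KEY},
    params={
        "experimentId": [12345, 67890],          # hypothetical experiment IDs
        "startDateTime": "2024-01-01 00:00:00",  # optional date range (assumed format)
        "endDateTime": "2024-01-31 23:59:59",
    },
    timeout=30,
)
response.raise_for_status()

# The endpoint returns experiment metrics as text (typically CSV-formatted,
# one row per variant); adjust parsing to suit your tooling.
print(response.text)

Passing a list for experimentId makes requests repeat the parameter once per ID, which is how you can request metrics for multiple experiments in a single call.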