Sequence A/B Tests
Use an A/B test to determine which version of a message has the best impact on your sequence.
You can create a variant for any message in a sequence. The variant is a duplicate of the original message that you can then edit, changing a single element of the message: content, delivery settings, or Channel Coordination settings. After starting the test, wait until the Confidence level meets or exceeds 95%, then select the winning message. The sequence is then republished with the winning message.
Audience members who receive the variant message are randomly selected on entry to the sequence. Related events and conversions are recorded for both audiences, providing data you can use to evaluate sequence performance based on your selected metric for the test, either conversions or engagement.
You can run A/B tests and control groups concurrently.
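The platform handles variant assignment and the test statistics for you; nothing below is required to run a test. Purely as an illustration of how a stable random split can work, here is a minimal sketch, assuming a hypothetical test ID and Named User ID, of one common approach: hash both IDs so that each user is consistently assigned to either the original or the variant.

```python
# Illustrative sketch only -- not the platform's actual implementation.
# Shows one common way to randomly but deterministically assign a user
# to the original message or the variant on entry to a sequence.
import hashlib

def assign_bucket(named_user_id: str, test_id: str, variant_share: float = 0.5) -> str:
    """Hash the test and user IDs to a stable value in [0, 1) and bucket on it."""
    digest = hashlib.sha256(f"{test_id}:{named_user_id}".encode()).hexdigest()
    fraction = int(digest[:8], 16) / 0x100000000  # first 32 bits, normalized to [0, 1)
    return "variant" if fraction < variant_share else "original"

# Hypothetical IDs for demonstration.
for user in ["user-001", "user-002", "user-003"]:
    print(user, "->", assign_bucket(user, "welcome-sequence-test"))
```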
Create an A/B test for a sequence
- You must start the sequence before you can create an A/B test.
- You cannot start an A/B test for a sequence that has unpublished changes.
From the sequence Manage or Performance screen:
Click Experiments in the left-side drawer.
Click Create an A/B test.
Enter a name and description, then select which message to make a variant for, and choose a primary metric for reporting.
- The Engagement metric is available for all sequences.
- You can select the Sequence conversion metric for any sequence, but the test will not be effective unless you also set up a conversion event as the sequence’s Outcome.
Click Save and continue.
Click Create variant on the A/B test summary screen.
Make a single change on either the Content or Delivery step, or edit the Channel Coordination setting. For channel coordination, click the edit icon, make a new selection, then click Save & continue.
Tip: A test with a single variable is measurable. If you make multiple changes in the variant, you will not know which change had an effect.
If your test’s primary metric is Conversions, consider editing any part of the message. For example, for a push notification, you could edit the title, the timing, or your Channel Coordination selection.
If your test’s primary metric is Engagement, focus on what users can experience before they interact with the message. For example, for an email, you would change the subject line only.
Click the Review step and review the device preview and message summary.
Click the arrows to page through the various previews. The channel and display type dynamically update in the dropdown menu above. You can also select a preview directly from the dropdown menu.
If you would like to make further changes, return to Review when you finish editing.
You can send a test message and verify its appearance and behavior on each channel the message is configured for. The message is sent to your selected recipients immediately, and it appears as a test in Messages Overview.
- Select Send Test.
- Enter at least one Named User or Test Group and select from the results.
- Select Send.
Click Save & continue. You will then see the original and variant on the A/B test summary screen.
Click Start A/B test to make the variant available to your audience. You can also click Exit to save the test without starting it, or click Cancel to delete the test.
After starting an A/B test:
- The Experiments drawer states that an A/B test is in progress. Click View results to go to the A/B test summary screen.
- On the Manage and Performance screens:
  - Editing options are no longer available for the sequence or its messages.
  - The message with the variant is labeled with a green icon. Click the icon to go to the A/B test summary screen.
- On the Performance screen:
  - Statistics are the aggregate of the variant and the original message.
Start a saved A/B test
You can start a saved A/B test from the A/B test summary screen. From the sequence Manage or Performance screen:
- Click the icon in the message preview, or click Experiments in the left-side drawer and click View detail.
- Click Start A/B test.
Select the winning message
After you start the A/B test, review the performance of the original message and the variant to determine which message, if either, is having the expected impact on engagement or conversion, depending on the test’s primary metric.
The A/B test summary screen displays the following information:
- Primary metric: Sequence conversion or Engagement.
- Sample size: The number of users selected to receive the variant. The threshold is 10,000 users.
- Lift: The percent increase or decrease of your primary metric for users who have received the variant. Presented after 7 days or when the sample size of 10,000 users is reached.
- Confidence: The probability that the same results would be obtained if the test were repeated. Presented after 7 days or when the sample size of 10,000 users is reached. See the example after this list for how lift and confidence can be derived.
- Additional statistics are displayed for the original message and variant, based on the test’s primary metric. If your primary metric is Engagement, you can see metrics per message type by selecting from the dropdown menu.
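The dashboard calculates Lift and Confidence for you, and its exact statistical method is not documented here. As an illustration only, the following sketch derives a percent lift and a confidence value from hypothetical conversion counts using a generic two-proportion z-test, one common way numbers like these are computed.

```python
# Illustrative sketch only: a generic two-proportion z-test for comparing
# conversion rates. The dashboard computes Lift and Confidence for you;
# its actual method may differ from this example.
from math import erf, sqrt

def lift_and_confidence(conv_orig: int, n_orig: int, conv_var: int, n_var: int):
    p_orig = conv_orig / n_orig
    p_var = conv_var / n_var
    lift = (p_var - p_orig) / p_orig * 100  # percent change for the variant group

    # Two-proportion z-test using a pooled conversion rate.
    pooled = (conv_orig + conv_var) / (n_orig + n_var)
    se = sqrt(pooled * (1 - pooled) * (1 / n_orig + 1 / n_var))
    z = (p_var - p_orig) / se
    confidence = erf(abs(z) / sqrt(2)) * 100  # two-sided confidence, in percent
    return lift, confidence

# Hypothetical counts: 10,000 users per group.
lift, confidence = lift_and_confidence(conv_orig=450, n_orig=10_000,
                                       conv_var=522, n_var=10_000)
print(f"Lift: {lift:.1f}%  Confidence: {confidence:.1f}%")  # e.g. Lift: 16.0%  Confidence: ~98.2%
```

In this hypothetical example the confidence exceeds 95%, so you could select the variant as the winner.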
When you select a winning message, you select the entire message as configured. You cannot select content or settings per channel or message type.
Select a winning message after Confidence is at least 95%. The sequence will be republished with the winning message, and the A/B test will end. From the A/B test summary screen:
- Click Select a winner.
- Click Select original or Select variant, and confirm your choice.
You may want to end an A/B test early if you see a significant drop in conversions or engagement. If the drop is not significant or is observed early in the test period, you may want to let the test continue, as the rate may correct itself. Another reason to end a test early is if you notice an error in your content.
To end a test early, select the original message as the winner. This effectively cancels the test.
View A/B test history
After selecting a winning message, the A/B test is added to the list of past experiments.
From the sequence Manage or Performance screen:
- Click Experiments in the left-side drawer.
- Click Past Experiments. Each A/B test is listed by name along with its end date. Click a test to go to its summary screen.