Scene A/B Tests

Use an A/B test to determine which version of a scene has the best impact based on your selected metric. Requires iOS SDK 16.6+ and Android SDK 16.4+.

You can make a single change, such as a button label on a screen, or provide entirely different content for the scene. You can create the variant from scratch or by editing a duplicate of the original scene.

Members of your targeted audience are randomly split into two equal groups: one receives your control scene (Variant A) and the other receives your variant scene (Variant B). (A sketch of how such an even split can work follows the list below.) Related events and conversions are recorded for both groups, providing data you can use to evaluate scene performance based on your selected metric for the test:

  • Scene Completion — The user viewed all screens in the scene.
  • Push Opt-in — The user tapped a button configured with the Push Opt-in action.
  • Adaptive Link — The user followed an adaptive link in the scene.
  • App Rating — The user tapped a button configured with the App Rating action.
  • Deep Link — The user followed a deep link in the scene.
  • Preference Center — The user opened the preference center in your app.
  • App Settings — The user opened their device’s settings page for your app.
  • Share — The user tapped a button configured with the Share action.
  • Web Page — The user tapped a button configured with the Web Page action.
  • Submit Responses — The user tapped a button configured with the Submit Responses action.
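
To make the split concrete, here is a minimal sketch of one common way a deterministic 50/50 split can be implemented: hash a stable user identifier together with an experiment identifier and bucket on the result. This is illustrative only, with hypothetical names; it is not Airship's actual assignment logic, which the platform handles for you.

```swift
import Foundation

enum Variant: String {
    case a = "Variant A (control)"
    case b = "Variant B"
}

/// Stable FNV-1a hash, so the same input always lands in the same
/// bucket across app launches. (Swift's built-in Hasher is randomly
/// seeded per process and would not be stable.)
func fnv1a(_ string: String) -> UInt64 {
    var hash: UInt64 = 0xcbf2_9ce4_8422_2325
    for byte in string.utf8 {
        hash ^= UInt64(byte)
        hash = hash &* 0x100000001b3
    }
    return hash
}

/// Hypothetical assignment: combining the experiment ID with the user
/// ID lets each experiment split the same audience independently.
func assignVariant(userID: String, experimentID: String) -> Variant {
    fnv1a("\(experimentID):\(userID)") % 2 == 0 ? .a : .b
}

print(assignVariant(userID: "user-123", experimentID: "scene-test-1").rawValue)
```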

Create an A/B test for a scene

 Note

  • You must create the scene before you can create the A/B test.
  • You cannot start an A/B test for a scene that has unpublished changes.

  1. Go to Messages » Messages Overview and open a scene for editing.

  2. Go to the Content step and click Experiments in the left-side drawer.

  3. Click Create variant.

  4. Enter a name and description, and choose the metric used to evaluate the experiment.

  5. Check Copy content from existing scene if you want to duplicate the scene and edit the copy. Uncheck it if you want to create the variant scene from scratch.

  6. Click Save. On the next screen you’ll configure Variant B.

  7. Make your changes to Variant B screens. Configure screens as you would for a new scene.

     Important

    Both variants must include the same action/event associated with the experiment’s primary metric. For example, if you want to use Submit Responses as your primary metric, you must configure that action for a button in both variants.

     Tip

    • A test with a single variable is easiest to measure. When you make multiple changes in a variant, you cannot tell which change had an effect.

    • If your primary metric is Push Opt-in, consider testing the order of your screens so that users don’t dismiss the scene before they reach the opt-in request.

    • If your primary metric is Scene Completion, focus on the number of screens and their content value. For example, a long scene (more than 5 screens) will often get a lower completion rate than a shorter one.

  8. Click Done.

  9. Go to the Review step to check the device preview and scene summary.

  10. Click Finish to start the test.

After starting an A/B test:

  • The Experiments drawer in the Content step states that an A/B test is ongoing.
  • Editing options are no longer available for the scene’s content.

Select the winning message

After you start the A/B test, you can compare the performance of the variants in the scene’s Content step or in its report to determine which message, if either, is having the expected impact.

When you select a winning scene, the scene is republished with the winner and the A/B test ends.
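
The dashboard shows each variant’s numbers, but whether a difference is large enough to act on is a judgment call. As one informal approach (not a product feature), you could apply a standard two-proportion z-test to counts from the scene’s report. The function name and figures below are hypothetical; a minimal sketch:

```swift
import Foundation

/// Two-proportion z-test: the z-score for the difference between two
/// conversion rates. Roughly, |z| > 1.96 suggests the difference is
/// unlikely to be random noise (95% confidence).
func zScore(conversionsA: Double, entrantsA: Double,
            conversionsB: Double, entrantsB: Double) -> Double {
    let rateA = conversionsA / entrantsA
    let rateB = conversionsB / entrantsB
    let pooled = (conversionsA + conversionsB) / (entrantsA + entrantsB)
    let standardError = sqrt(pooled * (1 - pooled) * (1 / entrantsA + 1 / entrantsB))
    return (rateB - rateA) / standardError
}

// Hypothetical counts pulled from a scene report.
let z = zScore(conversionsA: 420, entrantsA: 2_000,
               conversionsB: 540, entrantsB: 2_000)
print(z > 1.96 ? "Variant B's lift looks significant" : "Not conclusive yet")
```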

  1. Go to Messages » Messages Overview and open your scene.
  2. Compare the metrics of variants A and B.
    • The default view is based on the metric selected when creating the experiment. If other applicable metrics are available, you can select one from the dropdown menu, and the displayed data will update. If a metric is not applicable to both variants, you will see N/A instead of a value.
    • Conversions is the number of users who performed the action defined in the primary metric, divided by the number of users who entered the scene (see the worked example after these steps). See Reporting for more information about individual statistics.
  3. Click Select as winner and confirm your choice.
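
For illustration, the conversion figure above is a simple ratio. With hypothetical counts (the dashboard computes this for you):

```swift
// Hypothetical: 420 of Variant A's 2,000 entrants performed the
// primary-metric action, versus 540 of Variant B's 2,000.
let rateA = 420.0 / 2_000.0   // 0.21 → 21% conversion
let rateB = 540.0 / 2_000.0   // 0.27 → 27% conversion
print(rateA, rateB)           // Variant B converts better here
```
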
 Tip

You may want to end an A/B test early if you see a significant drop in conversions or engagement. If the drop is not significant, or if it occurs early in the test period, you may want to let the test continue, as the rate may correct itself. Another reason to end a test early is if you notice an error in your content.

To end a test early, select a winner. This effectively cancels the test.