Scene A/B tests
Use an A/B test to determine which version of a Scene performs best against your selected metric. Requires iOS SDK 16.6+ and Android SDK 16.4+.
About A/B tests for Scenes
Create variations of Scene content by duplicating an existing Scene or creating screens from scratch. You can make a single change, such as changing a button label on a screen, or provide entirely different content. A Scene is a single- or multi-screen in-app experience cached on users’ devices and displayed when users meet certain conditions in your app or website, such as viewing a particular screen or when a Custom Event occurs. Scenes can be presented in fullscreen, modal, or embedded format, using the default swipe/click mode or as a Story, and can also contain survey questions.
Audience members in the targeted audience are randomly selected and split equally between your control Scene (Variant A) and your variant Scene (Variant B).
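Airship handles variant assignment for you, but the even 50/50 split described above can be sketched as a deterministic, hash-based assignment. This is an illustrative sketch only, not Airship's actual implementation; the function and parameter names are hypothetical.

```python
import hashlib

def assign_variant(user_id: str, experiment_id: str) -> str:
    """Hypothetical sketch: deterministically assign a user to Variant A or B.

    Hashing the experiment and user IDs together gives a stable assignment
    (the same user always sees the same variant) with a roughly even split.
    """
    digest = hashlib.sha256(f"{experiment_id}:{user_id}".encode()).digest()
    return "A" if digest[0] % 2 == 0 else "B"
```

Because the assignment is derived from the IDs rather than stored state, a user who re-enters the Scene is always shown the same variant.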
Related events and conversions are recorded for both audiences, providing data you can use to evaluate Scene performance based on your selected metric.
To prepare for your tests, see About A/B testing.
Scene A/B test metrics:
| Metric | Description |
| --- | --- |
| Scene completion | The user viewed all screens in the Scene. |
| Push Opt-in | The user tapped a button, text, image, or screen configured with the Push Opt-in action. |
| Adaptive Link | The user followed an Adaptive Link (a vendor-agnostic, shortened mobile wallet pass link; when a user taps the link, Airship determines the user's platform and generates the right pass for Google Wallet or Apple Wallet) in the Scene. |
| App Rating | The user tapped a button, text, image, or screen configured with the App Rating action. |
| Deep Link | The user followed a deep link in the Scene. |
| Preference Center | The user opened the Preference Center (a page where users can manage their opt-in statuses for the Subscription Lists in your project) in your app. |
| App Settings | The user opened their device's settings page for your app. |
| Share | The user tapped a button, text, image, or screen configured with the Share action. |
| Web Page | The user tapped a button, text, image, or screen configured with the Web Page action. |
| Submit Responses | The user tapped a button, text, image, or screen configured with the Submit Responses action. |
Creating a Scene A/B test
- Go to Messages, then Messages Overview, and select the pencil icon for a Scene.
- Go to the Content step, select Experiments in the left sidebar, and then select Create experiment. A Scene must have at least one screen configured before the Experiments option is available.
- Enter a name and description, and then choose the metric to use for reporting experiment performance.
- Check the box for Copy content from existing Scene if you want to duplicate the current Scene’s content and edit it. Keep the box unchecked if you want to create variant content from scratch.
- Select Save.
- Configure screens for Variant B as you would for a new Scene. See Configure Scene content.
Important Both variants must include the same action/event associated with the experiment’s primary metric. For example, if you want to use Submit Responses as your primary metric, you must configure that action for a button in both variants.
Tip - Test a single variable at a time. When you make multiple changes in the variant, you cannot tell which change had an effect.
- If your primary metric is Push Opt-in, consider testing the order of your screens so that users don’t dismiss the Scene before seeing the opt-in request.
- If your primary metric is Scene completion, focus on the number of screens and the value of their content. For example, a long Scene (more than five screens) often gets a lower completion rate than a shorter one.
- Select Done.
- Go to the Review step to review the device preview and Scene summary.
- Select Finish or Update to start the test. You cannot start an A/B test for a Scene that has unpublished changes.
You cannot edit a Scene’s content while an A/B test is active.
Selecting the winning variant
After starting an A/B test, compare the performance of the variants in the Scene’s Content step or in its message report to determine which variant (if either) is having the expected impact.
You may want to end an A/B test early if you see a significant drop in conversions or engagement. If the drop is not significant or if it is observed early on in the test period, you may want to let the test continue, as the rate may correct itself. Another reason to end a test early is if you notice an error in your content. To end a test early, select a winner. This effectively cancels the test.
See also Implementing A/B tests, outcomes, and compliance in About A/B testing.
When you select a winning variant, the Scene is republished with the winner’s content and the A/B test ends.
- Go to Messages, then Messages Overview, and select the report icon for your Scene.
- Select Scene Detail and compare the metrics of variants A and B.
- The default view is based on the metric selected when creating the experiment. If other applicable metrics are available, you can choose one from the dropdown menu, and the displayed data will update. If a metric is not relevant to both variants, N/A appears instead of a value.
- Conversions are calculated as the number of users who performed the action defined in the primary metric divided by the number of users who entered the Scene. See Reporting for more information about individual statistics.
- Select Select as winner and confirm your choice.
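The conversion calculation described in the steps above can be expressed as a short sketch. The function name and the figures in the usage example are illustrative, not from Airship:

```python
def conversion_rate(users_converted: int, users_entered: int) -> float:
    """Conversions = users who performed the primary-metric action
    divided by users who entered the Scene."""
    if users_entered == 0:
        return 0.0  # avoid division by zero when no one has entered yet
    return users_converted / users_entered

# Hypothetical example: 180 of 1,200 users who entered Variant A
# submitted responses (the primary metric).
rate_a = conversion_rate(180, 1200)  # 0.15, i.e. a 15% conversion rate
```

Comparing `conversion_rate` for Variant A against Variant B is what the Scene Detail view does for you when it displays each variant's metrics.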