Feature Flags

A Feature Flag is an experimentation tool for controlling the availability of content or functionality in your app or website. A flag’s Configurations determine the audience, schedule, and property values to apply when the flag is enabled. Flag properties let you make immediate updates to your app or website, bypassing the need for traditional code changes and release processes. This is an add-on feature and requires iOS SDK 17.1+ and Android SDK 17.1+.

About Feature Flags

The format of a Feature Flag is a conditional if statement you add to your app or website code. It contains your flag name and any properties and wraps around the code you want the flag to control. Airship provides the flag as a code snippet for your developer to add to your app or website.
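The wrapper pattern described above can be sketched roughly as follows. This is an illustrative sketch only; `FlagResult`, `renderPromoBanner`, and the `title` property are hypothetical stand-ins, not the actual Airship SDK API:

```typescript
// Hypothetical sketch of the conditional flag pattern: an eligibility check
// gates the flagged code path, and property reads fall back to safe defaults.
interface FlagResult {
  isEligible: boolean;                  // user is in the flag's audience
  properties?: Record<string, unknown>; // remotely editable values
}

function renderPromoBanner(flag: FlagResult): string {
  if (flag.isEligible) {
    // Code the flag controls: read a remotely editable property with a fallback.
    const title = (flag.properties?.title as string) ?? "Default title";
    return `Banner: ${title}`;
  }
  return "No banner"; // fallback experience when the flag is off for this user
}
```

The snippet Airship generates for your flag follows this general shape: the flagged code sits inside the conditional, and the existing experience remains the fallback.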

Set up Feature Flag experiments in two steps:

  1. Define the flag — Set the flag’s name, description, and properties that can be used by your app or website code within the flag.

  2. Create one or more Configurations for the flag — Determine the audience, schedule, and property values for each Configuration. Configuration types:

    • A/B tests — Compare audience behaviors when a feature is hidden or present, or experiment with distinct feature experiences, such as new home screen designs, by setting different property values for each variant. Reports provide detailed data for evaluating engagement and the overall success of a feature based on your Goals (selected events that generate a set of performance reports; Goals are also used for measurement in Holdout Experiments and Feature Flag A/B tests).

    • Rollouts — Release a feature to a targeted audience and/or a percentage of an audience, then monitor interaction event counts or other concerns, such as support capacity. In addition to experimentation, you can use rollouts to present different content versions to separate audiences. For example, for a loyalty program, individual rollouts can control which content your Gold and Silver users see.

    Configurations can be open-ended or time-bound: they can start immediately, end manually, or start and end at scheduled times and dates. Arrange Configurations in order of priority to determine which one should be available to a user who is included in multiple Configuration audiences. Each flag can have up to 10 active Configurations.

Manage a Configuration’s audience, schedule, and properties from the Airship dashboard. If something unexpected happens with the feature, or if you have reason to end access before its scheduled end time, you can easily disable it for all users. For apps, this means eliminating the need to release an app update and waiting for users to install the new version.

You can also use Feature Flags to determine a messaging audience or trigger automation.

 Tip

You can also create rollouts using Sequence Control Groups and Scenes.

Audience

When creating a flag Configuration, set your audience to members of a Test Group (an audience group whose members can receive test messages and whose attributes populate personalization previews in the dashboard). When you are ready to go live, select All Users for your entire audience, or select Target Specific Users and set conditions. Then set the percentage of your audience that will be able to view the feature controlled by the flag. For A/B tests, the percentage is divided evenly between variants by default, or you can set your own values. Set your audience according to the purpose of your A/B test or rollout.

Audience members are randomly selected. Any user included in the set percentage is considered eligible, meaning they have access to the feature. For A/B tests, you have the option to hide the feature from the control variant.

Setting a percentage helps you limit the audience so you can effectively manage feedback or limit exposure to potential bugs. For a rollout, gradually increase the percentage to expand your audience. For example, you could set a condition where only users who have freshly installed your app will be able to access the flagged feature. If you set a percentage of 10%, only 10% of users who meet the condition will be able to access the feature.

For flags with multiple Configurations, if a user falls into more than one Configuration’s audience, only the one with the highest priority will be active for that user. By default, each new Configuration is set to the lowest priority. See Set priority order in Manage Configurations below.
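The priority rule above can be sketched as a simple selection over the Configurations a user matches. The types and function are illustrative only, not the Airship implementation:

```typescript
// Hypothetical sketch of priority resolution: when a user matches several
// Configuration audiences, only the highest-priority Configuration applies.
// Here a lower priority number means higher priority, as in the dashboard list.
interface Configuration {
  name: string;
  priority: number; // 1 = top of the priority list
}

function activeConfiguration(matching: Configuration[]): Configuration | null {
  if (matching.length === 0) return null;
  return matching.reduce((best, c) => (c.priority < best.priority ? c : best));
}
```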

For more about audience and eligibility, see Example use case below.

Supported channels per condition:

  • App version — App
  • Device tags — App, Web
  • Locale — App, Web
  • Location opt-in status — App, Web
  • New users — App, Web
  • Platforms — App, Web
  • Push opt-in status — App, Web
  • Segments — App (iOS SDK 17.6+, Android SDK 17.5+)

Properties

You can add properties that can be used by your app’s or website’s code within a Feature Flag, bypassing the need for traditional code changes and release processes. The flag code you pass on to your development team includes references to the properties. Once implemented, edit the flag Configuration’s properties in the dashboard to make immediate changes to your app or website, like variables that can be updated remotely. As a general example, you could create properties for a promotion’s title, description, and button URL, then change their values when the promotion ends and a new one launches. You can override flag properties per Configuration. For A/B tests, you can set property overrides for each variant.

When creating or editing a flag, set a name, type, and default value for each property. Properties can be a string, number, boolean, or JSON. You can create up to 50 properties per flag.
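Conceptually, a Configuration's property overrides merge over the flag-level defaults. A minimal sketch, with hypothetical names and values:

```typescript
// Hypothetical sketch: flag-level default property values merged with a
// Configuration's per-property overrides. Names are illustrative only.
type Properties = Record<string, unknown>;

function resolveProperties(defaults: Properties, overrides: Properties): Properties {
  // An override replaces the default for that key; other defaults pass through.
  return { ...defaults, ...overrides };
}
```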

Properties use cases:

  • Coffee mobile ordering app — Create a flag with properties for controlling the promotions and rewards for loyalty membership. Using just the Airship dashboard, you can transition from pumpkin spice promotions to holiday themes in sync with seasonal campaigns. Celebrate limited-time milestones, such as the app’s 10th anniversary, by offering “10x rewards” points.

  • Music streaming app — Create a flag with properties to introduce a new premium subscription tier. Launch the feature to 25% of the audience with flag properties “Price Point” and “Trial Period Duration,” then quickly gauge engagement data and user feedback as users respond to the new tier. Update the properties to fine-tune the subscription offer, and roll out the feature to 100% of users once you land on the right details. You can also use a “Promotional Messaging” property to periodically update the copy promoting the new subscription.

Interaction events

Track interaction with the flagged feature by generating an event from the mobile SDK. It must be explicitly called by the developer. See Tracking Interaction (Mobile) and Tracking Interaction (Web) in our platform documentation.

While it is called an “interaction” event, what you track is up to you and depends on the feature. Some examples of how to implement different use cases:

  • Tracking when a user encounters a change — For a flag that changes a button’s color from blue to green or adds a new button to a screen, track when a user visits the screen containing the button, since it is a visible change.

  • Tracking when a user interacts with a change — For a flag that changes a button’s destination, track when the user selects the button, since it is a non-visible change.

The events have a flag ID and flag name, which identify which flagged feature a user interacted with. They also have a boolean eligible field, which indicates whether or not the user was in the Feature Flag audience and had access to the feature. The variant_id is the UUID of the A/B test variant; this ID is listed for each variant in A/B test reports. See also Feature Flag Interaction Event in the Real-Time Data Streaming API reference. (Real-Time Data Streaming is a service that delivers engagement events in real time via the Data Streaming API or an Airship partner integration.)
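A sketch of the event payload implied by the description above; the field names mirror the prose (flag ID, flag name, eligible, variant_id), but this is an illustration, not a verified event schema:

```typescript
// Hypothetical sketch of the interaction event fields described in the text.
interface FlagInteractionEvent {
  flag_id: string;    // identifies which flag was interacted with
  flag_name: string;
  eligible: boolean;  // was the user in the flag's audience?
  variant_id?: string; // UUID of the A/B test variant, when applicable
}

function trackInteraction(event: FlagInteractionEvent): FlagInteractionEvent {
  // A real SDK call would enqueue the event; here we just return the payload.
  return { ...event };
}
```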

Deciding what you are tracking is especially important when using the flag to trigger a message, since you can trigger based on whether or not the user is part of the Feature Flag audience.

Draft Configurations

You can add flag code to your app or website even while a Configuration is in Draft state, and then make it active later. For apps, make it active after delivering your new code to devices in an app update.

Workflow

The following is the general workflow for using Feature Flags:

  1. Create a flag in the dashboard and copy the code snippets and Mobile docs link. Code is provided for Web, Android (Kotlin and Java), iOS, Cordova, Flutter, and React Native. You can also access the code after saving.

  2. Give the code snippets and docs links to your developer so they can add the flag to your app or website.

  3. Create at least one Configuration, setting the audience to members of a Test Group (an audience group whose members can receive test messages and whose attributes populate personalization previews in the dashboard). For A/B tests, all variants are distributed randomly to Test Group users by default, or you can specify which variant to make available to them.

    After you update your website with the feature and flag code, the feature or A/B test will be available to the configured audience the next time they visit the website, according to the Configuration’s schedule. For apps, the same is true after users install the version of your app that contains the updated code.

  4. After verifying the feature or A/B test works as intended with your Test Group, change the Configuration audience to All Users or Target Specific Users and set the percentage and conditions. Manage the Configuration from the Airship dashboard. Repeat this step for each Configuration.

  5. View reports and evaluate performance. For A/B tests, roll out the winning variant to all test audience members.

  6. After the flag has served its purpose, archive it and remove the flag code from your app or website.

Rollouts

Use rollouts for experimentation and for controlling content versions for different audiences. Common use cases:

  • Resource management — Release features to segments of your audience over time to prevent a strain on resources. Increase the audience according to database query volume, support ticket volume, or limited initial product supply.
  • Content testing — Test features with a small segment of your audience before releasing the feature to a broader audience.
  • Time-limited promotions — Turn on and off time-restricted features, either manually or according to an automated schedule, such as displaying a promotional banner only during a sale weekend.
  • Premium features — Provide premium feature access to paid users only, based on membership tiers.
  • Holiday promotions — Create a flag for promotional banners in your app. Launch the banners to 100% of your U.S. audience after Thanksgiving and to 100% of the E.U. audience in early November. This method ensures that each region receives the promotion at the optimal time, maximizing engagement and driving campaign success.
  • Retail app loyalty program — Create a flag to launch a new loyalty program in your retail app, releasing it to different audience segments at different rates based on observed differences in their behavior. Create an individual Configuration for each segment under the same flag, for example rolling out the experience to 50% of your most loyal users and 10% of your lowest-tier users. You can also use properties to customize the promotional text and display different content for each segment.

Rollout example implementation

The following example is for introducing a redesigned Settings screen in a mobile app. To let all new users experience the new Settings screen:

  1. Create a Feature Flag with any relevant properties and default values.
  2. Create a rollout Configuration with these Audience settings:
    1. Select Target Specific Users.
    2. Set the Configuration audience percentage to 100.
    3. Add the condition New users.
  3. In your app code, set the Feature Flag interaction event to occur when users view the Settings screen.

100% of users who have freshly installed your app will be able to see the redesigned Settings screen. They are eligible users. For each interaction event:

  • When eligible has a value of true, the screen was viewed by a user who is in one of the Configuration audiences for the Feature Flag. The user experienced the redesigned Settings screen.

  • When eligible has a value of false, the screen was viewed by a user who is not in any of the Configuration audiences for the Feature Flag. The user saw the old version of the Settings screen.

However, if you’re concerned about the potential for bugs in the redesigned screen, you would want to limit how many new users can see it. Keep all the settings the same except the percentage, which you would set to 10, so that only 10% of users who have freshly installed your app will be able to see the redesigned Settings screen.

Once you determine the feature is ready for a wider audience, increase the audience percentage. Keep adjusting until you reach 100% or the threshold determined by your planning.
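One way to picture why gradually increasing the percentage works is stable bucketing: if each user hashes to a fixed bucket from 0-99, raising the percentage only adds users, so existing audience members keep access. This sketch illustrates the behavior only; it is not Airship's actual selection algorithm:

```typescript
// Hypothetical sketch of percentage-based audience selection via a stable
// per-user bucket. Deterministic: the same user ID always gets the same bucket.
function bucketFor(userId: string): number {
  let hash = 0;
  for (const ch of userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple unsigned 32-bit hash
  }
  return hash % 100; // bucket in 0..99
}

function isInRollout(userId: string, percentage: number): boolean {
  return bucketFor(userId) < percentage;
}
```

Because a user's bucket never changes, anyone selected at 10% remains selected when you raise the rollout to 50% or 100%.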

A/B tests

iOS SDK 19+Android SDK 19+

Use A/B tests to compare audience behaviors when a feature is hidden or present. You can also experiment by presenting different experiences by setting specific property values for each variant. The audience percentage is divided evenly between variants by default, or you can set your own values. A/B tests contain a control variant and support up to 25 additional variants.
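The default even split can be sketched as follows; distributing any remainder so the variant percentages always sum to the audience total is an assumption for illustration, not Airship's documented rounding:

```typescript
// Hypothetical sketch: dividing an audience percentage evenly across variants,
// the default allocation described above. Any remainder goes to the earliest
// variants so the parts always sum to the total.
function evenAllocation(total: number, variantCount: number): number[] {
  const base = Math.floor(total / variantCount);
  const remainder = total % variantCount;
  return Array.from({ length: variantCount }, (_, i) =>
    base + (i < remainder ? 1 : 0)
  );
}
```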

A/B test use cases:

  • Evaluating engagement of new designs — Create an experiment to test the effectiveness of your new home screen design with new users. Display the new design to 50% of new users and the current home screen to the other 50%, set a goal such as a purchase, and track which version of the home screen leads to more conversions. If the old design still outperforms, you can stop the experiment, and if the new one wins, you can create a new rollout from the winning variant.

  • Optimizing loyalty programs — Create an experiment to test different reward structures for your new loyalty program. Create an experiment with two variations of the program: one offering discounts on future orders and another offering free delivery credits, and set a goal to track repeat orders. Reporting data reveals a 20% increase in repeat orders for the delivery credit variant, providing the team with concrete evidence to present to leadership on which program structure performs best.

To prepare for your tests, see About A/B testing.

Goals and reports

Goals (selected events that generate a set of performance reports; also used for measurement in Holdout Experiments and Feature Flag A/B tests) are the events you want to measure in your A/B tests and are required to declare a winner and generate reports. You can select from project-level Goals or create new ones. If you create Goals while setting up the A/B test, you can reuse them for other A/B test Configurations for the same flag. Maximum of 10 Goals per test.

You can create Goals based on Custom or Predefined Events or these Default Events:

  • App open (app_open) — User opened your mobile app.
  • First open (first_open) — User opened your mobile app for the first time.
  • First seen (first_seen) — User opted in to notifications or opened your mobile app for the first time.
  • First opt-in (first_opt_in) — User opted in to a channel for the first time. For Email (commercial), SMS, and Open channels only.
  • Uninstall (uninstall) — User uninstalled your mobile app in response to a push.
  • Web session (web_session) — User generated a web session.

Reporting does not include events attributed to Named Users (customer-provided identifiers used for mapping multiple devices and channels to a specific individual) that are not associated with a platform and Channel ID (an Airship-specific unique identifier used to address a channel instance, such as a smartphone, web browser, or email address).

View reports to see how each variant performs. You can select each Goal to update the reports with data for that Goal only. After enough data is available and time has elapsed, Airship declares a winning variant, which you can then roll out to your entire A/B test audience.

If there is no significant difference between variant performance, you may want to consider your test variables and audience. Even with significant differences, this data can help you understand what your audience responds to.

For more information, see A/B test reports and technical overview.

Create Feature Flags

  1. Go to Experiments, then Feature Flags.
  2. Select Create Feature Flag.
  3. Configure the flag:
    • Display name — The dashboard label for the flag. Enter text.
    • Flag name — The name used for reference by the SDK. Must be unique. Automatically generated based on the display name, but you can change it. The name can contain letters, numbers, and underscores only, and it must start with a letter and end with a letter or number. You cannot change the flag name after making the flag active. Enter text.
    • Description — Describes what the flag controls. Enter text.
    • Properties — Optional. String, number, boolean, or JSON properties that can be used by your app or website code within the Feature Flag. 50 properties maximum. Select Add property, and then enter a name, select a type, and configure a value. Select Add property for additional properties.
  4. Select Save and continue.
  5. Copy the code snippets and docs link for your developer. The code snippet is the same in all Configurations for a flag, so you only need to provide it to your developer once.
  6. Select Close.

Your flag is now saved, and you can create a Configuration at any time.

Add events and create Goals for A/B tests

You must add Custom and Predefined Events to your project before you can select them for Goals. You do not need to add Default Events to your project before selecting them for Goals.

If you want to use project-level Goals in an A/B test Configuration, you must first create them in your project settings. See Goals in Engagement Reports. Otherwise, you can create Goals as you create an A/B test.

Create Configurations

Set up Configurations for a Feature Flag. If you just created a flag, start at step 3. If you just duplicated a Configuration, start at step 4.

A/B test requirements: iOS SDK 19+Android SDK 19+

  1. Go to Experiments, then Feature Flags, and then select View to access a flag’s Configurations.

  2. Select Create Configuration and then select Feature rollout or Feature A/B test.

  3. Select Definition to continue, and then enter for the Configuration:

    • Rollout or A/B test name — The dashboard label for the Configuration. Enter text.
    • Description — Describes the purpose of the Configuration. Enter text.
  4. (For A/B tests only) Select Goals to continue, and then search for and select Goals or create them. Airship cannot declare a winner or generate detailed reports without at least one Goal.

    To create a Goal, enter a Goal name in the search field, then select Create Goal and configure fields:

    • Goal name — Used for identification within the experiment. Enter text.
    • Description — Additional information about the Goal. Enter text.
    • Event — The event you want to measure in the experiment. Search for and select an event. If the event does not have a category assigned, select from the list or select Custom category and enter a category name.

    To move a secondary Goal to primary, select the drag handle icon () for a Goal, then drag and drop it to the first position.

  5. Select Properties or Variants to continue, then configure property values to override the displayed defaults.

    • The Properties step and options do not appear if the flag does not contain properties.
    • Property overrides are optional and apply to the current Configuration only.

    For A/B tests, two variants appear by default: Control variant and Variant A. Select Add variant to add up to 25 variants in addition to the control. You can edit each variant’s name and property values.

    The flagged feature is available to all variants, but you can disable it for users with access to the control variant. Disable Display flagged feature for the control to experiment on the feature’s value by comparing experiences with and without it.

    Select Delete variant to remove a variant. You cannot delete the control or the last remaining additional variant.

  6. Select Audience to continue, then select and configure the audience:

    • All Users — Makes the feature or A/B test available to random users for a percentage of your total app or web audience. For A/B tests, the percentage is divided evenly between variants by default, or you can set your own values.

      Steps: For rollouts, set a percentage. For A/B tests, to override the default allocation, enable Allow uneven allocations and edit the percentage for each variant.

    • Test Users — Makes the feature or A/B test available to users in a Test Group. For A/B tests, all variants are distributed randomly to Test Group users by default, or you can specify which variant to make available to them.

      Steps: Select a test group. For A/B tests, to override the default distribution, select Specific variant only and select the control or another variant.

    • Target Specific Users — Makes the feature or A/B test available to random users for a percentage of your app or website audience that meets specified conditions. The percentage applies to the group of users who meet the conditions. For A/B tests, the percentage is divided evenly between variants by default, or you can set your own values.

      Steps: Select and configure one or more conditions according to the steps in Target Specific Users: In-App Experiences. See Audience above for the list of conditions. Then set a percentage. For A/B tests, to override the default allocation, enable Allow uneven allocations and edit the percentage for each variant.

  7. Select Schedule to continue and then schedule the period when the Configuration will be active. For specific times and dates, also specify the time zone. The UTC conversion displays below the settings and updates as you make changes.

  8. Select Review to continue and then review your Configuration’s settings.

  9. Select Launch to make the Configuration active or Exit to save it as a draft. See the status information in Manage Configurations.

Implement the code

For information about accessing flags in the SDKs, tracking interaction, and error handling, see Feature Flags in our Mobile platform documentation. For Web implementation, contact Support.

You can return to the dashboard to get the code snippets at any time:

  1. Go to Experiments, then Feature Flags.
  2. Select View to access a flag’s Configurations.
  3. Select </>.
  4. Copy the code snippets.
  5. Select Close.

Using Feature Flags with messaging

You can use a Configuration’s audience as the audience for an In-App Automation (messages cached on users’ devices and displayed when users meet certain conditions within your app, such as viewing a particular screen or opening the app a certain number of times) or a Scene (a single- or multi-screen in-app experience cached on users’ devices and displayed when users meet certain conditions in your app or website; Scenes can be presented in fullscreen, modal, or embedded format, as a Story, and can contain survey questions). See the Audience step in each Create guide.

You can also trigger an In-App Automation, Scene, or Sequence (a series of messages initiated by a trigger; Airship sends messages in the series based on your timing settings, and you can set conditions that determine its audience and continuation) when a Feature Flag interaction event occurs. See the Feature Flag Interaction Event trigger in each Triggers guide.

Example campaign strategy

For feature rollout in an app, your developer would implement tracking when users view the screen containing the new feature. Your campaign strategy could look like this:

  1. Inform users of the new feature — Create an In-App Automation or Scene with these settings:

    • Audience: Select Feature Flag Audience and select your flag’s rollout Configuration.
    • Content: Tell your users about the feature, explain its benefits, and encourage use.
    • Behavior: Select the App Update trigger, specify the version of your app that contains the feature and flag code, and enter the number of times users must open your app before they will see your message.

    The feature will be available to the Feature Flag audience after they install the version of your app that contains the feature and flag code, according to the flag’s schedule. The message will display after the number of app opens you specified when setting up the trigger.

  2. Trigger a survey — Create a Scene that requests feedback from Feature Flag Audience members who have seen or interacted with the flagged feature:

    • Audience: Select Feature Flag Audience and select your flag’s rollout Configuration.
    • Content: Add questions or an NPS survey about their experience with the feature.
    • Trigger: Select the Feature Flag Interaction Event trigger (the flag you selected in the Audience step will be preselected for the trigger), select the user group Users with feature access, then enter the number of times the event must occur before the Scene is triggered.

    The Scene will display for members in any of the Configuration audiences for that flag after the number of event occurrences you specified when setting up the trigger.

Maximize adoption by designing a Journey (a continuous user experience of connected Sequences, Scenes, and/or In-App Automations) that combines the above with a Sequence that follows a user’s interaction with the flagged feature and sends a customized message for each key step along the way.

Manage Feature Flags

To view a list of your flags, go to Experiments, then Feature Flags. Your current flags are shown by default. Use the Current/Archived filter to update the list. The default sort order is by last modified, and each row displays:

  • Display and flag names
  • Description
  • Date modified
  • Status — Active (has at least one Active or Scheduled Configuration) or Inactive (has Draft or Ended Configurations only)
  • Number of Configurations

Manage flags by selecting an icon or link in a flag row:

• Edit flag — Opens the flag for editing. You can change a flag's display name, description, and properties. You can also change the flag name if the flag is not yet Active. You cannot edit archived flags. See Editing flag properties below. Select the pencil icon (), make your changes, then select Save and continue.
• Manage Configurations — Opens the list of Configurations for a flag. Select View for a flag's Configurations. See Manage Configurations.
• Duplicate flag and Configurations — Creates a copy of the flag and all its Configurations. The display and flag names are appended with "copy". Configurations have the same names as the originals and are in Draft state. Select the duplicate icon (). You can then select the pencil icon () to edit the flag details, manage Configurations, or create a new Configuration.
• Archive flag — Moves a flag from the Current list to the Archived list. You cannot archive an Active flag, or a flag whose Configuration audience is targeted by an active message. Select the archive icon ().
• Restore/Unarchive flag — Restores an archived flag to your list of Current flags. Select the Archived filter, then select the archive icon () for a flag.
• View and cancel related messages — Opens a list of In-App Automations and Scenes targeting any of the flag's Configuration audiences. Messages are listed by name, type, and status. Selecting a name opens the message to its Review step, where you can check for conflicts between the Configuration and message schedules. You can cancel a single Active message or all Active messages; canceling a message is effectively the same as setting an end date of the current date and time. See also Restart an In-App Automation or Scene in Change message status. Select the link icon () to view the list. To cancel, select Stop for a single message or Stop all. To check for scheduling conflicts, select a message name, then compare the start and end settings in the Schedule section.

Editing flag properties

If a Feature Flag does not have an active or scheduled Configuration, you can edit the flag’s property names, types, and values at any time.

When editing a flag that has active or scheduled Configurations, note the following:

  • If a flag has an active or scheduled rollout or A/B test Configuration, you cannot edit the flag’s property names or types.
  • If a flag has an active or scheduled rollout Configuration, you can edit the flag’s property values at any time. The Configurations will inherit the new property value.
  • If a flag has an active or scheduled A/B test Configuration, you cannot edit the flag’s property values unless all variants have an override value set for that property.

Whenever you change property names or types at the flag level, you must update the code snippet in your app or website for changes to take effect. You do not need to update the code snippet when changing a flag’s default property values only.

Manage Configurations

To manage Configurations, go to Experiments, then Feature Flags, then select View to access a flag’s Configurations.

Active and Scheduled Configurations are listed in priority order, with the following information:

  • Priority number
  • Configuration type — Rollout or A/B test
  • Configuration name
  • Status — Active or Scheduled
  • Description
  • Goal name (for A/B test Configurations only)
  • Audience — “Test group” or percentage
  • Start and end dates and times in UTC

For Ended and Draft Configurations, use the Current/Archived filter to update the list. The default sort order is by last modified, and each row displays:

  • Configuration name
  • Configuration type — Rollout or A/B test
  • Description
  • Date modified
  • Schedule
  • Status — Draft or Ended

Manage Configurations by selecting an icon or link in a row. Select the three dots icon () for more. Options:

  • Set priority order — For flags with multiple Configurations, if a user falls into more than one Configuration's audience, only the one with the highest priority will be active for that user. By default, each new Configuration is set to the lowest priority. Steps: Select the drag handle icon (), then drag and drop to a new position.
  • View reports — Opens reports for Active and Ended Configurations. Steps: Select the report icon (). See View reports for more information.
  • Edit Configuration — Opens Active and Draft Configurations for editing. Steps: Select the pencil icon (), make your changes, then select Update or Launch in the Review step.
  • End A/B test — Opens options for rolling out a variant or ending the test without a rollout. Steps: Select the stop icon (). See End an A/B test.
  • Edit audience allocation — Opens the audience allocation setting for an Active Configuration. You also have the option to end the Configuration; see End/Cancel Configuration in this list. Steps: Select the filter icon, set a new percentage, then select Save. To end the Configuration, select the settings icon, then select End Configuration.
  • Duplicate Configuration — Creates a copy of the Configuration and opens it for editing. The Configuration name is appended with " copy". Steps: Select the duplicate icon (), and then complete the steps for creating a new Configuration.
  • End/Cancel Configuration — Immediately ends an Active Configuration or cancels a Scheduled Configuration. To make it Active or Scheduled again later, edit the Configuration and set a new end date. Steps: Select the pencil icon (), and then select Stop.
  • Archive Configuration — Moves a Configuration from the Current list to the Archived list. You cannot archive an Active or Scheduled Configuration. Steps: Select the archive icon ().
  • Restore/Unarchive Configuration — Moves an Archived Configuration to the list of Current Ended and Draft Configurations. Steps: Select the Archived filter, then select the archive icon () for a Configuration.
  • View and cancel related messages — Opens a list of In-App Automations and Scenes targeting the Configuration's audience. Messages are listed by name, type, and status. Selecting a name opens the message to its Review step, where you can check for conflicts between the Configuration and message schedules. You can cancel a single Active message or all Active messages; canceling a message is effectively the same as setting an end date of the current date and time. See also Restart an In-App Automation or Scene in Change message status. Steps: Select the link icon () to view the list. To cancel, select Stop for a single message or Stop all. To check for scheduling conflicts, select a message name, then see the Schedule section to compare the start and end settings.
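The priority rule described above can be sketched as a small selection function. This is an illustrative sketch, not Airship's implementation; the Configuration data shape and the convention that a lower priority number means higher priority (matching the dashboard's priority-ordered list) are assumptions:

```python
# Illustrative sketch of the priority rule: when a user qualifies for
# several Configurations, only the highest-priority one applies.
# Data shapes are assumptions; audiences are modeled as simple sets.

def active_configuration(configs, user):
    """Return the highest-priority Configuration whose audience includes the user."""
    matching = [c for c in configs if user in c["audience"]]
    if not matching:
        return None  # user is in no Configuration audience
    # Lower priority number = higher priority in the dashboard list.
    return min(matching, key=lambda c: c["priority"])

configs = [
    {"name": "Gold rollout",   "priority": 1, "audience": {"alice"}},
    {"name": "Silver rollout", "priority": 2, "audience": {"alice", "bob"}},
]
print(active_configuration(configs, "alice")["name"])  # → Gold rollout
print(active_configuration(configs, "bob")["name"])    # → Silver rollout
```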

View reports

Flag usage data is available in the dashboard. See Viewing Feature Flag and Scene Rollout usage.

Accounts with Performance Analytics (a customizable marketing intelligence tool that provides access to reports and graphs based on engagement data) can view reports and export data. Contact Airship Sales to add Performance Analytics to your Airship plan.

To access reports:

  1. Go to Experiments, then Feature Flags.
  2. Select View to access a flag’s Configurations.
  3. Select the report icon () for a Configuration, then view reports. See Rollout reports and A/B test reports and technical overview.

Rollout reports

The following reports are available for rollouts:

  • Feature Flag interactions — Counts of users in the Configuration audience with at least one interaction event, and interaction events, per date. The default view is the last 30 days. Use the date selector to define a different time period.
  • Users in Configuration audience with interaction events — A count of users in the Configuration audience with at least one interaction event. Users are counted as Channel IDs (an Airship-specific unique identifier used to address a channel instance, e.g., a smartphone, web browser, or email address).

To download the data, select the down arrow icon, select CSV or TEXT format, and then select Download. For Feature Flag interactions, the download lists user and event counts per date. For Users in Configuration audience with interaction events, the download lists the platform and Named User (a customer-provided identifier used for mapping multiple devices and channels to a specific individual) for each Channel ID.

A/B test reports and technical overview

When viewing reports for A/B tests, limited data appears if a Goal (selected events that generate a set of performance reports; you can also use them for measurement in Holdout Experiments and Feature Flag A/B tests) was not set for the test. A summary displays the status of the experiment. Reports load with data for the test’s primary Goal. If multiple Goals were set, select a different one, and the reports will reload with the data for that Goal. Select the info icon () for more information in each section.

Data represented in A/B test reports:

  • ID — A variant's UUID. It appears in interaction events.
  • Probability to Be Best — The likelihood that a particular variant is the top performer based on your test results. The closer the probability is to 100%, the more confident you can be that the variant is the best choice. A value of 95% or above suggests the variant is very likely to outperform the others. Hover over a variant for additional information.
  • Loss — Expected loss quantifies the risk of making a suboptimal decision. It accounts for both the uncertainty in the A/B test results and the potential missed opportunities if another variant performs better. A higher loss value suggests a greater risk of missing out on potential conversions, while a lower loss value indicates that even if the variant isn't the absolute best, the downside of choosing it is minimal.

    For example, if the variant you select to roll out turns out to not be the best one, you might lose 3% of the conversions by having selected it. So if a variant has a Probability to Be Best of 70% but a small loss, it might be worth rolling out that variant even though its Probability to Be Best is not 95% or higher.
  • Conversion count — The total number of users who completed the Goal event within this variant group during the A/B test.
  • Conversion rate (vs Top) — The percentage of users who completed the Goal event, calculated as (conversion count / sample size) x 100. The comparison to the top-performing variant indicates how much lower the conversion rate is for this variant relative to the best option; the top variant shows a difference of 0%.
  • Sample size — The total number of users who triggered the interaction event in the A/B test for each variant. A larger sample size increases confidence in the results.
  • Posterior Probability — A graph that visualizes the probability distribution of conversion rates for each variant based on the test data, highlighting the range of likely performance outcomes.
    • X-Axis (Conversion Rate): Represents the posterior distribution of possible conversion rates for each variant based on the test data. It shows the range of values a variant's true conversion rate is likely to fall within, rather than just observed conversion rates.
    • Y-Axis (Probability Density): Represents the likelihood of different conversion rates occurring, given the test data. Higher peaks indicate conversion rates that are more probable, while broader distributions suggest greater uncertainty in the estimate.
    • Overlap of Distributions: If two posterior distributions overlap significantly, there is uncertainty about which variant is better. Minimal overlap suggests a clearer winner.
  • Relative Uplift — A graph that shows how each variant's performance compares to the others, highlighting the percentage increase or decrease in conversions relative to the top-performing variant. It provides insight into whether a variant is making a meaningful improvement or if the difference is small.
    • 0% uplift line: Indicates no difference between variants.
    • Distribution Spread: A wide distribution suggests uncertainty in the uplift estimate; a narrow distribution indicates more confidence.
    • Position of Bulk Mass: If most of the distribution lies above zero for a variant, it is likely to outperform the others.
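The Conversion rate (vs Top) arithmetic above is straightforward; a short worked example with made-up numbers:

```python
# Worked example of the "Conversion rate (vs Top)" calculation.
# The counts below are made up for illustration.

def conversion_rate(conversions, sample_size):
    """(conversion count / sample size) x 100, as described in the table."""
    return conversions / sample_size * 100

top = conversion_rate(120, 1000)    # 12.0% for the top-performing variant
other = conversion_rate(80, 1000)   # 8.0% for this variant
print(top - other)  # difference vs the top variant, in percentage points
```

The top variant would display 0% in the "vs Top" column; the other variant would show that it trails by 4 percentage points.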

As you review the report data, you may want to disable an underperforming variant. In the table, select Stop for the variant, and it will no longer be available to its configured audience.

To download table data as a CSV file, select the down arrow icon.

Statistical methods

Airship analyzes Feature Flag A/B test results using Bayesian statistics, measuring confidence in each variant’s success while accounting for uncertainty in the data. Rather than relying on a fixed confidence threshold, Bayesian methods allow for continuously updating the understanding of variant performance as data comes in.

Airship estimates probability distributions for each variant’s performance. These distributions help calculate how likely each variant is to be the best. A Beta(1,1) prior is used to create the distributions, starting with a neutral assumption and letting the data drive the results.

Instead of only comparing variants to a single control, Airship evaluates each variant against all other variants. This gives a more complete picture of which variant performs best in the test.
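The approach described above can be sketched with a small Monte Carlo simulation: each variant's conversion rate gets a Beta(1,1) prior, updated with observed conversions, and samples from the posteriors estimate Probability to Be Best and expected loss. This is a minimal sketch of the general Bayesian technique, not Airship's actual computation; the input counts are made up.

```python
# Minimal Bayesian A/B sketch: Beta(1,1) prior per variant, Monte Carlo
# estimates of Probability to Be Best and expected loss. Illustrative
# only; Airship's actual computation may differ.
import random

random.seed(7)  # fixed seed for a reproducible sketch

def ab_metrics(results, draws=20000):
    """results: {variant: (conversions, sample_size)}."""
    names = list(results)
    wins = {v: 0 for v in names}
    loss = {v: 0.0 for v in names}
    for _ in range(draws):
        # Posterior per variant: Beta(1 + conversions, 1 + failures).
        sample = {v: random.betavariate(1 + conv, 1 + n - conv)
                  for v, (conv, n) in results.items()}
        best = max(sample.values())
        for v in names:
            if sample[v] == best:
                wins[v] += 1               # this draw says v is best
            loss[v] += best - sample[v]    # conversion rate given up vs. the best draw
    p2bb = {v: wins[v] / draws for v in names}
    exp_loss = {v: loss[v] / draws for v in names}
    return p2bb, exp_loss

# Made-up data: variant A converts 120/1000, variant B converts 80/1000.
p2bb, exp_loss = ab_metrics({"A": (120, 1000), "B": (80, 1000)})
print(round(p2bb["A"], 3), round(exp_loss["A"], 4))
```

With these counts, A's Probability to Be Best comes out near certainty and its expected loss near zero, which is the shape of result the report surfaces.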

Benefits of using Bayesian methods:

  • Transparent decision-making — You can see whether a variant is performing better than others and the confidence in that result.
  • More than just statistical significance — Instead of a pass/fail outcome, Bayesian methods give you probability-based confidence in the results.
  • Flexibility — You can decide how much certainty you need before rolling out a winning variant.

Calculating the winning variant

After a minimum runtime of one week and a minimum sample size of 1,000 users per variant, Airship declares the winning variant in the dashboard when Probability to Be Best exceeds 95% and Loss remains less than 5%.

  • A one-week minimum is required to ensure that results are not overly influenced by short-term anomalies such as holidays, weekend effects, or day-of-week traffic fluctuations. It provides a more stable and representative sample of user behavior.

  • A sample size of at least 1,000 users per variant is required to ensure enough data is collected to provide statistically meaningful insights. This threshold helps avoid results that are skewed by randomness or small sample bias, leading to more reliable conclusions.

  • A Probability to Be Best of at least 95% provides strong statistical evidence that the winning variant outperforms all other variants.

  • An expected loss of less than 5% is required to ensure the winning variant is unlikely to perform significantly worse than others, minimizing risk and providing confidence in its effectiveness.
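Taken together, the criteria above amount to a simple decision rule, sketched here. This is an illustrative sketch, not Airship's code; the function name and input shapes are assumptions, and the thresholds are the ones stated above.

```python
# Sketch of the winner criteria described above: minimum one-week
# runtime, at least 1,000 users per variant, Probability to Be Best
# above 95%, and expected loss below 5%. Illustrative only.

def declare_winner(runtime_days, samples, p2bb, expected_loss):
    """samples, p2bb, expected_loss: {variant: value}; loss as a fraction."""
    if runtime_days < 7 or min(samples.values()) < 1000:
        return None  # not enough runtime or data yet
    best = max(p2bb, key=p2bb.get)
    if p2bb[best] > 0.95 and expected_loss[best] < 0.05:
        return best
    return None  # no variant clears both thresholds

winner = declare_winner(10, {"A": 1500, "B": 1400},
                        {"A": 0.97, "B": 0.03},
                        {"A": 0.01, "B": 0.04})
print(winner)  # → A
```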

End an A/B test

You can end an active A/B test at any time.

From the A/B test report:

  1. Select End A/B test.
  2. Select an option to determine what will happen with the variants after ending the test:
    • <Any variant> — Create a rollout Configuration for the variant that will be allocated to 100% of the A/B test audience. All other variants will no longer be available to their configured audiences.
    • Stop all variants — No variants will be available to their configured audiences.
  3. Confirm your selection.

You can also end the experiment by selecting Stop in the list of Configurations or by selecting Roll out for a variant listed in the table.

Once a winner has been determined, you will see an option to create a rollout for it in the report summary and table. Select Roll out winner and confirm your choice. The rollout will be allocated to 100% of the A/B test audience, and all other variants will no longer be available to their configured audiences.

To download the displayed test results in a CSV file, select Download data. Change your Goal selection to download results for that Goal. The following data is listed per Channel ID (an Airship-specific unique identifier used to address a channel instance, e.g., a smartphone, web browser, or email address):