A/B Testing
Introduction
A/B tests are a fundamental tool for optimizing game features by comparing different variants and analyzing their impact on player behavior and game performance. By running A/B tests, you can make data-driven decisions to enhance monetization, engagement, and overall player satisfaction.
What is an A/B Test?
An A/B test involves dividing users into different groups, each exposed to a distinct variant of a feature. By comparing the performance metrics across these groups, you can determine which variant performs better and make informed decisions based on the results.
A/B Test Analysis
We analyze A/B test results in a self-hosted instance of GrowthBook, while Firebase handles feature flagging: it launches A/B tests and defines the conditions for user assignment. Although Firebase's built-in A/B testing functionality is limited, we combine it with our comprehensive analytics data to build monetization, ad, and engagement metrics and validate the results effectively.
Experiment Naming Convention
To ensure consistency and facilitate easier analysis across different games, use the following naming convention for your experiments:
{game_code}_{platform}_{test.name}
- game_code: Three-letter game code (e.g., `FNI`, `SBP`, `FMT`). If the game doesn't have a three-letter code, use the game name: `game.that.doesnt.have.code`.
- platform: `ios` for iOS, `and` for Android, or `all` for tests run on both platforms simultaneously (start and end dates should be equal).
- test.name: Any descriptive name for the test. Use dots (`.`) instead of spaces for multiple words.
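As a concrete illustration, the experiment name used in the example later in this document breaks down into the three parts above:

```
{game_code}_{platform}_{test.name}
abc_and_new.inter.timer
 │    │    └─ test.name: "new.inter.timer"
 │    └─ platform: "and" (Android)
 └─ game_code: "abc"
```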
Available Function
AbCohort
The `AbCohort` function assigns users to a test cohort and adds the `AbCohort` parameter to all subsequent events. This function should be called every session when users log in.
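A minimal sketch of the per-session call, assuming a Unity/C# integration; the `OnUserLoggedIn` hook is illustrative, and only `LionAnalytics.AbCohort` comes from this document:

```csharp
// Illustrative login hook — re-assign the cohort every session so the
// AbCohort parameter is attached to all events sent afterwards.
void OnUserLoggedIn()
{
    LionAnalytics.AbCohort("abc_and_new.inter.timer", "baseline");
}
```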
Critical Parameters
- `experiment_name` (required): The name of the experiment (e.g., `abc_and_new.inter.timer`).
- `experiment_cohort` (required): The name of the variant or cohort within the experiment (e.g., `aggressive`, `baseline`, `passive`).
Implementation
- Function Usage: Call the `AbCohort` function to assign users to a test cohort. This function will add the `AbCohort` parameter to all subsequent events.
- Global Parameter: Once set, the `AbCohort` parameter is included in the payload of every subsequent event.
- Clearing Parameters: Developers must clear the `ab_cohort` parameter with `ClearAbCohort` when the test is finished to prevent stale data from affecting subsequent analyses.
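Putting these points together, a hedged sketch of the experiment lifecycle; the exact `ClearAbCohort` signature and the `isExperimentFinished`/`assignedCohort` inputs are assumptions for illustration, so verify them against your SDK version:

```csharp
// Sketch only — assumes a parameterless ClearAbCohort; check the SDK.
void ApplyExperimentState(bool isExperimentFinished, string assignedCohort)
{
    if (isExperimentFinished)
    {
        // Remove the ab_cohort parameter so stale cohort data
        // doesn't leak into post-experiment analyses.
        LionAnalytics.ClearAbCohort();
    }
    else
    {
        // Re-attach the cohort every session, as required above.
        LionAnalytics.AbCohort("abc_and_new.inter.timer", assignedCohort);
    }
}
```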
Example
To illustrate the proper setup and naming convention for an A/B test, consider the following example:
Experiment Setup
An anonymous game (game code `ABC`) runs an experiment to evaluate the impact of different interstitial ad timings. The experiment is named `abc_and_new.inter.timer`, with the following variants and configurations:
| Variants | inter_between_time | inter_start_time |
|---|---|---|
| baseline | 90 | 90 |
| aggressive | 60 | 60 |
| passive | 120 | 120 |
Each variant receives different values for the experiment's `inter_between_time` and `inter_start_time` remote configs.
✅ Correct Function Usage
Call the `AbCohort` function with the variant name, based on the user's allocation:
```csharp
LionAnalytics.AbCohort("abc_and_new.inter.timer", "baseline")
LionAnalytics.AbCohort("abc_and_new.inter.timer", "aggressive")
LionAnalytics.AbCohort("abc_and_new.inter.timer", "passive")
```
❌ Incorrect Function Usage
Avoid calling the function with individual configuration values:
```csharp
LionAnalytics.AbCohort("abc_and_new.inter.timer", "60")
LionAnalytics.AbCohort("abc_and_new.inter.timer", "90")
LionAnalytics.AbCohort("abc_and_new.inter.timer", "120")
```