# A/B Testing
Run statistically rigorous pricing experiments in your Roblox game — no 60K-transaction minimum required.
## Overview
Baseplate A/B Testing lets you run gamepass and developer-product pricing experiments directly inside any Roblox experience. Roblox's native A/B testing tools require a minimum of 60,000 transactions before results are available — far out of reach for most games. Baseplate uses a Bayesian Beta-Binomial model that produces actionable results with as few as 30 participants per variant.
## How it works
- Define experiments in your server Script with variant names and values (e.g. prices).
- When a player joins, call `GetVariant` — the module assigns a variant deterministically using a hash of the player's UserId and the experiment name. The same player always sees the same price.
- When the player purchases, call `TrackPurchase`. Session-end events fire automatically.
- Events are batched in memory and flushed to the Baseplate API every 10 seconds (or at 20 events). Failed batches retry up to 3 times — A/B testing never crashes your game.
- Visit the Baseplate Dashboard to see real-time conversion rates, 95% credible intervals, and the probability each variant is the best.
## Architecture

```text
[Roblox Game Server]
        │
        ▼
BaseplateAB.luau ──POST /api/events──▶ Cloudflare Worker (ab.baseplate-site.pages.dev)
        │
        ▼
Baseplate Dashboard
(Bayesian analysis in real time)
```
## Installation

### 1. Add the module to your game

Copy `BaseplateAB.luau` into your game — for example `ServerScriptService/Modules/BaseplateAB`. It is a single-file, zero-dependency module.
### 2. Get your API key

- Log in to the Baseplate Dashboard at https://dashboard.baseplate-site.pages.dev.
- Navigate to Settings → API Keys.
- Click Create Key. Copy the key (starts with `bp_live_`) — it is shown only once.
### 3. Enable HttpService

In Roblox Studio: Game Settings → Security → Allow HTTP Requests → ON.

The module needs HttpService to send events to the Baseplate API. If HttpService is disabled, the module logs a warning and silently no-ops — your game still works, but no data is collected.
## Quick Start

A minimal server Script that tests two prices for a sword pack:
```lua
-- ServerScriptService/ABTestScript (Script)
local BaseplateAB = require(game.ServerScriptService.Modules.BaseplateAB)
local Players = game:GetService("Players")

-- 1. Initialize with one experiment and two price variants
BaseplateAB:Init({
    apiKey = "bp_live_abc123", -- replace with your real key
    apiUrl = "https://baseplate-ab.baseplate-rblx.workers.dev", -- Baseplate API endpoint
    experiments = {
        {
            name = "sword_pack_price",
            variants = {
                { name = "low", value = 99 },
                { name = "high", value = 199 },
            },
        },
    },
})

-- 2. When a player joins, get their variant and show the price
Players.PlayerAdded:Connect(function(player)
    local variant = BaseplateAB:GetVariant("sword_pack_price", player)
    print(player.Name .. " sees sword pack at " .. tostring(variant.value) .. " Robux")
    -- Use variant.value to set the price in your shop UI
end)

-- 3. When the player purchases, track the conversion
-- (hook this into your MarketplaceService callback)
local function onSwordPackPurchased(player)
    BaseplateAB:TrackPurchase("sword_pack_price", player)
end

-- Session-end events fire automatically on PlayerRemoving — no extra code needed.
```
## Luau API Reference
### BaseplateAB:Init(config: InitConfig)

Initialize the module. Call exactly once from a server Script at startup. Calling `Init` a second time throws an error.
#### InitConfig

| Field | Type | Description |
|---|---|---|
| `apiKey` | `string` | Your Baseplate API key (starts with `bp_live_`). |
| `apiUrl` | `string` | Baseplate API endpoint, e.g. `"https://baseplate-ab.baseplate-rblx.workers.dev"`. Trailing slash is stripped automatically. |
| `experiments` | `{ ExperimentConfig }` | One or more experiment definitions. At least one is required. |
#### ExperimentConfig

| Field | Type | Description |
|---|---|---|
| `name` | `string` | Unique experiment identifier (e.g. `"vip_pass_price"`). |
| `variants` | `{ Variant }` | Two or more variants. Each has a `name` (string) and `value` (number). |
#### Variant

| Field | Type | Description |
|---|---|---|
| `name` | `string` | Human-readable label (e.g. `"low"`, `"control"`). |
| `value` | `number` | The price or numeric value for this variant. |
#### Example

```lua
BaseplateAB:Init({
    apiKey = "bp_live_abc123",
    apiUrl = "https://baseplate-ab.baseplate-rblx.workers.dev",
    experiments = {
        {
            name = "sword_pack_price",
            variants = {
                { name = "low", value = 99 },
                { name = "mid", value = 149 },
                { name = "high", value = 199 },
            },
        },
        {
            name = "vip_pass_price",
            variants = {
                { name = "standard", value = 299 },
                { name = "premium", value = 499 },
            },
        },
    },
})
```
### BaseplateAB:GetVariant(experimentName: string, player: Player) → Variant

Returns the `Variant` assigned to the given player for the named experiment. Assignment is deterministic — the same UserId + experimentName always produces the same variant. No randomness is involved.

On the first call for a player, an assignment event is queued to the API. Subsequent calls return the cached result without re-sending.
#### Parameters

| Param | Type | Description |
|---|---|---|
| `experimentName` | `string` | Must match a `name` in your `experiments` config. |
| `player` | `Player` | The Roblox Player instance. |
#### Returns

`Variant` — a table with `name: string` and `value: number`.
#### Example

```lua
local variant = BaseplateAB:GetVariant("sword_pack_price", player)
print(variant.name) -- "low", "mid", or "high"
print(variant.value) -- 99, 149, or 199
```
**Fallback behaviour:** if `experimentName` doesn't match any configured experiment, the module logs a warning and returns the first variant of the first experiment — your game never breaks.
### BaseplateAB:TrackPurchase(experimentName: string, player: Player)

Records a purchase conversion event for the player's assigned variant. Call this when the player successfully buys the product being tested.

**Important:** `GetVariant` must have been called for this player and experiment first — otherwise the module logs a warning and no event is sent.
#### Parameters

| Param | Type | Description |
|---|---|---|
| `experimentName` | `string` | The experiment to record the purchase for. |
| `player` | `Player` | The Roblox Player instance who made the purchase. |
#### Example

```lua
local MarketplaceService = game:GetService("MarketplaceService")

MarketplaceService.PromptGamePassPurchaseFinished:Connect(function(player, gamePassId, wasPurchased)
    -- SWORD_PACK_GAMEPASS_ID is your gamepass ID constant, defined elsewhere
    if wasPurchased and gamePassId == SWORD_PACK_GAMEPASS_ID then
        BaseplateAB:TrackPurchase("sword_pack_price", player)
    end
end)
```
### BaseplateAB:TrackSessionEnd(player: Player)

Fires a `session_end` event for every experiment the player is enrolled in. This is handled automatically by the module on `Players.PlayerRemoving`, so you typically don't need to call it yourself. Use it explicitly only if you need to mark a session boundary at a point other than player removal (e.g. the player leaves a specific game area that is being tested).
#### Parameters

| Param | Type | Description |
|---|---|---|
| `player` | `Player` | The Roblox Player instance. |
#### Example

```lua
-- Usually not needed — the module handles this automatically.
-- Explicit call for a custom session boundary:
BaseplateAB:TrackSessionEnd(player)
```
## Event payload format

Events are sent as JSON to `POST {apiUrl}/api/events`:
```json
{
  "apiKey": "bp_live_abc123",
  "events": [
    {
      "experimentName": "sword_pack_price",
      "variantName": "mid",
      "playerId": "12345678",
      "eventType": "assignment",
      "timestamp": 1712700000
    },
    {
      "experimentName": "sword_pack_price",
      "variantName": "mid",
      "playerId": "12345678",
      "eventType": "purchase",
      "timestamp": 1712700045
    }
  ]
}
```
Event types: `assignment`, `purchase`, `session_end`.
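The payload above can be assembled in a few lines. This is a Python sketch of the wire format only — the real sender is the Luau module, and the helper name `make_event` is made up for illustration:

```python
import json
import time

# Build one event in the shape shown above.
def make_event(experiment, variant, player_id, event_type):
    return {
        "experimentName": experiment,
        "variantName": variant,
        "playerId": str(player_id),     # player IDs are sent as strings
        "eventType": event_type,        # "assignment", "purchase", or "session_end"
        "timestamp": int(time.time()),  # Unix seconds
    }

payload = {
    "apiKey": "bp_live_abc123",
    "events": [
        make_event("sword_pack_price", "mid", 12345678, "assignment"),
        make_event("sword_pack_price", "mid", 12345678, "purchase"),
    ],
}

body = json.dumps(payload)  # this JSON string is the POST body
print(len(payload["events"]))  # 2
```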
## Dashboard Guide

### Logging in

Navigate to dashboard.baseplate-site.pages.dev and sign in with your Baseplate account. Your API key is tied to your account — all experiments created with that key appear automatically.

### Creating an experiment
- Click New Experiment.
- Enter a name that matches the `name` field in your Luau config (e.g. `sword_pack_price`).
- Add variants with the same names and values you defined in code.
- Click Create. The experiment starts collecting data as soon as your game server sends its first events.
Alternatively, experiments can be created via the REST API: `POST /api/experiments` with a Bearer token.
### Reading results

Click an experiment to see its results page. You'll find:
| Column | Meaning |
|---|---|
| Variant | The variant name (e.g. "low", "high"). |
| Participants | Number of unique players assigned to this variant. |
| Conversions | Number of purchase events recorded. |
| Conversion Rate | Conversions ÷ Participants. |
| 95% Credible Interval | The range within which the true conversion rate lies with 95% probability. |
| P(Best) | The probability this variant has the highest true conversion rate of all variants. |
### Interpreting confidence indicators
- P(Best) > 95% — Strong evidence this variant is the best. Safe to ship.
- P(Best) 80–95% — Good signal, but consider running longer for more confidence.
- P(Best) < 80% — Not enough evidence yet. Keep collecting data.
- "Insufficient data" flag — Shown when a variant has fewer than 30 participants. Results are unreliable at this stage.
### Sample API response

Results are also available via `GET /api/experiments/{id}/results`:
```json
{
  "experiment": {
    "id": "exp_01",
    "name": "sword_pack_price",
    "status": "running"
  },
  "variants": [
    {
      "variantName": "low",
      "assignments": 512,
      "conversions": 45,
      "conversionRate": 0.0879,
      "credibleInterval": [0.065, 0.114],
      "probabilityBest": 0.82,
      "insufficientData": false
    },
    {
      "variantName": "high",
      "assignments": 498,
      "conversions": 31,
      "conversionRate": 0.0622,
      "credibleInterval": [0.043, 0.086],
      "probabilityBest": 0.18,
      "insufficientData": false
    }
  ],
  "totalParticipants": 1010,
  "totalConversions": 76
}
```
## Statistical Methodology

### Why Bayesian?
Traditional (frequentist) A/B testing answers: "If there were no real difference, how likely is the data I observed?" — the p-value. This is confusing and requires large sample sizes. Bayesian analysis answers the question you actually care about: "Given the data I've observed, what is the probability that variant A is better than variant B?"
The Bayesian approach is especially well-suited for Roblox games because:
- Results are interpretable at any sample size — you don't need to wait for a fixed N.
- It produces direct probability statements ("82% chance this variant is the best") instead of indirect p-values.
- It degrades gracefully — with little data you get wide intervals and low confidence; with lots of data you get tight intervals and high confidence.
### The Beta-Binomial model
Each variant's unknown true conversion rate θ is modelled as a Beta distribution. The Beta distribution is the natural choice because it models probabilities (values between 0 and 1) and is the conjugate prior for Binomial data — meaning the math stays clean.
We start with a uniform prior: Beta(1, 1), which says "before seeing any data, all conversion rates are equally likely." After observing data:

```text
α = 1 + conversions
β = 1 + (assignments − conversions)
```

This gives the posterior distribution — our updated belief about the true conversion rate after seeing the data.
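To make the update concrete, here is the same arithmetic in Python (a sketch for illustration only — Baseplate's analysis runs on its backend, not in your game), using the "low" variant counts from the sample API response:

```python
# Posterior update for a Beta(1, 1) prior — illustrative sketch.
# Counts taken from the "low" variant in the sample API response.
conversions = 45
assignments = 512

alpha = 1 + conversions                 # α = 1 + conversions
beta = 1 + (assignments - conversions)  # β = 1 + (assignments − conversions)

# The posterior mean α / (α + β) is the updated estimate of the conversion rate.
posterior_mean = alpha / (alpha + beta)
print(alpha, beta, round(posterior_mean, 4))  # 46 468 0.0895
```

Note how the posterior mean (0.0895) sits close to the raw conversion rate (45 / 512 ≈ 0.0879) but is pulled slightly toward the uniform prior — the pull shrinks as data accumulates.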
### What "Probability Best" means
We draw 10,000 Monte Carlo samples from each variant's posterior Beta distribution. For each draw, we check which variant has the highest sampled conversion rate. The fraction of draws where a variant wins is its probability of being best (P(Best)).
For example, if variant "low" wins in 8,200 out of 10,000 draws, its P(Best) is 0.82 (82%).
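The Monte Carlo procedure can be sketched in a few lines of Python. This is an illustration of the technique, not Baseplate's dashboard code, and the counts below are hypothetical:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical posteriors: Beta(1 + conversions, 1 + non-conversions) per variant.
variants = {
    "low":  {"alpha": 1 + 40, "beta": 1 + (400 - 40)},
    "high": {"alpha": 1 + 25, "beta": 1 + (400 - 25)},
}

draws = 10_000
wins = {name: 0 for name in variants}
for _ in range(draws):
    # one sampled conversion rate per variant, drawn from its posterior
    sampled = {name: random.betavariate(v["alpha"], v["beta"])
               for name, v in variants.items()}
    wins[max(sampled, key=sampled.get)] += 1  # which variant won this draw?

p_best = {name: count / draws for name, count in wins.items()}
print(p_best)  # "low" wins the large majority of draws
```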
### What credible intervals are
A 95% credible interval is the range [lo, hi] such that there is a 95% probability the true conversion rate lies within it. This is computed by taking the 2.5th and 97.5th percentiles of the Monte Carlo samples.
Unlike frequentist confidence intervals, a Bayesian credible interval means exactly what it sounds like: "There is a 95% probability the true value is in this range."
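As a sketch of the percentile computation (Python for illustration, reusing the "low" variant counts from the sample API response):

```python
import random

random.seed(0)  # fixed seed for reproducibility

# Posterior for the "low" variant: Beta(1 + 45, 1 + (512 − 45)).
alpha, beta = 46, 468
samples = sorted(random.betavariate(alpha, beta) for _ in range(10_000))

# 95% credible interval = 2.5th and 97.5th percentiles of the sorted draws.
lo = samples[int(0.025 * len(samples))]
hi = samples[int(0.975 * len(samples))]
print(round(lo, 3), round(hi, 3))  # roughly [0.065, 0.114]
```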
### Minimum sample size guidance

- 30 participants per variant — the minimum before results are shown. Below this threshold, variants are flagged as `insufficientData: true` and results should not be acted on.
- 100–200 per variant — enough for a rough signal on which variant is leading.
- 500+ per variant — credible intervals tighten significantly. If P(Best) > 95% at this point, you can confidently ship the winner.
- No hard upper limit — the Bayesian model keeps updating. There's no "peeking problem" like in frequentist testing: you can check results at any time without inflating error rates.
## FAQ

### Will this slow my game?

No. Variant assignment is a pure math operation (a hash — no HTTP call). Events are queued in memory and flushed in a background coroutine every 10 seconds or when the queue hits 20 events. All HTTP calls use `task.spawn`, so they never block the game thread. If the API is unreachable, events are retried up to 3 times and then silently dropped — your game is never affected.
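The batch-and-flush pattern described above can be sketched as follows (Python for illustration; the real module is Luau, and the names `flush`, `track`, and `send_batch` are hypothetical — the real timer-based flush every 10 seconds is omitted here):

```python
MAX_BATCH = 20    # size-triggered flush threshold
MAX_RETRIES = 3   # failed batches retry up to 3 times, then are dropped

queue = []
sent_batches = []  # stands in for batches delivered to the API

def send_batch(events):
    """Stand-in for the HTTP POST; returns True on success."""
    sent_batches.append(list(events))
    return True

def flush():
    if not queue:
        return
    batch = list(queue)
    queue.clear()
    for _ in range(MAX_RETRIES):
        if send_batch(batch):
            return
    # after MAX_RETRIES failures the batch is silently dropped —
    # the game is never affected by delivery problems

def track(event):
    queue.append(event)
    if len(queue) >= MAX_BATCH:  # size-triggered flush
        flush()

# 25 events: one full batch of 20 is flushed, 5 remain queued
for i in range(25):
    track({"eventType": "assignment", "playerId": str(i)})

print(len(sent_batches), len(queue))  # 1 5
```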
### What if my server can't reach the API?
The module detects if HttpService is disabled or if requests fail and logs a warning. Your game continues running normally. Variant assignment still works (it's local math), but no events are collected. Once connectivity is restored, new events will be sent. Previously dropped events are lost.
### How long should I run an experiment?
There's no fixed duration — check the dashboard for P(Best). As a rule of thumb:
- Wait until each variant has at least 30 participants (the "insufficient data" flag disappears).
- For a confident decision, aim for P(Best) > 95% on the winning variant.
- For most Roblox games, this means a few days to two weeks depending on traffic.
- There's no peeking penalty — you can check results at any time without invalidating the experiment.
### Can players see different prices?
Yes — that's the point. Each player is deterministically assigned to one variant, so they always see the same price. Different players may see different prices. This is standard practice for pricing experiments and is how Roblox's own A/B testing works.
**Tip:** Set the variant values in your server Script and communicate prices to the client via RemoteEvents or UI data. Never trust the client to pick a price.
### Does the same player always get the same variant?

Yes. Assignment uses a deterministic hash: `(UserId × 31 + hash(experimentName)) % variantCount`. There is no randomness — a player who re-joins, switches servers, or comes back days later always sees the same variant for the same experiment.
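The property is easy to demonstrate (Python for illustration; the module itself is Luau, and the concrete string-hash `name_hash` below is made up — only the overall shape matches the formula above):

```python
# Deterministic variant assignment — illustrative sketch.
def name_hash(s: str) -> int:
    """A hypothetical stand-in for hash(experimentName)."""
    h = 0
    for ch in s:
        h = (h * 31 + ord(ch)) % 2**31
    return h

def assign(user_id: int, experiment_name: str, variant_count: int) -> int:
    # (UserId × 31 + hash(experimentName)) % variantCount
    return (user_id * 31 + name_hash(experiment_name)) % variant_count

# Re-joining, switching servers, or coming back days later cannot change
# the result — assignment is a pure function of its inputs.
a = assign(12345678, "sword_pack_price", 2)
b = assign(12345678, "sword_pack_price", 2)
print(a == b)  # True
```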
### Can I run multiple experiments at once?

Yes. Pass multiple experiment configs to `Init`. Each experiment assigns variants independently — a player can be in "low" for one experiment and "premium" for another.
### What event types are tracked?

Three types:

- `assignment` — sent automatically when `GetVariant` is first called for a player.
- `purchase` — sent when you call `TrackPurchase`.
- `session_end` — sent automatically on `PlayerRemoving`, or manually via `TrackSessionEnd`.