You shipped your app. You picked a price — probably $9.99/month because it felt right, maybe because a competitor charged that, maybe because it was the first number you typed. And now, months later, you have no idea whether you're leaving money on the table, scaring users away, or sitting perfectly in the sweet spot.
This is the gut-feel pricing trap. The fix is systematic A/B testing. This tutorial walks you through every step of running rigorous subscription pricing experiments with RevenueCat, from forming a hypothesis to declaring a winner and iterating toward your revenue ceiling.
Prerequisites: You've integrated the RevenueCat SDK (iOS 3.5.0+, Android 3.2.0+, Flutter 1.2.0+) and your app already displays the current Offering dynamically. If it hardcodes a product ID, fix that first — see Displaying Products.
Plan requirement: RevenueCat Experiments is available on Pro & Enterprise plans. See pricing.
Why Gut-Feel Pricing Fails (and Why A/B Testing Works)
Intuition about pricing is almost always wrong. Here's why:
- Price elasticity varies wildly by audience. Fitness app users in the US may happily pay $14.99/month; productivity app users in Southeast Asia won't touch $4.99/month. You can't know without testing.
- Small changes compound. A 20% lift in Realized LTV per customer — entirely achievable through a price test — translates directly to 20% more revenue from every new user who ever installs your app.
- Conversion rate ≠ revenue. A lower price might convert more users but still generate less total revenue. An A/B test measures the full subscription lifecycle, not just the first tap.
RevenueCat Experiments solve this by randomly assigning new users to different Offerings and tracking every subscription event — trial starts, conversions, renewals, cancellations, refunds — over the full customer lifetime. Source
What RevenueCat Experiments Can Test
Experiments are built on Offerings, not raw prices — which makes them far more powerful than a simple price switcher. You can A/B test: Source
| Test Variable | Example |
|---|---|
| Price point | $9.99/mo vs $12.99/mo vs $7.99/mo |
| Trial length | No trial vs 7-day trial vs 14-day trial |
| Subscription duration | Monthly-first vs Yearly-first layout |
| Product groupings | Monthly + Annual vs Monthly + Annual + Lifetime |
| Paywall design | Layout A vs Layout B (via Offering Metadata, sketched after this table, or RevenueCat Paywalls) |
You can run up to 4 variants simultaneously (Variant A as control, B/C/D as treatments), which makes multivariate testing — e.g., testing three different price points against your current price in one experiment — entirely possible. Source
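For paywall-design variants specifically, each Offering can carry Metadata that your paywall reads at runtime, so copy or layout can differ per variant without an app update. A minimal Swift sketch, assuming a `headline` metadata key you would define yourself in the dashboard (the key name is hypothetical); `getMetadataValue(for:default:)` is the typed accessor recent iOS SDK versions expose:

```swift
import RevenueCat

// Minimal sketch: read per-variant metadata off the current Offering.
// "headline" is a hypothetical key defined under the Offering's
// Metadata in the dashboard.
Purchases.shared.getOfferings { offerings, _ in
    guard let offering = offerings?.current else { return }
    let headline = offering.getMetadataValue(for: "headline", default: "Go Premium")
    print("Paywall headline for this variant: \(headline)")
}
```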
Step 1: Form a Strong Hypothesis
Every experiment should start with a written hypothesis before you touch the dashboard. This keeps you honest about what you're testing and why.
❌ Bad hypothesis:
"Let's try a higher price and see what happens."
This gives you no framework for interpreting results and no success criteria.
✅ Good hypothesis:
"I expect to increase Realized LTV per customer by 15% by raising my monthly price from $9.99 to $12.99. I hypothesize that our power-user cohort is price-inelastic enough that the revenue gain per paying customer will outweigh any drop in initial conversion rate."
✅ Great hypothesis (specific, measurable, causal):
"By adding a 7-day free trial to our existing $9.99/month product, I expect initial conversion rate to increase by 25% — because removing upfront payment risk addresses the #1 reason users cite for not subscribing — resulting in a 15% lift in Realized LTV per customer despite a potential drop in trial-to-paid conversion."
Per RevenueCat's guidance, your hypothesis should state the variable being changed and the expected outcome, ideally quantified. Source
RevenueCat's dashboard lets you save notes directly on each experiment in Markdown format — use this to record your hypothesis before you start. Source
Step 2: Understand the Offering / Package / Product Hierarchy
Before creating anything, get clear on RevenueCat's data model — it's what makes Experiments work without any app code changes. Source
Project
└── Offering (e.g., "default", "price_test_1299")
└── Package (e.g., "$rc_monthly", "$rc_annual")
└── Product (the actual App Store / Play Store SKU)
- Product — The individual SKU you create in App Store Connect or Google Play Console (e.g., `com.myapp.monthly_1299`). The price lives here.
- Package — A cross-platform container that groups equivalent products. A `$rc_monthly` package holds your iOS monthly SKU and your Android monthly SKU.
- Offering — A named collection of packages shown to a user on your paywall. The SDK always returns one Offering as `current` for any given user.
- Default Offering — The Offering returned as `current` when no Experiment or Targeting rule applies. Source
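To make the hierarchy concrete, here's a minimal sketch (assuming the SDK is already configured) that walks every Offering the SDK returns and prints its Packages and the Products underneath:

```swift
import RevenueCat

// Minimal sketch: print the full Offering → Package → Product tree.
Purchases.shared.getOfferings { offerings, _ in
    guard let offerings = offerings else { return }
    for (identifier, offering) in offerings.all {
        print("Offering: \(identifier)")
        for package in offering.availablePackages {
            // Each Package wraps the platform-specific SKU as a StoreProduct.
            let product = package.storeProduct
            print("  Package: \(package.identifier) → Product: \(product.productIdentifier) @ \(product.localizedPriceString)")
        }
    }
}
```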
The key insight: When an Experiment assigns a user to Variant B, RevenueCat updates what current returns for that user server-side. Your app code fetches current the same way it always has — and automatically gets the right Offering for that user's variant, with zero app changes required. Source
Create the Test Offering in the Dashboard
- Go to Product catalog → Offerings → + New
- Give it a descriptive identifier like `price_test_v1_1299` (note: identifiers cannot be changed later)
- Add Packages and attach the new products you created for this test
Step 3: App Store / Play Store Setup Gotchas
iOS: Use a New Subscription Group for Every Price Test
This is the most common mistake developers make. If you add a $12.99/month product to your existing Subscription Group alongside your $9.99/month product, iOS users will see both prices in their subscription management settings — which is confusing and can cause accidental multi-subscriptions.
The rule: Create a new Subscription Group in App Store Connect for each RevenueCat Offering you want to test. This ensures: Source
- Users in the test only ever see the products from their assigned group
- Upgrade/downgrade paths in iOS subscription settings are contained within their group, not leaking across test variants
iOS: New IAPs Need Apple Review
New in-app purchases must be submitted to Apple and approved before they go live in production. They will work in Sandbox and TestFlight while in Ready to Submit status, so you can test early — but plan for review time (a few hours to a few days) before launching your experiment. Source
Android: No Review Required
Play Store products don't go through a review process and can be used for testing immediately after creation. Source
App Store Product Page Caveat
The App Store automatically shows up to 10 of your purchasable products in the In-App Purchases section of your app's product page, so some users browsing the store may see multiple price points. This is unavoidable and affects even the largest apps — it's something to be aware of, not a blocker. Source
Step 4: Configure the Experiment
Navigate to your RevenueCat Project → Experiments → + New.
Choose an Experiment Type
RevenueCat offers preset types that automatically suggest relevant metrics: Source
- Price point — Testing different price levels
- Free trial offer — Comparing trial lengths or trial presence/absence
- Introductory offer — Testing intro pricing strategies
- Subscription duration — Monthly vs yearly prominence
- Subscription ordering — Product ordering and visual hierarchy
- Paywall design — Layout and copy variations
Choosing the right preset surfaces the most relevant default metrics for your test.
Configure Variants (Up to 4)
| Field | Example |
|---|---|
| Variant A (Control) | Your current default Offering |
| Variant B (Treatment) | price_test_1299 Offering |
| Variant C (Treatment, optional) | price_test_799 Offering |
| Variant D (Treatment, optional) | price_test_trial_7day Offering |
You can also enable Placements to serve different Offerings per paywall location (e.g., onboarding vs. settings), letting you test experiences contextually rather than globally. Source
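If you enable Placements, the SDK resolves an Offering per paywall location instead of one global `current`. A minimal Swift sketch; the `"onboarding"` placement identifier is a hypothetical value you'd configure in the dashboard, and `currentOffering(forPlacement:)` requires a recent SDK version with Placements support:

```swift
import RevenueCat

// Minimal sketch: resolve the Offering targeted at a specific placement.
// "onboarding" is a hypothetical placement identifier.
Purchases.shared.getOfferings { offerings, _ in
    if let offering = offerings?.currentOffering(forPlacement: "onboarding") {
        displayPaywall(with: offering)
    }
}
```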
Enrollment Criteria
Only new customers are enrolled in experiments — existing customers are never re-assigned. Source
You can filter enrollment by:
| Dimension | Use Case |
|---|---|
| Country | Test a US-only price increase before rolling globally |
| App | Limit to iOS app only |
| App version | Only enroll users on a version with the new paywall UI |
| Platform | Separate iOS vs Android results |
Allocation percentage: The minimum enrollment is 10% of new customers. Enrolled customers are split evenly across variants, so a 2-variant test at 20% enrollment = 10% per variant, while a 4-variant test at 20% enrollment = 5% per variant — which will take longer to reach significance. Source
Pro tip: Identify users before they reach your paywall (e.g., right after login/signup) to prevent a single person on two devices from being treated as two separate anonymous customers. Source
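A minimal Swift sketch of that identification step; `onSignupComplete` is a hypothetical hook in your own code, and `userId` is whatever stable ID your backend issues at signup:

```swift
import RevenueCat

// Minimal sketch: identify the user before the paywall is shown so two
// devices belonging to the same person resolve to one RevenueCat
// customer, and therefore one experiment variant.
func onSignupComplete(userId: String) {
    Purchases.shared.logIn(userId) { customerInfo, created, error in
        if let error = error {
            print("logIn failed: \(error.localizedDescription)")
        }
        // `created` is true when this is a brand-new RevenueCat customer.
    }
}
```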
Step 5: Your App Code Needs Zero Changes
This is the magic of building your paywall dynamically. As long as you fetch the current Offering from the SDK and display it — without hardcoding product IDs — the Experiment will work automatically.
Here's the correct pattern on each platform:
Swift (iOS)
```swift
import RevenueCat
func loadPaywall() {
Purchases.shared.getOfferings { offerings, error in
if let error = error {
print("Error fetching offerings: \(error.localizedDescription)")
return
}
// Always use .current — RevenueCat automatically serves
// the correct Offering for any active Experiment variant
guard let currentOffering = offerings?.current else {
print("No current offering available")
return
}
// Iterate over available packages to build your paywall UI
for package in currentOffering.availablePackages {
let product = package.storeProduct
print("Package: \(package.packageType), Price: \(product.localizedPriceString)")
}
displayPaywall(with: currentOffering)
}
}
```
Kotlin (Android)
```kotlin
import android.util.Log
import com.revenuecat.purchases.Offerings
import com.revenuecat.purchases.Purchases
import com.revenuecat.purchases.PurchasesError
import com.revenuecat.purchases.getOfferingsWith
fun loadPaywall() {
Purchases.sharedInstance.getOfferingsWith(
onError = { error: PurchasesError ->
Log.e("Paywall", "Error fetching offerings: ${error.message}")
},
onSuccess = { offerings: Offerings ->
// Always use current — the SDK handles Experiment assignment
val currentOffering = offerings.current ?: return@getOfferingsWith
currentOffering.availablePackages.forEach { pkg ->
val product = pkg.product
Log.d("Paywall", "Package: ${pkg.packageType}, Price: ${product.price.formatted}")
}
displayPaywall(currentOffering)
}
)
}
```
Dart (Flutter)
```dart
import 'package:flutter/foundation.dart' show debugPrint;
import 'package:flutter/services.dart' show PlatformException;
import 'package:purchases_flutter/purchases_flutter.dart';
Future<void> loadPaywall() async {
try {
final offerings = await Purchases.getOfferings();
// Always reference current — experiment variant is resolved server-side
final currentOffering = offerings.current;
if (currentOffering == null) {
debugPrint('No current offering available');
return;
}
for (final package in currentOffering.availablePackages) {
debugPrint(
'Package: ${package.packageType}, '
'Price: ${package.storeProduct.priceString}',
);
}
displayPaywall(currentOffering);
} on PlatformException catch (e) {
debugPrint('Error fetching offerings: ${e.message}');
}
}
```
Why this works: RevenueCat resolves the correct Offering for the user's Experiment variant on the server. When your app calls getOfferings(), the SDK returns the pre-fetched result — no extra network call, no extra code path. The assignment is entirely transparent to your app. Source
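To complete the flow, pass the package the user tapped straight into the purchase call; because it came from `offerings.current`, revenue rolls up to the right variant with no extra tagging. A minimal sketch; the `"premium"` entitlement identifier is a hypothetical placeholder:

```swift
import RevenueCat

// Minimal sketch: purchase a package taken from offerings.current.
func purchase(_ package: Package) {
    Purchases.shared.purchase(package: package) { transaction, customerInfo, error, userCancelled in
        if userCancelled { return }
        if let error = error {
            print("Purchase failed: \(error.localizedDescription)")
            return
        }
        // "premium" is a hypothetical entitlement identifier.
        if customerInfo?.entitlements["premium"]?.isActive == true {
            print("Premium unlocked")
        }
    }
}
```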
⚠️ Warning: Do not pre-warm the offerings cache by calling `getOfferings` in your Android `Application.onCreate`. This can trigger unnecessary network requests (e.g., on push notification receipt). The SDK handles pre-fetching automatically. Source
Step 6: Reading the Results
Results start appearing within 24 hours of launch. The Results page has two main views: Source
The North Star: Realized LTV per Customer
Realized LTV per customer = total revenue generated by all customers in a variant ÷ total number of customers in that variant.
This is your primary success metric because it captures the full picture: conversion rate, price, renewals, churn, and refunds — all in one number. Source
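A quick worked example with hypothetical numbers: if a variant enrolled 10,000 customers who generated $38,200 in total revenue, Realized LTV per customer is $38,200 ÷ 10,000 = $3.82, whether that revenue came from many short subscriptions or a few long-lived ones.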
The Key Metrics
| Metric | What it tells you |
|---|---|
| Realized LTV per customer | 🌟 North star — overall revenue impact per user |
| Initial conversion rate | % of users who started any purchase (including trials) |
| Trial conversion rate | % of completed trials that became paid subscriptions |
| Conversion to paying | % of users who made at least one real payment |
| Realized LTV per paying customer | Revenue per user who actually paid — reveals price sensitivity |
The Classic Trade-Off Example
A more prominent yearly subscription offering may show lower initial conversion rate (fewer people tap "subscribe" at $79.99/year than $9.99/month) — but those who do convert generate higher Realized LTV per paying customer. The product breakdown in Results lets you isolate exactly this dynamic. Source
Example results table:
| Variant | Initial Conversion | Realized LTV / Customer | Realized LTV / Paying Customer |
|---|---|---|---|
| A (Control: $9.99/mo) | 6.1% | $3.82 | $62.65 |
| B (Test: $12.99/mo) | 4.9% ↓ | $4.41 ↑ | $90.04 ↑ |
In this scenario, Variant B wins: despite a lower conversion rate, it generates roughly 15% more revenue per customer overall ($4.41 ÷ $3.82 ≈ 1.15).
Statistical Confidence: Chance to Win
RevenueCat calculates Chance to Win for conversion-based metrics — the probability that a treatment variant is genuinely outperforming the control, not just getting lucky. Source
- ≥ 95% Chance to Win is the standard threshold most developers use before declaring a winner
- For high-stakes decisions (e.g., switching from in-app to web purchases), consider waiting for ≥ 99%
- For lower-stakes changes (paywall copy), 90% may be sufficient
Safety net: If your Treatment's Realized LTV is performing meaningfully worse than the Control, RevenueCat will automatically email you so you can stop the experiment early. Source
Step 7: Calling a Winner and Iterating
How to Stop and Roll Out
When you're confident in a winner, stop the experiment and choose your rollout path: Source
- Set as Default Offering — Instantly serves the winning Offering to all new users with no app update
- Create a Targeting Rule — Roll out gradually to a % of your audience, or target specific countries/platforms
- Mark winner for records — Record the decision without immediately changing anything
After stopping, results continue updating for 400 days so you can observe how Realized LTV matures as subscriptions renew and churn over the long term. Source
The Iterative Price-Finding Strategy ($9.99 → $12.99 → $14.99)
The goal isn't to run one test — it's to converge on the optimal price through sequential experiments:
Round 1: $9.99/mo (control) vs $12.99/mo (test) → $12.99 wins on Realized LTV per customer
Round 2: $12.99/mo (new control) vs $14.99/mo (test) vs $10.99/mo (sanity check) → $14.99 wins again — elasticity is lower than expected
Round 3: $14.99/mo (new control) vs $17.99/mo (test) → $17.99 loses — conversion rate drops too much to overcome. Optimal price found: $14.99
This binary search / bracket approach reaches the revenue-maximizing price in 3–4 experiments rather than guessing. Use the Duplicate Experiment feature (three-dot menu on any past experiment) to speed up each successive round. Source
On test duration: There's no time limit on experiments. For monthly vs yearly comparisons, consider running longer — yearly subscriptions produce high short-term revenue but monthly plans may outperform them over multi-year cohorts. Source
Common Mistakes to Avoid
1. Hardcoding product IDs in your paywall
If you display com.myapp.monthly_999 directly instead of rendering offerings.current, Experiments cannot work. Always go through .current. Source
2. Adding test products to your existing iOS Subscription Group
Users will see both price points in their iOS subscription settings, creating confusion and support tickets. Create a new Subscription Group per Offering being tested. Source
3. Calling a winner too early
A 70% Chance to Win sounds good — it's not. At 70%, you're wrong 3 times out of 10. Wait for 95%+ on the metrics that matter most to your business. Source
4. Forgetting that only new customers are enrolled
You can't re-test on users who already saw your control. Don't launch an experiment expecting fast results if your app has low new user volume. Calculate how long it'll take to reach sufficient sample size before starting (see the sketch after this list). Source
5. Not testing new products in sandbox first
Always use RevenueCat's Offering Override feature to force-serve a test Offering to your own account before launching. Verify the purchase flow works end-to-end before real users see it. Source
6. Testing too many things at once without enough traffic
A 4-variant test enrolling 20% of new customers means 5% per variant — which requires 4x the traffic to reach the same statistical confidence as a simple A/B test. If your app is early-stage, start with 2 variants and 100% enrollment. Source
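As a rough aid for mistakes #4 and #6, here's a minimal back-of-the-envelope sketch for estimating run time; the target-per-variant figure is an assumption you'd replace with output from a proper power calculation for your baseline conversion rate:

```swift
// Minimal sketch (not a power analysis): estimate how many weeks an
// experiment needs to run to collect a target number of enrolled
// customers per variant.
func weeksToReachSampleSize(
    newCustomersPerWeek: Double,  // your app's new-customer volume
    enrollmentPercent: Double,    // e.g., 0.20 for 20% enrollment
    variantCount: Int,            // 2–4; enrolled customers split evenly
    targetPerVariant: Double      // hypothetical target, e.g., 2,000
) -> Double {
    let perVariantPerWeek = newCustomersPerWeek * enrollmentPercent / Double(variantCount)
    return targetPerVariant / perVariantPerWeek
}

// Example: 5,000 new customers/week at 20% enrollment across 4 variants
// needs 8 weeks to reach 2,000 customers per variant.
print(weeksToReachSampleSize(newCustomersPerWeek: 5_000,
                             enrollmentPercent: 0.20,
                             variantCount: 4,
                             targetPerVariant: 2_000))
```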
The End-to-End Checklist
- [ ] Written hypothesis with expected outcome saved in experiment Notes
- [ ] New Products created in App Store Connect / Play Console
- [ ] iOS: New Subscription Group created for test products
- [ ] iOS: New IAPs submitted for Apple review (allow 1–3 days)
- [ ] New Offering(s) created in RevenueCat dashboard with correct Packages
- [ ] Test Offering verified via Offering Override on your own device
- [ ] Experiment configured: type preset, variants, enrollment %, criteria
- [ ] App code confirmed to use `offerings.current` (not a hardcoded ID)
- [ ] Experiment started; calendar reminder set to check results in 24h
- [ ] Winner declared at ≥ 95% Chance to Win
- [ ] Winning Offering set as Default (or via Targeting rule)
- [ ] Next hypothesis formed for the follow-up experiment