You shipped, users signed up, and some converted. But you haven't changed your pricing since launch — you picked a number that felt reasonable and moved on. If that sounds familiar, you're almost certainly leaving money on the table.

Pricing isn't a launch decision. It's a continuous lever. This tutorial walks you through the specific RevenueCat tools and charts that help you diagnose pricing problems, run scientifically valid tests, and personalize offers for your best (and most at-risk) segments — without needing a growth team or an MBA.


The Four Pricing Levers That Actually Move the Needle

Before touching any dashboard, understand what you can change. Every pricing experiment you run should isolate one of these variables:

| Lever | What it controls | Primary metric to watch |
| --- | --- | --- |
| Price point | Raw dollar amount | Realized LTV per customer |
| Trial length | 0, 3, 7, 14, or 30 days | Trial Conversion Rate |
| Annual vs. monthly mix | Which plan is defaulted/promoted | MRR, Realized LTV per paying customer |
| Introductory offer | Discounted first-period pricing | Initial Conversion Rate |

The key insight is that these levers interact: a lower price may increase Initial Conversion but decrease Realized LTV per paying customer. You need to watch both ends of the funnel simultaneously — which is exactly what RevenueCat's charts are designed to help you do.
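
To make the interaction concrete, here's a back-of-the-envelope comparison in Swift with entirely hypothetical numbers: the cheaper plan converts better, but the winner is decided by revenue per new customer, not by conversion alone.

// Hypothetical numbers; replace with data from your own charts.
struct PricingScenario {
    let monthlyPrice: Double
    let initialConversionRate: Double  // share of new customers who end up subscribing
    let avgPaidMonths: Double          // average months retained before churn
}

let current = PricingScenario(monthlyPrice: 9.99, initialConversionRate: 0.03, avgPaidMonths: 5)
let cheaper = PricingScenario(monthlyPrice: 7.99, initialConversionRate: 0.04, avgPaidMonths: 5)

for scenario in [current, cheaper] {
    // Realized LTV per paying customer ≈ price × months retained
    let ltvPerPayer = scenario.monthlyPrice * scenario.avgPaidMonths
    // Folding in conversion gives revenue per new customer, the number that decides the winner
    let revenuePerNewCustomer = ltvPerPayer * scenario.initialConversionRate
    print("$\(scenario.monthlyPrice)/mo → $\(revenuePerNewCustomer) per new customer")
}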


Step 1: Diagnose Before You Guess — Reading the Three Core Charts

Before running any experiment, spend 20 minutes in your RevenueCat dashboard with these three charts. They tell you which lever to pull first.

Chart 1: Initial Conversion Rate

What it measures: The percentage of new customers who start any subscription or trial within a defined conversion window (e.g., 7 days of first opening your app). Formula: Initial Conversions / New Customers. Source

How to read it: This chart is cohorted by a customer's first-seen date, so you're comparing apples to apples across time periods. A declining trend here usually means your paywall copy, price point, or trial offer isn't resonating with incoming users. Source

What to do with it:
  • Segment by Country, App Version, or Offering to isolate where conversion drops off.
  • If conversion is low but your store reviews are good, your pricing is likely the issue, not your product — the value proposition isn't matching the ask.
  • A low initial conversion + low trial start rate = test adding or extending a free trial.

Chart 2: Trial Conversion Rate

What it measures: Of trials started in a period, how many converted to paid. Formula: Conversions / Trial Starts. Source

How to read it: Cohorted by trial start date. Critically: recent periods will show as incomplete because those trials haven't had a chance to convert yet. Don't panic at the end of your date range — wait for the cohort to mature. Source

What to do with it:
  • Segment by Product Duration to compare trial conversion on monthly vs. annual. Annual often converts at a higher rate because the commitment-per-day is lower.
  • A high trial start rate + low trial conversion = users aren't experiencing enough value during the trial. This is a product problem, but you can partially address it by shortening the trial (creates urgency) or adjusting the post-trial price.

Chart 3: Realized LTV per Customer

What it measures: Total actual revenue generated (minus refunds) by a cohort of new customers, divided by the number of customers in that cohort. This is your ARPU, calculated on real revenue, not projected. Source

How to read it: Use the Customer Lifetime selector to define the measurement window (e.g., 30 days, 90 days). Comparing cohorts at the same Customer Lifetime setting is critical — a cohort with a 90-day window will always look better than a 30-day window, not because it's performing better, but because it's had more time. Source

The diagnosis combo:
  • Low Initial Conversion → pricing/paywall problem.
  • Healthy Initial Conversion + low Realized LTV → churn problem.
  • Low Trial Conversion + healthy Realized LTV per paying customer → your paid product is great, but your trial experience needs work.
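
To tie the three formulas together, here's a quick sanity-check calculation in Swift with made-up cohort numbers (treating trial converters as the paying cohort for simplicity):

// Hypothetical cohort counts; swap in the numbers from your own dashboard.
let newCustomers = 10_000.0
let initialConversions = 300.0      // trials or subscriptions started within the conversion window
let trialStarts = 280.0
let trialConversions = 160.0
let realizedRevenue = 2_400.0       // actual revenue net of refunds, in dollars

let initialConversionRate = initialConversions / newCustomers           // 0.03 → 3.0%
let trialConversionRate = trialConversions / trialStarts                // ~0.57 → ~57%
let realizedLTVPerCustomer = realizedRevenue / newCustomers             // $0.24
let realizedLTVPerPayingCustomer = realizedRevenue / trialConversions   // $15.00

print(initialConversionRate, trialConversionRate, realizedLTVPerCustomer, realizedLTVPerPayingCustomer)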


Step 2: Form a Hypothesis, Then Run an Experiment

Once you know which lever to pull, don't just change production pricing — test it. RevenueCat Experiments lets you A/B test (or run a multi-variant test with up to 4 variants) any aspect of your Offerings: price, trial length, product mix, subscription duration. Source

Writing a Good Hypothesis

Don't start an experiment without one. A good hypothesis example:

"By adding a 7-day free trial to our annual plan (currently no trial), I expect to increase Initial Conversion Rate by 15%, resulting in higher Realized LTV per customer at a 30-day lifetime, because the trial removes the risk of a large upfront annual commitment."

The specificity matters: it forces you to decide which metric is your primary signal before you see the data, avoiding post-hoc cherry-picking. Source

Prerequisite: Dynamic Offerings in Your App

Experiments work server-side — no app update required if your paywall already fetches and displays the current Offering dynamically. Source

Here's the critical pattern in Swift (iOS):

// Fetch offerings and always display the `current` one.
// RevenueCat updates `current` server-side to match
// whichever experiment variant a user is enrolled in.
Purchases.shared.getOfferings { offerings, error in
    guard let offering = offerings?.current else { return }

    // Build your paywall from `offering.availablePackages`
    // Do NOT hardcode product IDs — render from the offering dynamically
    for package in offering.availablePackages {
        let product = package.storeProduct
        print("\(product.localizedTitle): \(product.localizedPriceString)")
    }
}

And the equivalent in Kotlin (Android):

Purchases.sharedInstance.getOfferingsWith(
    onError = { error -> /* handle error */ },
    onSuccess = { offerings ->
        val current = offerings.current ?: return@getOfferingsWith
        // Render packages dynamically from `current`
        current.availablePackages.forEach { pkg ->
            val product = pkg.product
            Log.d("Paywall", "${product.title}: ${product.price.formatted}")
        }
    }
)

⚠️ Never hardcode product IDs in your paywall UI. If you do, Experiments can't serve different variants to different users. Source

Setting Up the Experiment

  1. Navigate to Experiments in your Project sidebar in the RevenueCat dashboard.
  2. Create a new Offering for each variant. For App Store apps, create each tested price point in its own Subscription Group in App Store Connect — this prevents users from seeing the other variants' prices in their subscription management settings. Source
  3. Select an experiment type preset (e.g., "Price point", "Free trial offer") to get suggested default metrics pre-configured. Source
  4. Optionally scope enrollment to specific countries, app versions, or a percentage of new users (minimum 10%). Source

⚠️ Only new customers are enrolled in experiments. Existing users will continue to see your current/default Offering. Source

Multi-variant testing example: Instead of running sequential A/B tests, test multiple hypotheses at once:

| Variant | Setup |
| --- | --- |
| A (Control) | $9.99/month, no trial |
| B | $7.99/month, no trial |
| C | $12.99/month, no trial |
| D | $9.99/month, 7-day free trial |

This approach is faster but requires more traffic to reach statistical significance. Source
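
To get a feel for the traffic cost, here's a small Swift sketch with hypothetical volumes showing how splitting enrollment across four variants stretches the time needed to reach a given sample size per arm (the 2,000-customer target below is purely illustrative, not a universal threshold):

// Hypothetical traffic; substitute your own new-customer volume.
let newCustomersPerWeek = 4_000.0
let targetSamplePerVariant = 2_000.0

for variantCount in [2.0, 4.0] {
    let perVariantPerWeek = newCustomersPerWeek / variantCount
    let weeksToTarget = targetSamplePerVariant / perVariantPerWeek
    print("\(Int(variantCount)) variants → \(weeksToTarget) weeks to reach \(Int(targetSamplePerVariant)) customers per variant")
}
// With these numbers: 2 variants → 1 week per arm, 4 variants → 2 weeks per arm.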

Reading Experiment Results

Navigate to the Results tab. Look at three key metrics in the Full Report: Source

  1. Initial Conversion Rate — did more users start a trial or subscription?
  2. Trial Conversion Rate — of those who started trials, did more pay?
  3. Realized LTV per customer — did the variant generate more revenue per enrolled user?

Use Chance to Win as your statistical confidence signal. Most developers consider 95% sufficient to declare a winner, but for high-stakes changes (like shifting from monthly to annual prominence), you may want to wait for 98%. Source

The annual vs. monthly trade-off is real: A variant that promotes annual subscriptions may show lower initial conversion (bigger ask) but dramatically higher Realized LTV per paying customer. Use the product-level breakdown in results to see this split clearly. Source


Step 3: Personalized Pricing with Targeting Rules

One price for all users is a blunt instrument. RevenueCat Targeting lets you serve different Offerings to different audience segments — without any code changes. Source

How Targeting Works

When a user opens your app:
  1. The RevenueCat SDK fetches Offerings.
  2. The current Offering returned is determined by the first Targeting Rule the user matches, assessed top-to-bottom.
  3. If they match no rule, they get your Default Offering. Source

Real-World Targeting Use Cases

Paid acquisition recovery: Show a higher-priced Offering to users from paid UA channels to recover ad spend. Source

Country-based pricing: Create a rule for Tier 2/3 markets (e.g., Brazil, India, Southeast Asia) and assign an Offering with region-appropriate pricing.

Onboarding survey responses: Segment users based on their stated use case. A power user who says "professional use" on your onboarding survey can be shown a higher-value (higher-priced) Offering.

Implementing Custom Attributes for Targeting

Set custom attributes in your SDK to power your targeting rules:

// Swift — set attributes based on onboarding survey
Purchases.shared.setAttributes([
    "use_case": "professional",
    "team_size": "10-50",
    "acquisition_source": "paid_search"
])

// After setting attributes that affect paywall targeting,
// call syncAttributesAndOfferingsIfNeeded() to refresh
// the offering in the same session
Purchases.shared.syncAttributesAndOfferingsIfNeeded { offerings, error in
    // Offering is now updated to reflect the new attributes
}

// Kotlin — same pattern on Android
Purchases.sharedInstance.setAttributes(
    mapOf(
        "use_case" to "professional",
        "team_size" to "10-50"
    )
)

⚠️ syncAttributesAndOfferingsIfNeeded() has a rate limit of 5 calls per minute. Call it once after you've set all attributes relevant to that session's paywall. Source

Scheduling rules: You can also schedule Targeting Rules for time-limited promotions (e.g., a Black Friday price) with a Start Time and End Time — all in UTC. Source


Step 4: Re-Engage Churned Users with Promotional and Win-Back Offers

Acquiring a new subscriber costs far more than re-engaging a lapsed one. iOS and Android both support offer types specifically designed for this.

iOS Promotional Offers (Existing + Lapsed Users)

Promotional Offers let you apply custom pricing (free period, pay-as-you-go discount, or pay-up-front deal) to users who have already subscribed — a segment introductory offers explicitly exclude. Source

Three offer types available:
  • Free — analogous to a free trial (e.g., "1 month free to come back")
  • Pay-as-you-go — reduced rate for N periods (e.g., "$0.99/month for 3 months")
  • Pay-up-front — one-time payment for a multi-month block Source

Set up via RevenueCat Customer Center: The easiest path is configuring promotional offers in the Customer Center dashboard — they can be automatically triggered when a user initiates a cancellation or refund request. Source
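
If you'd rather present a promotional offer from your own re-engagement screen instead of (or in addition to) Customer Center, the flow with the RevenueCat iOS SDK looks roughly like this: find the discount on the product, ask RevenueCat to sign it, then pass it to the purchase call. The "comeback_3_months" discount identifier below is a hypothetical placeholder for one you'd configure in App Store Connect.

// Sketch: surface a promotional offer on a custom re-engagement screen.
Purchases.shared.getOfferings { offerings, _ in
    guard let package = offerings?.current?.monthly,
          let discount = package.storeProduct.discounts
              .first(where: { $0.offerIdentifier == "comeback_3_months" }) else { return }

    // RevenueCat signs the offer before the purchase can be made.
    Purchases.shared.getPromotionalOffer(forProductDiscount: discount,
                                         product: package.storeProduct) { promoOffer, _ in
        guard let promoOffer else { return }

        Purchases.shared.purchase(package: package, promotionalOffer: promoOffer) { _, customerInfo, _, _ in
            // Check customerInfo?.entitlements to unlock access after the purchase completes
        }
    }
}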

iOS Win-Back Offers (iOS 18+ Only)

Apple introduced win-back offers in iOS 18 — these target subscribers who have canceled and fully lapsed; you define the eligibility criteria in App Store Connect and Apple evaluates them automatically, whereas with Promotional Offers your app decides who is shown the offer. Source

Win-back offers can be displayed directly on your paywall for eligible users, using RevenueCat iOS SDK 5.x+. They require an In-App Purchase Key configured in your RevenueCat dashboard.
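
As a sketch of what that looks like in code: the method names below (eligibleWinBackOffers(forProduct:) and purchase(package:winBackOffer:)) are assumptions based on recent RevenueCat SDK releases, so verify them against the SDK reference for your version before relying on this.

// Assumed APIs; requires iOS 18+ and a recent RevenueCat iOS 5.x SDK.
@available(iOS 18.0, *)
func presentWinBackIfAvailable(for package: Package) async {
    do {
        // Fetch the win-back offers this lapsed user is eligible for (assumed API).
        let offers = try await Purchases.shared.eligibleWinBackOffers(forProduct: package.storeProduct)
        guard let offer = offers.first else { return }

        // Purchase the package with the win-back offer applied (assumed API).
        let result = try await Purchases.shared.purchase(package: package, winBackOffer: offer)
        // Inspect result.customerInfo.entitlements to unlock access
        _ = result
    } catch {
        // Fall back to the regular paywall if fetching or purchasing fails
    }
}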

Introductory Offers (New Users Only)

For brand-new users, introductory offers (including free trials) are applied automatically by the App Store and Play Store when the user is eligible. Key eligibility rule for iOS: a user is eligible if they haven't previously used an introductory offer for any product in the same Subscription Group. Source

You can check eligibility in code to show the right paywall copy:

// Check if user is eligible for an introductory offer
// so you can show "Start free trial" vs. "Subscribe now"
Purchases.shared.getOfferings { offerings, error in
    guard let package = offerings?.current?.monthly else { return }

    switch package.storeProduct.introductoryDiscount?.paymentMode {
    case .freeTrial:
        showPaywallCopy("Start your 7-day free trial")
    case .payAsYouGo, .payUpFront:
        showPaywallCopy("Get 50% off your first month")
    case nil:
        showPaywallCopy("Subscribe now")
    }
}
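
One caveat about the snippet above: introductoryDiscount only tells you that the product has an intro offer configured, not whether this particular user can still redeem it. To branch on actual eligibility, the SDK's eligibility check can drive the copy instead (a sketch of intended usage, reusing the same hypothetical showPaywallCopy helper):

// Ask the SDK whether *this* user is still eligible for the intro offer.
Purchases.shared.getOfferings { offerings, _ in
    guard let package = offerings?.current?.monthly else { return }

    Purchases.shared.checkTrialOrIntroDiscountEligibility(product: package.storeProduct) { status in
        switch status {
        case .eligible:
            showPaywallCopy("Start your 7-day free trial")
        default:
            // Covers .ineligible, .noIntroOfferExists, and .unknown
            showPaywallCopy("Subscribe now")
        }
    }
}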

Step 5: A Prioritized Action Plan

Stop trying to do everything at once. Here's the order that typically yields the fastest wins:

Week 1-2: Diagnose

  1. Open the Initial Conversion chart in RevenueCat. Set a 7-day conversion window. Note your baseline rate.
  2. Open the Trial Conversion Rate chart. Note your conversion rate segmented by product duration (monthly vs. annual).
  3. Open the Realized LTV per Customer chart at 30-day and 90-day lifetime windows. Compare recent cohorts to cohorts from six months ago.

If Initial Conversion < ~2-3%: Your first experiment should be testing a free trial (if you don't have one) or extending your trial length.

If Trial Conversion < ~50-60%: Your price is likely too high relative to perceived value, or the trial is too short. Test a longer trial or lower post-trial price.

If LTV is flat or falling over cohorts: You have a retention/churn problem that pricing alone can't fix — but experimenting with annual plan prominence can buy you more LTV per customer.

Week 3-6: Experiment

  1. Create your first experiment. Start simple: one variable (trial presence OR price, not both).
  2. Ensure your paywall is dynamic and fetches the current Offering.
  3. Run until you hit 95%+ Chance to Win on your primary metric, or until it's clear there's no meaningful difference.

Month 2+: Personalize

  1. Add a Targeting Rule for your highest-value segment (e.g., paid-acquisition users, professional use case).
  2. Set up a Promotional Offer in Customer Center to intercept cancellations with a retention discount.
  3. If your iOS user base is on iOS 18+, set up a Win-Back Offer for lapsed subscribers.

Ongoing: Monitor

  1. Save your baseline charts with your preferred settings in RevenueCat and review them monthly. Any new cohort that underperforms your historical baseline by >10% is a signal worth investigating.

Key Rules to Remember

  • Never change production pricing without a test. What works for one app in one category will be different in yours — RevenueCat's data gives you the framework; your own cohorts give you the answer.
  • One variable per experiment. Changing price AND trial length simultaneously makes it impossible to know which change drove the result.
  • Annual vs. monthly is not zero-sum. Use the product-level breakdown in Experiment Results to see how each duration performs within a variant.
  • Introductory offers ≠ Promotional Offers. They target different users and require different setup. Don't conflate them.
  • Wait for cohort maturity before concluding. A 7-day trial experiment needs at least 7 days of data just to see if trial starts converted — and weeks more to see LTV impacts.

Sources