# ask-questions
  • able-stone-9657
    11/15/2025, 4:09 AM
    Background: I'm experiencing an issue with the Flags Explorer setup in Next.js where the `getProviderData()` function from `@flags-sdk/growthbook` doesn't seem to map flag options from the experiment variations tied to that flag in GrowthBook. There's a discrepancy between two documentation approaches for Next.js + Vercel Flags Explorer:
    1. Next.js and Vercel Feature Flags Tutorial (older): a manual implementation of `getFlagApiData()` that explicitly loops through the GrowthBook payload to extract options from rules, variations, and force values.
    2. Next.js SDK (Vercel Flags) documentation (newer): uses the built-in `getProviderData()` function from the `@flags-sdk/growthbook` package, which should handle this automatically.
    Issue: When using the SDK's `getProviderData()` function as shown in the newer docs, the flags appear in the toolbar with the default as their only option.
    Question: Is this a known limitation of `getProviderData()`, or did I miss something in the docs?
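    For comparison with the manual tutorial, a minimal sketch of the newer approach's discovery endpoint: an App Router route that authenticates the Flags Explorer toolbar and returns the adapter's provider data. The `getProviderData` option names below are assumptions (check the `@flags-sdk/growthbook` README for the exact shape); `verifyAccess` comes from Vercel's `flags` package.
    ```ts
    // app/.well-known/vercel/flags/route.ts (sketch; option names are assumptions)
    import { type NextRequest, NextResponse } from "next/server";
    import { verifyAccess } from "flags";
    import { getProviderData } from "@flags-sdk/growthbook";

    export async function GET(request: NextRequest) {
      // The Flags Explorer toolbar authenticates using your FLAGS_SECRET.
      const access = await verifyAccess(request.headers.get("Authorization"));
      if (!access) return NextResponse.json(null, { status: 401 });

      // If options come back with only the default value, diffing this JSON
      // against the older manual getFlagApiData() output shows what the
      // adapter is (or isn't) extracting from experiment variations.
      const providerData = await getProviderData({
        apiHost: process.env.GROWTHBOOK_API_HOST!, // assumed option name
        clientKey: process.env.GROWTHBOOK_CLIENT_KEY!, // assumed option name
      });
      return NextResponse.json(providerData);
    }
    ```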
  • green-church-13845
    11/17/2025, 7:32 AM
    Hello GrowthBook 👋 We want to start testing a new feature, and we want to proceed very cautiously: initially include only 10% of the user base (with a 50:50 split). If we see after 1-2 days that there are no problems, we want to roll it out to 30% of the user base, and so on. What is the best approach here? For example, can we use a namespace and adjust it during the experiment? Looking forward to your feedback. Thank you in advance 🙏
  • mysterious-florist-78871
    11/17/2025, 1:17 PM
    Quick question: is the GrowthBook Proxy a requirement for streaming to work? Without a proxy I can't seem to get it working. (I haven't tested with a proxy.)
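    For reference on where streaming is switched on client-side: with the JavaScript SDK it is requested at init time, and whether it works then depends on the backend (GrowthBook Cloud serves SSE from its SDK endpoint directly, whereas a plain self-hosted API generally needs the GrowthBook Proxy to add SSE support). A sketch, with the option name per recent SDK versions treated as an assumption:
    ```ts
    import { GrowthBook } from "@growthbook/growthbook";

    const gb = new GrowthBook({
      apiHost: "https://cdn.growthbook.io", // self-hosted: point at your proxy for SSE
      clientKey: "sdk-abc123", // placeholder
    });

    // SDK v1.x requests streaming at init; older versions used
    // loadFeatures({ autoRefresh: true }) instead (assumption: option name).
    await gb.init({ streaming: true });
    ```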
  • witty-rocket-75751
    11/18/2025, 6:57 AM
    Hey GrowthBook team, we were getting a lot of initialization failures in our GrowthBook setup in Flutter, with the error `failed to lookup host`, and wanted to implement a retry mechanism for when initialization fails. `GBSDKBuilder` provides a failure callback, but there seems to be no init retry mechanism built around failure handling. Our init logic is written as a singleton, so creating a new object when init fails doesn't seem viable either. How can we tackle this problem? Are there any helpful built-in methods that GrowthBook provides for init retry?
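    Absent a built-in retry, one common pattern is to wrap the init call in retry-with-backoff and trigger it from the failure callback. Sketched in TypeScript for illustration only (the Flutter SDK is Dart, and `initSdk` below is a hypothetical stand-in for the singleton's init logic, not a GrowthBook API):
    ```ts
    // Generic retry-with-exponential-backoff around an SDK init call.
    // `initSdk` is a hypothetical placeholder for your singleton's init.
    async function initWithRetry(
      initSdk: () => Promise<void>,
      maxAttempts = 5,
      baseDelayMs = 1000,
    ): Promise<void> {
      for (let attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
          await initSdk();
          return; // success
        } catch (err) {
          if (attempt === maxAttempts) throw err; // out of attempts
          // Exponential backoff with jitter: ~1s, 2s, 4s, ...
          const delay = baseDelayMs * 2 ** (attempt - 1) + Math.random() * 250;
          await new Promise((resolve) => setTimeout(resolve, delay));
        }
      }
    }
    ```
    Because the retry wraps the singleton's existing init method rather than constructing a new object, it sidesteps the "can't recreate the singleton" constraint described above.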
  • fresh-pencil-44589
    11/18/2025, 3:27 PM
    We’ve been using GrowthBook for a few minutes with the client-side SDK, but we’re looking to swap over to Lambda@Edge. I’ve been trying to get it working but not having much luck. I’ve confirmed a basic Lambda function is configured with our CloudFront distribution, but once I add the GrowthBook-specific handler, I run into errors. When I deploy the Lambda version that uses GrowthBook, I’m seeing logs with this error:
    ```
    2025-11-18T15:22:00.113Z	a762f8a1-1df6-4c0b-98aa-686ec37e6ff8	ERROR	Unhandled Promise Rejection
    {
        "errorType": "Runtime.UnhandledPromiseRejection",
        "errorMessage": "TypeError: Invalid URL",
        "reason": {
            "errorType": "TypeError",
            "errorMessage": "Invalid URL",
            "code": "ERR_INVALID_URL",
            "input": "undefined",
            "stack": [
                "TypeError: Invalid URL",
                "    at new URL (node:internal/url:825:25)",
                "    at Ms (/var/task/index.js:31:69858)",
                "    at Object.w3 [as proxyRequest] (/var/task/index.js:31:75389)",
                "    at /var/task/index.js:31:65455",
                "    at Generator.next (<anonymous>)",
                "    at s (/var/task/index.js:31:64093)"
            ]
        },
        "promise": {},
        "stack": [
            "Runtime.UnhandledPromiseRejection: TypeError: Invalid URL",
            "    at process.<anonymous> (file:///var/runtime/index.mjs:1448:17)",
            "    at process.emit (node:events:519:28)",
            "    at emitUnhandledRejection (node:internal/process/promises:252:13)",
            "    at throwUnhandledRejectionsMode (node:internal/process/promises:388:19)",
            "    at processPromiseRejections (node:internal/process/promises:475:17)",
            "    at process.processTicksAndRejections (node:internal/process/task_queues:106:32)"
        ]
    }
    ```
    Nothing in those logs is GrowthBook-specific, but I’m guessing it’s coming from something GrowthBook is doing. A couple of questions:
    1. `PROXY_TARGET`: I’m pointing this to the S3 bucket that serves the website, which is `http`, not `https`. Is that okay? If I used `https`, it would point back to a CloudFront distro URL, and I think it’d go into some kind of infinite redirect loop.
    2. Is there a way to test this without having to redeploy my CloudFront distro and wait for the website to break?
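    One concrete clue in the trace: `new URL()` received the literal string `"undefined"`, which is what happens when an unset variable is interpolated into a URL. Notably, Lambda@Edge does not support runtime environment variables, so a value like `PROXY_TARGET` has to be baked into the bundle at build time. A sketch of a guard that surfaces this before the proxy path runs (the shape here is illustrative, not GrowthBook's edge utility API):
    ```ts
    // Fail fast if the proxy target never made it into the deployed bundle.
    // Lambda@Edge has no runtime env vars, so process.env.PROXY_TARGET is
    // undefined unless the bundler inlined it at build time.
    const proxyTarget = process.env.PROXY_TARGET;

    if (!proxyTarget) {
      throw new Error(
        "PROXY_TARGET is undefined at runtime; inline it at build time",
      );
    }

    // Throws ERR_INVALID_URL here with context, instead of deep inside
    // the minified proxyRequest call seen in the stack trace.
    const targetUrl = new URL(proxyTarget);
    console.log("Proxying to", targetUrl.origin); // http:// origins parse fine
    ```
    On question 1: `new URL("http://...")` parses fine, so an `http` S3 website origin is not by itself what produces `ERR_INVALID_URL`.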
  • billions-house-96196
    11/18/2025, 5:00 PM
    I'm thinking about query cost/performance optimization with the pre-compute dimensions setting. What tradeoffs am I making by selecting pre-compute?
  • clever-hair-5481
    11/18/2025, 5:30 PM
    Hi, I’m wondering why I’m getting this “Invalid experiment” error? I’ve set up a 50/50 experiment and am trying to associate it
  • microscopic-ocean-92851
    11/18/2025, 10:41 PM
    Hey there, I am looking to implement a user-retention metric after viewing an experiment. I'm using:
    Metric: Retention
    Fact table: Events, Row filter: `event_name = 'page_view'`
    However, the lowest delay I am able to set is "Event must be at least `1 Minutes` after experiment exposure", which is effectively `timestamp >= (exposure_timestamp + '1 minutes')`. What I wish I could use is `timestamp > exposure_timestamp` (note: strictly greater, not equal). Is this possible with a fact metric?
  • magnificent-furniture-52657
    11/19/2025, 12:29 PM
    I have a question regarding the GrowthBook setup in Nuxt 2. To make it work, I had to fetch features through the URL, as the setup in the documentation did not work. Is that expected for Nuxt 2? Do I still get the caching features of GrowthBook even if I don't follow the documentation for Vue?
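    For reference, "fetching through the URL" presumably means something like the sketch below: pulling the features JSON from the SDK endpoint and handing it to the client manually. On this path the SDK's built-in caching and refresh logic is bypassed, since that only applies when the SDK loads features itself (client key and attribute values are placeholders):
    ```ts
    import { GrowthBook } from "@growthbook/growthbook";

    // Fetch the features payload directly from the SDK endpoint.
    const res = await fetch(
      "https://cdn.growthbook.io/api/features/sdk-abc123", // placeholder key
    );
    const { features } = await res.json();

    const gb = new GrowthBook({ attributes: { id: "user-1" } });
    // setFeatures() skips the SDK's cache/refresh machinery, so staleness
    // and re-fetching become the app's responsibility.
    gb.setFeatures(features);
    ```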
  • damp-boots-45877
    11/20/2025, 5:54 AM
    Hi. How does the fact table optimization feature work? I'd like to know the details, and the impact on the user side.
  • damp-boots-45877
    11/20/2025, 6:41 AM
    Hi, I'm using the GrowthBook experiment platform with a fact table–based data source (BigQuery). When I define N metrics on top of the fact table, GrowthBook is able to compute all metrics with a single underlying query, which is very efficient. However, when I view Experiment → Results and apply a segment, GrowthBook stops using the single fact-table query and instead executes N separate queries, one for each metric. This makes query execution significantly less efficient and increases BigQuery cost. Is this expected behavior? And is there any way to preserve the single-query optimization when segmentation is applied?
  • many-dusk-66209
    11/21/2025, 1:00 PM
    Hi all! We have a mobile app (Android and iOS). Share your experience running an A/B test on subscription pricing. What challenges did you encounter?
  • ancient-morning-80869
    11/24/2025, 7:38 PM
    Hello, we are currently running version 3.3.0 and would like to upgrade to the newest 4.2.0. Are there any issues with doing that, or any special considerations we should be aware of?
  • incalculable-nightfall-35029
    11/25/2025, 5:48 AM
    Hi, what is the difference between these two metric definitions?
    1. Ratio metric with `Numerator = SUM(profit per order)`, `Denominator = Unique Users`
    2. Mean metric with `per user aggregation = SUM(profit per order)`
    When defining the Mean metric (2), the GrowthBook interface says: "The final metric value will be the average per-user value for all users in the experiment. Any user without a matching row will have a value of 0 and will still contribute to this average." So I don't understand the difference between a denominator of "Unique Users" in (1) and the "average per-user value" in (2); I would expect these two metrics to be identical. However, on the experiment overview page I can see they have the same numerator value (1.7m), but the ratio metric (1) has 5k Unique Users as the denominator, while the mean metric (2) has a denominator of 83k. We're running an A/A test (no difference between the two treatments) and are seeing GrowthBook report a statistically significant relative uplift for the ratio metric, but not for the mean metric. I would like to understand the details of what is happening behind the scenes a bit better.
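    Working through the figures quoted above, the two definitions only coincide when every exposed user has at least one matching row. A sketch of the arithmetic (the key assumption to verify is that "Unique Users" only counts users who appear in the fact table):
    ```latex
    % Mean metric: every exposed user contributes, zeros included.
    \text{mean} = \frac{\sum_{i \in \text{exposed}} x_i}{N_{\text{exposed}}}
                = \frac{1{,}700{,}000}{83{,}000} \approx 20.5

    % Ratio metric, if the denominator counts only users with matching rows:
    \text{ratio} = \frac{\sum_{i \in \text{exposed}} x_i}{N_{\text{with rows}}}
                 = \frac{1{,}700{,}000}{5{,}000} = 340
    ```
    The statistics differ too: GrowthBook analyzes ratio metrics with the delta method (the denominator is itself a random quantity), so even in an A/A test the two definitions can produce different variance estimates and significance calls.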
  • famous-processor-35340
    11/26/2025, 2:16 PM
    Hi all, a very simple question, but I haven't managed to get an answer from the docs: is it possible to set the end date/time of an experiment in advance?
  • quiet-appointment-16980
    11/27/2025, 11:02 AM
    Hello everyone. We're trying to connect our self-hosted GrowthBook to AWS Athena. It creates the data source, but then gives the error "No Database Provided". Has anyone encountered this before?
  • billions-pharmacist-74262
    11/27/2025, 11:07 PM
    Hello everyone, we’ve recently had to deal with an issue regarding Force rules that apparently cannot be edited on our self-hosted version running v4.1.0. The case: we have a feature with a JSON value that differs per environment. Normally we would edit the value the rule returns for each environment, but for the last couple of weeks, on this particular feature, the edit option simply does not appear. Has anyone faced a similar issue, or could someone maybe guide me through it? Thanks.
  • big-lizard-13073
    11/28/2025, 2:30 PM
    Hi team, we're debugging an issue at the moment where we've seen unbalanced assignments across multiple experiments (e.g., all users assigned to control in Experiment 1 are also assigned to control in Experiment 2). We've narrowed this down to the experiment seed used by the SDK in the following function:
    ```kotlin
    val hash = GBUtils.hash(
        stringValue = hashValue,
        // Hash algorithm version; defaults to v1 when unset
        hashVersion = experiment.hashVersion ?: 1,
        // Falls back to the experiment key when no explicit seed is set
        seed = experiment.seed ?: experiment.key
    )
    ```
    The experiment seeds provided by the GrowthBook server to the SDK are not unique per experiment key, which leads to correlated (non-random) allocation of users across experiments. Is this a known bug, and is there a fix for it?
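    To illustrate why the seed matters: the bucketing hash mixes the seed with the hashing attribute, so experiments with distinct seeds assign the same user independently, while a shared seed reproduces the same assignment everywhere. A sketch in the style of the SDK spec's v1 hash (FNV-1a is what the spec builds on, but treat the exact input ordering and modulus here as assumptions, not the SDK's code):
    ```ts
    // FNV-1a, 32-bit.
    function fnv32a(str: string): number {
      let hash = 0x811c9dc5;
      for (let i = 0; i < str.length; i++) {
        hash ^= str.charCodeAt(i);
        hash = Math.imul(hash, 0x01000193) >>> 0; // stay in uint32 range
      }
      return hash;
    }

    // v1-style bucket in [0, 1).
    function bucket(userId: string, seed: string): number {
      return (fnv32a(userId + seed) % 1000) / 1000;
    }

    // Distinct seeds -> independent buckets for the same user; if every
    // experiment is served the same seed, every experiment buckets the
    // user identically, matching the correlation described above.
    console.log(bucket("user-123", "exp_checkout")); // some bucket in [0, 1)
    console.log(bucket("user-123", "exp_pricing")); // an independent bucket
    ```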
  • fancy-helicopter-64657
    11/28/2025, 2:39 PM
    Is the dashboard down? I can't seem to load the dashboard right now
  • red-oxygen-62328
    11/29/2025, 4:44 PM
    Hi team! We're testing out Bandits and noticed that the only decision options for Exploration and Updates are time-based (days/hours) as opposed to data-based. It doesn't look like there's currently a way to have anything else inform this, but is a more data-driven approach (e.g., sample size for the primary metric) on the roadmap?
  • cuddly-action-64036
    12/01/2025, 7:53 AM
    Hello guys, I am Sinan. I need some help with using the Visual Editor in GrowthBook. I integrated GrowthBook through Google Tag Manager (GTM), but I’m getting a warning message and I’m unable to use the Visual Editor. The documentation says that I should verify the “Tracking via DataLayer and GTM” step in order for the Visual Editor to work, but even after checking that, the issue still persists. Has anyone experienced this before or knows how to fix it? Any guidance would be greatly appreciated. Thanks in advance!
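    For comparison while debugging the warning: the usual GTM wiring is a `trackingCallback` that pushes an exposure event into `dataLayer`, which a GTM trigger then forwards to analytics. A sketch under that assumption (the event and field names are illustrative and must match whatever your GTM trigger expects):
    ```ts
    import { GrowthBook } from "@growthbook/growthbook";

    declare global {
      interface Window {
        dataLayer: Record<string, unknown>[];
      }
    }

    const gb = new GrowthBook({
      apiHost: "https://cdn.growthbook.io",
      clientKey: "sdk-abc123", // placeholder
      // Push each experiment exposure into the dataLayer for GTM.
      trackingCallback: (experiment, result) => {
        window.dataLayer = window.dataLayer || [];
        window.dataLayer.push({
          event: "experiment_viewed", // must match the GTM trigger's event name
          experiment_id: experiment.key,
          variation_id: result.key,
        });
      },
    });
    ```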
  • calm-dog-24239
    12/02/2025, 10:00 AM
    Hi team. We are currently looking into implementing a feature request in the GrowthBook C# SDK related to Custom Fields. The request is to add Custom Fields as a property on the `Experiment` object within the SDK so that they can be retrieved in application code. Since Custom Fields is an Enterprise-level feature, we need to confirm the expected API behavior: when fetching the feature/experiment configuration (the main JSON payload), does the GrowthBook API include the `customFields` object directly within the experiment definitions?
  • alert-exabyte-3603
    12/02/2025, 9:30 PM
    Hi team, I'm trying to enable Metric Slices for one of the metrics our team created, per the guidance here, but I'm not able to find this surface on the metric's info page. How can I find it? Thanks!
  • ambitious-airport-565
    12/02/2025, 9:59 PM
    👋 I recently upgraded to 4.2 and am now seeing this error on percentile-capped metrics: `within group ORDER BY clauses for aggregate functions must be the same`.
    • DB: Redshift
    • Failing query section:
    ```sql
    __capValue AS (
      SELECT
        PERCENTILE_CONT(0.999) WITHIN GROUP (
          ORDER BY
            m0_value
        ) AS m0_value_cap,
        PERCENTILE_CONT(0.999) WITHIN GROUP (
          ORDER BY
            m0_denominator
        ) AS m0_denominator_cap
      FROM
        __userMetricAgg
    )
    ```
    The error arises from the `__capValue` CTE calculating two percentiles with different `ORDER BY` clauses; Redshift requires all `WITHIN GROUP` ordered-set aggregates in one SELECT to share the same ordering. If the pattern used a separate CTE for each percentile, it would run. Is this a known issue that is being worked on?
  • able-flag-99402
    12/03/2025, 12:51 PM
    Hey! When using percentile capping on ratio metrics, it's not the metric value that is capped but rather the numerator (leaving the denominator untouched). This is not what I expected, and I don't think it makes sense, because for ratio metrics we care about the value of the metric, not the value of the numerator in isolation. My concrete example is looking at gross margins, where I'm seeing weird results that seem to be related to outliers. However, adding a percentile cap doesn't help, since it just lowers the numerator, turning the % gross margin into an unrealistically low number, which in turn will make the data difficult to explain to others. Hope this makes sense; let me know if I should clarify anything!
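    Spelling out the two capping schemes being contrasted (notation is mine, a sketch rather than GrowthBook's documented formulas; $x_i$ and $d_i$ are per-user numerator and denominator, $c$ the cap threshold):
    ```latex
    % What the question describes: numerator capped, denominator untouched.
    % Outlier users keep their full d_i but lose numerator mass, so a
    % margin-style ratio is pushed downward.
    R_{\text{num-cap}} = \frac{\sum_i \min(x_i,\, c)}{\sum_i d_i}

    % The behavior the question expects: cap the quantity of interest,
    % e.g. the per-user ratio, before aggregating (one possible variant).
    R_{\text{ratio-cap}} = \frac{\sum_i \min\!\bigl(\tfrac{x_i}{d_i},\, c\bigr)\, d_i}{\sum_i d_i}
    ```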
  • big-crayon-53518
    12/03/2025, 2:40 PM
    Hi y'all, I'm new to 4.2 and have a question about retention metrics. I noticed that when calculating a >=7-day retention metric for an experiment, the numerator correctly counts the users who return after 7 days, but the denominator still includes all users. Is there a way to limit the denominator to only include users with at least 7 days in the test? The same goes for row filters: I would like to apply row filters to both the numerator and the denominator. Thanks!
  • alert-exabyte-3603
    12/04/2025, 12:47 AM
    Hey folks, I'm trying to look at experiment results with a certain dimension; however, going by the help doc, I could not find this UI on the results page on my end. Could I get some pointers? Thanks so much!
  • thousands-alarm-48630
    12/04/2025, 6:45 AM
    What configuration should I use to build the Dockerfile locally and on the server pod? Specifically, how much memory and how many CPU cores are recommended? I want to build the GrowthBook image on the pod.
  • nutritious-dog-12771
    12/04/2025, 10:58 AM
    Hello, on the main results page of an experiment, we used to be able to filter by any dimension (even unit dimensions) and see the distribution of the dimension values across the variants. This is very useful for checking that the two variants are equally distributed. We were also able to check this in dashboards; now it's not available in either place. Is there any way to obtain this info?
  • lively-kitchen-7419
    12/04/2025, 3:36 PM
    Question: does the Python SDK support async hooks for tracking experiments?
    • The Python SDK docs say an async method is supported (`async def on_experiment_viewed(...)`).
    • The Python source code doesn't seem to have any async behavior or types; it appears to just call the method (and the README itself doesn't mention the `async def` option).
    Is this a docs discrepancy?