# experimentation

    creamy-breakfast-20482

    07/22/2025, 1:16 PM
    Hey there! My team would like to run a non-inferiority test using GrowthBook but I'm having trouble setting this up. Is this type of experiment supported by the platform?

    late-ambulance-66508

    07/27/2025, 10:21 AM
    Hi! I have a request to adjust the retention metric calculation. Currently, the denominator includes all users, which I believe is incorrect for longer-term metrics. Take Week 1 Retention, for example. During the first 7 days after the experiment starts, the metric is 0 divided by the total number of users, and only gradually starts increasing afterward. This creates a misleading picture, as the metric naturally grows over time. I think the denominator should include only users who had a chance to reach 7+ days in the app — in other words, those whose exposure date allows for observing their Week 1 behavior. This way, we can avoid the issue of the metric artificially inflating throughout the experiment just due to time passing.
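    A minimal sketch of the proposed denominator logic, assuming the check runs in application code with access to each user's exposure date (the field names here are hypothetical):

    // Week 1 retention with a maturity-adjusted denominator:
    // only users exposed at least 7 full days before the analysis date count.
    const DAY_MS = 24 * 60 * 60 * 1000;

    function week1Retention(users, analysisDate = new Date()) {
      // Users whose exposure date allows observing their Week 1 behavior
      const eligible = users.filter(
        (u) => analysisDate - u.exposureDate >= 7 * DAY_MS
      );
      if (eligible.length === 0) return null; // metric not yet defined
      const retained = eligible.filter((u) => u.retainedWeek1).length;
      return retained / eligible.length;
    }

    // Hypothetical usage:
    // week1Retention([{ exposureDate: new Date("2025-07-01"), retainedWeek1: true }]);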

    late-ambulance-66508

    07/28/2025, 8:42 AM
    Another feature request: it would be convenient to be able to override the timestamp used for a specific metric. We have several datetime fields in the fact table that could serve as the primary timestamp depending on the metric.

    little-balloon-64875

    07/31/2025, 8:14 PM
    Trying to run an experiment but only when a query parameter is one of two options on the URL, in my case two affiliate ids. I've tried targeting with regex against both the URL and the query but can't find the right way to make this happen. Example URLs (only needs to target two aff ids):
    • https://mydomain.com/?source=affiliates&aff_id=12345&tracking_id=xyz123&ref=12345
    • https://mydomain.com/?source=affiliates&aff_id=abcdefg&tracking_id=abc789&ref=abcdefg
    Regex patterns I've tried that aren't working, both on `url` and `query`:
    1. `aff_id=(12345|abcdefg)`
    2. `[?&]aff_id=(12345|abcdefg)(&|$)`
    3. `\?.*aff_id=(12345|abcdefg)`
    Feels like it's far too difficult to target experiments on specific URLs. Is this something on the roadmap, or am I missing something with experiments?
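    One possible workaround, sketched below as an assumption rather than an official recommendation: parse `aff_id` out of the query string yourself, pass it to the SDK as a custom attribute, and target the experiment on that attribute (e.g. an "is in list" condition with `12345, abcdefg`) instead of a URL regex. The attribute name, host, and key below are placeholders.

    import { GrowthBook } from "@growthbook/growthbook";

    // Extract the affiliate id from the current URL ("" when absent)
    const affId = new URLSearchParams(window.location.search).get("aff_id") || "";

    const gb = new GrowthBook({
      apiHost: "https://cdn.growthbook.io", // adjust to your setup
      clientKey: "sdk-xxxx",                // placeholder client key
      attributes: {
        id: "user-123", // however you already identify users
        aff_id: affId,  // custom attribute to target on (12345 or abcdefg)
      },
    });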

    cold-london-76468

    08/01/2025, 3:54 PM
    Hi GrowthBook team, We are experiencing some unexpected behavior with our GrowthBook experiments and would appreciate your input.
    Issue summary:
    • The `experiment_started` event is being triggered on random pages across our site, not just on the intended experiment page(s).
    • The experiment is configured to run only for registered users, but it is also being triggered for leads (unregistered users).
    Questions:
    • Is it possible for GrowthBook to trigger the `experiment_started` event on pages outside of the specified targeting rules, or for users who do not meet the targeting criteria (e.g., leads)?
    • Or does this indicate a misconfiguration on our end, and such behavior should not be possible if GrowthBook is set up correctly?
    We want to make sure we are not missing anything in our setup. Any guidance or clarification would be greatly appreciated! Thank you!

    cold-london-76468

    08/05/2025, 4:11 PM
    Hi GrowthBook team, Could you also explain when the `trackingCallback` method should be called? It's not clear from the documentation.
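    For reference, a minimal sketch of how the callback is wired up in the JavaScript SDK; the intent is that it fires at assignment time, i.e. whenever an evaluation actually buckets the current user into an experiment variation, not on every page load. The host, key, and logging call are placeholders:

    import { GrowthBook } from "@growthbook/growthbook";

    const gb = new GrowthBook({
      apiHost: "https://cdn.growthbook.io", // placeholder
      clientKey: "sdk-xxxx",                // placeholder
      // Called once per experiment assignment during feature/experiment
      // evaluation, so it is the natural place to send an exposure event.
      trackingCallback: (experiment, result) => {
        // Replace with your analytics client's track() call
        console.log("Experiment Viewed", experiment.key, result.key);
      },
    });

    // Load features; evaluations after this point may trigger the callback
    await gb.init();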

    adamant-journalist-86489

    08/20/2025, 11:47 AM
    Hello, I have a question on roles. I gave our analysts the role “analyst” and I expect them to be able to configure experiments (set the right metrics etc) and to analyse these. However it appears that the role “analyst” does not allow them to make changes to the configuration of an experiment but that they need the “experimenter” role for that. But that role also gives them the right to modify features, which is not what we want. --> Am I right on how the roles currently work? --> Wouldn’t it make more sense to give the analyst role also the rights to configure experiments? We are on the pro plan by the way.

    wide-animal-47146

    08/20/2025, 8:39 PM
    Hi GrowthBook team! I'm running into a data collection issue with our experiment that started on 08/05. Issue: the secondary metric (Activation Rate) appears to only be collecting data from 08/12 onwards, while the primary metric (Appointment Booked) is correctly showing data from the 08/05 start date. Our Activation Rate is based on bookings done, and the denominator already reflects users who have booked appointments. Given this dependency, it's puzzling why data collection would start 7 days later than the primary booking metric. I do see that the Activation Rate metric is showing a flag: "analysis or metric setting do not match current version." Questions: What could cause a secondary metric to have a different data collection start date than the primary metrics in the same experiment? Since Activation Rate depends on the same booking events that are showing data from 08/05, how could the data collection timing be so different? Is this a configuration issue on my end, or something else I should check? Any guidance on resolving this would be greatly appreciated! Thanks!

    hundreds-stone-43870

    08/26/2025, 3:43 PM
    Hello everyone, I recently started a 50/50 experiment on my e-commerce site. I can't figure out why I'm not getting a 50/50 split in my experiment. What are the usual reasons for this kind of behavior? Thanks for your help!

    hallowed-rainbow-76677

    08/26/2025, 6:02 PM
    Hello! Is it statistically safe to run an experiment covering both desktop and mobile web users ("device" dimension) and then analyze results by filtering on the device dimension? The other, more conservative, approach would be to run two different experiments, one targeted at "device = desktop" users, the other at "device = mobile" users. The downside I see with running two different experiments is that we will need more time to gather sufficient data. I am using a GA4 BigQuery data source.

    little-balloon-64875

    08/27/2025, 12:02 PM
    When using URL Redirects, no "Experiment Viewed" event is being fired to Segment as there typically is when a normal experiment is viewed. Using react-sdk, the docs seemed to indicate that just setting up a URL Redirect as part of the experiment would automatically fire the callback.
    const gb = new GrowthBook({
      apiHost: [redacted],
      clientKey: [redacted],
      decryptionKey: [redacted],
      trackingCallback: onExperimentViewed,
    });
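    If it helps as a debugging direction (not a confirmed fix): in recent JS SDK versions, URL redirect rules are evaluated against the `url` attribute and the redirect itself is performed through a `navigate` callback, and `trackingCallback` should fire when that evaluation buckets the user. A sketch of those extra options, to be verified against the installed SDK version:

    const gb = new GrowthBook({
      apiHost: [redacted],
      clientKey: [redacted],
      decryptionKey: [redacted],
      trackingCallback: onExperimentViewed,
      // Assumed options for URL redirect tests -- verify against your SDK version:
      attributes: { url: window.location.href },        // redirect rules match on this
      navigate: (url) => window.location.replace(url),  // how the redirect is executed
    });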

    hundreds-student-36571

    08/29/2025, 4:25 PM
    Hi! Our team is using GrowthBook together with Amplitude. I’m a client-side developer and I’d like to clarify one point.
    export const gbInstance = new GrowthBook({
      apiHost: import.meta.env.VUE_APP_GROWTHBOOK_API_HOST,
      clientKey: import.meta.env.VUE_APP_GROWTHBOOK_CLIENT_KEY,
      enableDevMode: !isProduction(),
      plugins: [autoAttributesPlugin()],
      trackingCallback: (experiment, result) => {
        $analytics.track({
          event_type: 'Experiment Viewed',
          event_properties: {
            experimentId: experiment.key,
            variationId: result.key
          }
        });
        console.log(`key-${experiment.key} result-${result.key}`);
      }
    });
    const initializeGrowthBook = async () => {
      try {
        if ($analytics) {
          gbInstance.updateAttributes({
            id: $analytics.amplitude.getUserId(),
            deviceId: $analytics.amplitude.getDeviceId()
          });
        }
        await gbInstance.init({ streaming: true });
        gbFlags.initialize(gbInstance);
        return gbInstance;
      } catch (e) {
        return null;
      }
    };
    I'm implementing it like this, and in the plugins I specify `autoAttributesPlugin()`. That generates attributes for targeting (one of them is `id`). When setting up an experiment in the admin panel, you can assign it based on `anonymous_id` or `user_id`. How is this `id` connected with `user_id` or `anonymous_id`? Or do I need to explicitly set `user_id` and `anonymous_id` when initializing? If so, how should this be connected with Amplitude, since it has both `user_id` and `device_id`? (edited)
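    Not an authoritative answer, but a sketch of the explicit option the question raises: mapping Amplitude's identifiers onto `user_id` / `anonymous_id` attributes directly, so experiments can assign on whichever attribute is selected in the admin panel. The attribute names assume that is how the GrowthBook attribute schema is configured; `gbInstance` and `$analytics` are the objects from the snippet above.

    // Set the identifiers explicitly instead of relying only on the
    // auto-generated `id` from autoAttributesPlugin().
    const setIdentityAttributes = () => {
      gbInstance.updateAttributes({
        user_id: $analytics.amplitude.getUserId(),        // unset for anonymous users
        anonymous_id: $analytics.amplitude.getDeviceId(), // device-level id, always set
      });
    };

    As far as I understand, an experiment hashes on whichever attribute is chosen as its assignment attribute, so the plugin-generated `id` only matters for experiments configured to assign on `id`.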

    hundreds-student-36571

    09/01/2025, 6:15 AM
    Hello everyone, my GrowthBook is integrated with Amplitude via BigQuery. Amplitude has a concept called user_properties; how can these user_properties be added to GrowthBook? We need these user_properties to work with feature flags/experiments. I understand they are the analogue of attributes in GrowthBook.
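    A minimal sketch, under the assumption that the app already knows the same properties it sends to Amplitude: GrowthBook only sees whatever is passed as attributes, so the analogue of user_properties is to mirror them into `setAttributes()` (the property names below are hypothetical):

    // Mirror the user properties you already send to Amplitude into
    // GrowthBook attributes so they can be used for targeting.
    gbInstance.setAttributes({
      ...gbInstance.getAttributes(),  // keep id, deviceId, auto attributes, etc.
      plan: "premium",                // hypothetical user_property
      country: "DE",                  // hypothetical user_property
      onboarding_completed: true,     // hypothetical user_property
    });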

    narrow-horse-42795

    09/04/2025, 7:58 AM
    Hi everyone, I recently set up the self-hosted GrowthBook using the GrowthBook Docker setup. For the past couple of days, I haven’t been able to create features. It shows the error “Holdout not found”, even though I haven’t enabled the Holdouts feature. Any guidance on how to resolve this would be greatly appreciated. Thanks!

    crooked-market-21946

    09/11/2025, 10:47 AM
    Hi everyone, I recently signed up privately with GrowthBook and would love to trial the experimentation capabilities for outbound call campaigns. Is there anyone who can help with a walkthrough on how to tie up the data with treatment and control customer lists? (e.g. adding in campaign name, campaign goal, campaign results, treatment list, and control list in a way that outputs the statistical significance?)

    green-yacht-27275

    09/15/2025, 2:09 AM
    Hi, our team is using GrowthBook for A/B testing on our Android application, which serves a global user base across different countries. For a specific experiment, we set the Android device ID as the assignment attribute with a 50% traffic allocation. We observed a discrepancy on the first day of the experiment: only 8% of eligible devices were included, rather than the expected 50%. On the first day 8% of users were included, on the second day 12%, and on the third day 17% (this data comes from Firebase user event tracking). Does anybody know if this is normal? How can we include 50% of users in the experiment as soon as possible? We kindly request some assistance in diagnosing this issue. Thanks very much!

    billowy-motherboard-18512

    09/29/2025, 9:51 AM
    Hey GrowthBook team, Wanted to check the best approach for splitting users 50/50 across saved groups (SGs). Example:
    • SG-1 → 100 users → (50 control / 50 variation)
    • SG-2 → 50 users → (25 / 25)
    • SG-3 → 5000 users → (2500 / 2500)
    The goal is to ensure the control/variation split happens proportionally within each SG. One easy way is creating separate experiments for each SG, but with ~100 SGs that becomes messy to set up and track. Is there a more scalable way (like using a single experiment with attribute-based targeting or hashing on `user_id` within each SG) so the split auto-balances across all SGs, instead of 100 separate experiments? Thank you!
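    A small simulation sketch of the single-experiment idea, assuming assignment hashes on `user_id`: because bucketing depends only on the experiment key and the hashed `user_id`, the split should come out close to 50/50 within every saved group without per-SG experiments. The inline experiment below is only for illustration:

    import { GrowthBook } from "@growthbook/growthbook";

    // Count control/variation per saved group for one experiment hashed on
    // user_id. `groups` maps a saved-group name to its list of user ids.
    function simulateSplit(groups) {
      const counts = {};
      for (const [sg, userIds] of Object.entries(groups)) {
        counts[sg] = { control: 0, variation: 0 };
        for (const user_id of userIds) {
          const gb = new GrowthBook({ attributes: { user_id } });
          const result = gb.run({
            key: "pricing-test",                  // same experiment key for all SGs
            variations: ["control", "variation"], // 50/50 by default
            hashAttribute: "user_id",             // deterministic per-user bucketing
          });
          counts[sg][result.value] += 1;
        }
      }
      return counts;
    }

    // e.g. simulateSplit({ "SG-1": ["u1", "u2"], "SG-3": ["u3", "u4"] })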

    refined-musician-86340

    10/03/2025, 8:58 PM
    I want to experiment with backend changes that are exposed in equal percentage buckets within different cohorts (similar to the above question). Based on the above discussion it seems that the best way to do this is to create a feature flag, create experiments with targeting conditions and add all of my experiments to the same feature flag and then send all of my users to this feature to get their assignment value? Would this ensure that each experiment has the same population split percentage wise but not necessarily in terms of overall number?

    prehistoric-horse-37258

    10/11/2025, 2:54 PM
    I'm working on getting sticky bucketing to work with fallback attributes. I have an experiment that is set up to use the anonymized id (which all users have) as the fallback attribute and the email (which only registered users have) as the hash attribute. When the user starts as anonymous and then logs in with an email account, sticky bucketing works as expected: they continue to see the first variation they got with their anonymous id. The opposite does not work, however. If a user is already logged in when they access the experiment for the first time, then logs out and accesses the experiment again, they will see a different variation. This seems to be because, on the first experiment access, GrowthBook only saves the variation under the email address to the sticky bucket server. It seems that it should also save the anonymous id together with that first variation. Does this make sense, or am I missing something? Thanks
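    For reference, a sketch of the client-side setup in the JavaScript SDK (assuming the installed version exports `LocalStorageStickyBucketService`; other SDKs differ). Sticky bucket documents are stored per attribute name and value, so a variation saved only under `email` will not be found later under `anonymous_id` unless a document was also written for that attribute, which would match the behaviour described above. Attribute names, host, and key are examples:

    import {
      GrowthBook,
      LocalStorageStickyBucketService,
    } from "@growthbook/growthbook";

    const loggedInEmail = "user@example.com"; // undefined when logged out
    const anonId = "anon-123";                // generated for every visitor

    const gb = new GrowthBook({
      apiHost: "https://cdn.growthbook.io", // placeholder
      clientKey: "sdk-xxxx",                // placeholder
      // Persists assignments keyed by attribute name + value
      stickyBucketService: new LocalStorageStickyBucketService(),
      attributes: {
        email: loggedInEmail,  // experiment hash attribute (registered users)
        anonymous_id: anonId,  // experiment fallback attribute (everyone)
      },
    });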

    lemon-book-46596

    10/15/2025, 8:14 AM
    Hi all, I'm working on implementing an A/B test with a planned 50/50 population split. Both control and variation groups will generate computations, which will then be processed by a downstream service to filter out entries that don't meet business rules. We anticipate that the control group will be filtered out about 1% more than the variation group. As a result, the experiment may be flagged as unhealthy due to an observed imbalance (49/51 instead of the expected 50/50). My question is: since this imbalance is expected, can the experiment results still be considered reliable even if it's marked as unhealthy? Put another way, do the statistical calculations depend on achieving the expected 50/50 split, or is that split simply used as a diagnostic to highlight unexpected behavior?

    victorious-knife-68405

    10/15/2025, 8:35 AM
    Not sure where to ask this, but we're experiencing some unexpected behaviour when running experiments and attempting to use the tracking callbacks the GrowthBook-Swift SDK provides. We have set up a new project with 3 feature flags and only one experiment with 2 variations. The problem we're seeing is that the variation names (that are set in the GrowthBook web admin GUI) are not being sent to the SDK, as seen in the network response, making it rather hard to track the experiment (see the `$.features['tempad-locations-search'].rules[*].meta[*]` objects missing `name`): https://private-user-images.githubusercontent.com/1733327/500902721-28f50209-5a17-4fc6-9a13-a64a9[…]QifQ.tr7ayPWTywtOlAUXGCJ1KUDqpcfRIEmW4IFWgJ90yQk Full description here: https://github.com/growthbook/growthbook-swift/issues/132

    victorious-van-20692

    10/15/2025, 12:49 PM
    Hey everyone 👋 we’re encountering an issue where an experiment remains active even when the targeting attribute value we send doesn’t match the one defined in the experiment’s conditions. In practice, the user keeps being assigned to the experiment even though the attribute value should exclude them. We verified this using the GrowthBook browser extension, and it looks like the IF condition based on that attribute is being ignored — the experiment is still triggered no matter what. We’ve already shared the SDK payload with the support team, and from their side the configuration looks correct, but the behavior persists. Has anyone else experienced something similar or found a reason why attribute-based targeting might not be applied as expected? We’re using the JS SDK client-side and setting attributes via growthbook.setAttributes(). Any insights or debugging tips would be really appreciated 🙏

    microscopic-gigabyte-19983

    10/16/2025, 2:46 PM
    Hey all, We just started our first real experiment last night after a successful A/A test. The A/B experiment targets desktop only, whereas the A/A test was all users. This is the only difference we can see between them, yet the A/B test is receiving no data at all. I've tried making changes to the A/A test so as not to make things worse with the A/B test, and it just got stranger:
    • Added attribute-targeting for desktop only in a new phase - the experiment went unhealthy with a 77/23% split, and if I changed the dimension to device I see data for phone and tablet
    • Changed the attribute-targeting to Chrome only in a new phase - the experiment health changed to a 98/2% split! Viewing with the dimension set to browser showed data for other browsers
    • Removed the attribute-targeting completely - the experiment was still unhealthy with a 54/46% split
    • Created a new phase with no changes - healthy!
    Each time I created a new phase I re-randomised traffic. We are at a loss for what is causing this, as nothing seems to be a constant.

    adorable-bear-66287

    10/22/2025, 12:05 PM
    Hi, we're facing a problem with time travel (or the lack thereof) in our experiments and metrics. We have some really slow-moving metrics that are dependent on manual processes, meaning conversion can happen 14+ days after exposure. This works fine normally, but sometimes we have multiple experiments lined up and want to end the current experiment before we have a significant result (because we will later on; we don't need more exposures, just conversions). When we do this, it seems like the results/metrics are also limited to the exposure period and we don't get these lagging effects represented correctly. When the experiment is over, all traffic for the experiment is routed to the baseline option, also leading to Sample Ratio Mismatch for the baseline. More information in thread.

    plain-cat-20969

    10/28/2025, 12:53 AM
    Hi all, I’m running into some challenges explaining results to eager stakeholders. In a few of our experiments, the metrics naturally lag. For example, we include users in a variation but don’t expect an effect until several days later. Think of tracking cancellations when users receive an email five days after signup, versus those who don’t. Some stakeholders check results just a few days in (say, days 1-4) and notice differences between the two groups, then ask why. Even after explaining normal metric variance, one person raised a fair question: wouldn’t variation B have a built-in disadvantage if it starts lower before the treatment even happens? I feel like this falls under the concept of random variance, but it also made me wonder whether there’s something to his point. Has anyone else run into this kind of situation? How did you handle it?

    busy-megabyte-43386

    10/28/2025, 12:32 PM
    Hello, We’ve encountered a serious issue that seems to be connected to your recent implementation changes. After investigating, I noticed that the response format for /api/eval/apiKey has been completely changed. As a result, our app can no longer parse the experiment data, which has broken all our experiments. This issue is critical and quite costly for us. Could you please revert to the previous response model and release a fix as soon as possible? Left screen: the current response; right screen: the previous response we were using.

    hundreds-stone-43870

    11/07/2025, 3:05 PM
    Hello, we are unable to create an A/B test with a 50/50 split. We have set a fallback from GAID to a UUID generated as described in the documentation, but even so, we are seeing a split closer to 60/40. Has anyone else encountered this problem?

    creamy-minister-96874

    11/17/2025, 11:57 AM
    Hello, I stopped an experiment, but even after 1 day some users were still getting variations when they should be getting the default. Why is that?

    many-dusk-66209

    11/22/2025, 2:23 PM
    Hi all! We have a mobile app (Android and iOS). Share your experience running an A/B test on subscription pricing. What challenges did you encounter?

    fresh-lizard-81136

    11/27/2025, 11:40 AM
    Hey all, I’m running a PDP experiment on our apparel site where the treatment is inside our sizing tool on the PDP, but the experiment is scoped to all product pages. Only ~25% of visitors use the sizing tool flow we’re testing, so I only want to include users who used the sizing tool in the analysis. To narrow the dataset, I’ve tried two approaches:
    1. Activation metric: using a GA4 event (`start_sizing_tool`) as the activation metric.
    2. Segment: creating a segment that includes only users who have triggered `start_sizing_tool`.
    Both approaches use the same GA4 event. In theory they should produce similar experiment populations, but in practice they’re very different: the activation metric looks accurate when I compare to GA4 over the same period of the test, but the segment cuts my experiment population by more than half. Why do the activation-metric and segment approaches produce such different populations, and which is the correct approach for an experiment where only sizing-tool users should be evaluated?