# experimentation

    flaky-glass-97366

    03/04/2025, 5:46 PM
    I learned today my temporary rollout did not work - our variant won, but I still see the control in production. Any tips? More in 🧵

    numerous-glass-72216

    03/09/2025, 3:13 PM
Hey there, I'm running an experiment using sticky bucketing where the primary attribute is user id and the fallback is an anonId that comes from a cookie. The experiment starts from a logged-out state. I'm seeing around 25% of users who get bucketed into one variant but, when they log in, get re-bucketed into control, so our control size is larger than expected and is probably throwing off our results.

    helpful-glass-76730

    03/10/2025, 10:40 AM
Hey! Anyone here who has configured metrics that measure the time between two different events? For example, the time from add to cart to purchase. I know funnel metrics for fact tables are in the works, so I guess something like that will come natively in the future, but are there any workarounds I can use today? I've tried different approaches, but no luck so far. A general time-to metric works, but that approach increases the variance significantly, since it would incorporate the time before the first event of interest as well. Edit: It actually seems that I've also incorrectly defined my other "time to" metrics. They've simply measured the total engagement time for users that have triggered the specified event. I guess the only way to go about it is to create the definitions in your fact table query.
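For reference, the fact-table workaround mentioned at the end of the message above can look roughly like this in BigQuery: compute the gap between the two events inside the fact table SQL, then point a mean metric at the resulting column. A sketch under assumed GA4 event and table names:

```sql
-- One row per purchase, with minutes elapsed since the same user's most
-- recent preceding add_to_cart event
WITH events AS (
  SELECT
    user_id,
    event_name,
    TIMESTAMP_MICROS(event_timestamp) AS ts
  FROM `my-project.analytics_123456789.events_*`  -- hypothetical GA4 export
  WHERE event_name IN ('add_to_cart', 'purchase')
)
SELECT
  p.user_id,
  p.ts AS timestamp,
  TIMESTAMP_DIFF(p.ts, MAX(c.ts), MINUTE) AS minutes_to_purchase
FROM events p
JOIN events c
  ON c.user_id = p.user_id
 AND c.event_name = 'add_to_cart'
 AND c.ts <= p.ts
WHERE p.event_name = 'purchase'
GROUP BY p.user_id, p.ts
```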

    gifted-angle-45680

    03/13/2025, 12:14 PM
Hi all! I have a question about existing experiments. I already have an aggregated CSV file with the results of A/B tests (advertisement push stats), with fields like city, age, sent pushes, opened pushes, etc. Is there an existing template for this, or can I create one myself?

    narrow-lifeguard-42299

    03/18/2025, 5:20 PM
Hey all, we have an experiment running with 3 variants, tracking one conversion metric (number of taps). We're running the exact same query in our BigQuery database and in GrowthBook to track our conversion metric, but we're only seeing about 33% of the conversions (number of taps) in the GrowthBook experiment results compared to what we see in our raw dataset. Is there any data manipulation/filtering/aggregation happening at the experiment-results level that would explain this, or any other reason why we are missing so many tile taps in our experiment dashboard vs. our raw dataset?
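For reference, the generated experiment queries (see the long example later on this page) only count metric rows that join to an exposed unit and occur at or after that unit's first exposure, which is a common source of this kind of gap. A sketch of how to replicate that filter when comparing against a raw dataset; table and experiment names are illustrative:

```sql
-- Replicate GrowthBook's attribution when comparing counts: only taps
-- from exposed users, at or after their first exposure
SELECT
  COUNT(*) AS attributed_taps
FROM `my-project.analytics.taps` t                -- hypothetical raw metric table
JOIN (
  SELECT user_id, MIN(timestamp) AS first_exposure
  FROM `my-project.analytics.experiment_viewed`   -- hypothetical exposure table
  WHERE experiment_id = 'my-3-variant-test'       -- hypothetical experiment key
  GROUP BY user_id
) e ON e.user_id = t.user_id
WHERE t.timestamp >= e.first_exposure
```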

    jolly-oil-48997

    03/19/2025, 8:32 AM
Hi team, I need some quick help with one of the metrics I created that uses GA4 events data coming from BigQuery as a data source. I have a metric that should get me the total number of rows for the `category_button_click` GA event where its `entryPoint` attribute has the value `productTile`. But currently it gives me `main_sum_square` in the experiment. Ideally I should get the total count of `category_button_click` events where the event param is `productTile`. The combined query is:
```sql
    -- DEV Secondary Metrics - category_button_click where the attribute is entryPoint = productTile (count)
    WITH
      __rawExperiment AS (
        SELECT
          user_id as user_id,
          TIMESTAMP_MICROS(event_timestamp) as timestamp,
          experiment_id_param.value.string_value AS experiment_id,
          variation_id_param.value.int_value AS variation_id,
          geo.country as country,
          traffic_source.source as source,
          traffic_source.medium as medium,
          device.category as device,
          device.web_info.browser as browser,
          device.operating_system as os
        FROM
          `axinan-dev`.`analytics_320528630`.`events_*`,
          UNNEST (event_params) AS experiment_id_param,
          UNNEST (event_params) AS variation_id_param
        WHERE
          (
            (_TABLE_SUFFIX BETWEEN '20250313' AND '20250319')
            OR (
              _TABLE_SUFFIX BETWEEN 'intraday_20250313' AND 'intraday_20250319'
            )
          )
          AND event_name = 'experiment_viewed'
          AND experiment_id_param.key = 'experiment_id'
          AND variation_id_param.key = 'variation_id'
          AND user_id is not null
      ),
      __experimentExposures AS (
        -- Viewed Experiment
        SELECT
          e.user_id as user_id,
          cast(e.variation_id as string) as variation,
          CAST(e.timestamp as DATETIME) as timestamp
        FROM
          __rawExperiment e
        WHERE
          e.experiment_id = 'get-quote-button'
          AND e.timestamp >= '2025-03-13 00:00:00'
          AND e.timestamp <= '2025-03-19 04:26:17'
      ),
      __experimentUnits AS (
        -- One row per user
        SELECT
          e.user_id AS user_id,
          (
            CASE
              WHEN count(distinct e.variation) > 1 THEN '__multiple__'
              ELSE max(e.variation)
            END
          ) AS variation,
          MIN(e.timestamp) AS first_exposure_timestamp
        FROM
          __experimentExposures e
        GROUP BY
          e.user_id
      ),
      __distinctUsers AS (
        SELECT
          user_id,
          cast('' as string) AS dimension,
          variation,
          first_exposure_timestamp AS timestamp,
          date_trunc(first_exposure_timestamp, DAY) AS first_exposure_date
        FROM
          __experimentUnits
      ),
      __metric as ( -- Metric (DEV Secondary Metrics - category_button_click where the attribute is entryPoint = productTile)
        SELECT
          user_id as user_id,
          m.value as value,
          CAST(m.timestamp as DATETIME) as timestamp
        FROM
          (
            SELECT
              user_id,
              user_pseudo_id AS anonymous_id,
              TIMESTAMP_MICROS(event_timestamp) AS timestamp,
              value_param.value.int_value as value
            FROM
              `axinan-dev.analytics_320528630.events_*`,
              UNNEST (event_params) AS value_param
            WHERE
              event_name = 'category_button_click'
              AND EXISTS (
                SELECT
                  1
                FROM
                  UNNEST (event_params) AS ep2
                WHERE
                  ep2.key = 'entryPoint'
                  AND ep2.value.string_value = 'productTile'
              )
              AND EXISTS (
                SELECT
                  1
                FROM
                  UNNEST (event_params) AS ep3
                WHERE
                  ep3.key = 'url'
                  AND ep3.value.string_value LIKE '%staging%'
              )
              AND (
                (_TABLE_SUFFIX BETWEEN '20250313' AND '20250319')
                OR (
                  _TABLE_SUFFIX BETWEEN 'intraday_20250313' AND 'intraday_20250319'
                )
              )
          ) m
        WHERE
          m.timestamp >= '2025-03-13 00:00:00'
          AND m.timestamp <= '2025-03-19 04:26:17'
      ),
      __userMetricJoin as (
        SELECT
          d.variation AS variation,
          d.dimension AS dimension,
          d.user_id AS user_id,
          (
            CASE
              WHEN m.timestamp >= d.timestamp
              AND m.timestamp <= '2025-03-19 04:26:17' THEN m.value
              ELSE NULL
            END
          ) as value
        FROM
          __distinctUsers d
          LEFT JOIN __metric m ON (m.user_id = d.user_id)
      ),
      __userMetricAgg as (
        -- Add in the aggregate metric value for each user
        SELECT
          umj.variation AS variation,
          umj.dimension AS dimension,
          umj.user_id,
          SUM(COALESCE(value, 0)) as value
        FROM
          __userMetricJoin umj
        GROUP BY
          umj.variation,
          umj.dimension,
          umj.user_id
      )
      -- One row per variation/dimension with aggregations
    SELECT
      m.variation AS variation,
      m.dimension AS dimension,
      COUNT(*) AS users,
      SUM(COALESCE(m.value, 0)) AS main_sum,
      SUM(POWER(COALESCE(m.value, 0), 2)) AS main_sum_squares
    FROM
      __userMetricAgg m
    GROUP BY
      m.variation,
      m.dimension
```
and the query I created to extract the data is:
```sql
    SELECT
  user_id,
      user_pseudo_id AS anonymous_id,
      TIMESTAMP_MICROS(event_timestamp) AS timestamp,
      value_param.value.int_value as value
    FROM
      `axinan-dev.analytics_320528630.events_*`,
      UNNEST(event_params) AS value_param
    WHERE
      event_name = '{{eventName}}'
      AND EXISTS (
        SELECT 1
        FROM UNNEST(event_params) AS ep2
        WHERE ep2.key = 'entryPoint' AND ep2.value.string_value = 'productTile'
      )
      AND EXISTS (
        SELECT 1
        FROM UNNEST(event_params) AS ep3
        WHERE ep3.key = 'url' AND ep3.value.string_value LIKE '%staging%'
      )
      AND ((_TABLE_SUFFIX BETWEEN '{{date startDateISO "yyyyMMdd"}}' AND '{{date endDateISO "yyyyMMdd"}}') OR
           (_TABLE_SUFFIX BETWEEN 'intraday_{{date startDateISO "yyyyMMdd"}}' AND 'intraday_{{date endDateISO "yyyyMMdd"}}'))
```
Help would be really appreciated.
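A note on the symptom above, grounded in the generated query itself: whatever the metric SQL returns as `value` is summed per user (`SUM(COALESCE(value, 0))`) and then squared into `main_sum_squares`, so a pure count metric usually wants a constant `1 AS value` rather than an int event param. A minimal sketch of that change, keeping everything else from the extraction query above:

```sql
-- For a pure event count, emit a constant 1 per matching row;
-- GrowthBook then sums it per user, giving a per-user event count
SELECT
  user_id,
  user_pseudo_id AS anonymous_id,
  TIMESTAMP_MICROS(event_timestamp) AS timestamp,
  1 AS value
FROM
  `axinan-dev.analytics_320528630.events_*`
WHERE
  event_name = 'category_button_click'
  AND EXISTS (
    SELECT 1
    FROM UNNEST(event_params) AS ep
    WHERE ep.key = 'entryPoint' AND ep.value.string_value = 'productTile'
  )
```

Note that this sketch also drops the `UNNEST(event_params) AS value_param` cross join, which otherwise multiplies each event row by its number of params.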

    little-balloon-64875

    03/24/2025, 1:28 PM
We are hitting the CDN a lot. I've only been running a couple of tests here and there on our homepage, which does not get millions in traffic over the course of several weeks, but our usage shows 3.1M, which seems insane. I'm trying to figure out why this is so high and how to debug it, before setting up some sort of proxy or CDN of our own.

    average-whale-33542

    03/24/2025, 7:36 PM
I am not seeing my metric in my experiment; the values stay zero even though the data is in the metrics tab. While debugging the SQL statement I found this issue, but I'm not sure how to fix it. Can anyone help me out or hop on a call to resolve this?

    ripe-dinner-7830

    03/27/2025, 2:54 PM
Hi! I want to A/B test the relevance of my menu links, i.e. how well they link to pages that contain products my visitors are interested in. I have metrics for "open menu", "clicked menu item", "viewed product list", "product click", "add to cart", "purchase", etc. What is the best way to measure whether the changes we A/B test in our menu have a positive impact on things like add to cart, purchase, etc.?

    gorgeous-london-26565

    03/31/2025, 12:27 PM
Hi guys, we have a Visual Editor experiment with a simple DOM change and URL targeting. When internally changing the URL to a page that matches the URL targeting (and thereafter calling `setUrl` in the GrowthBook API), we expect the experiment to be applied. But it is only applied when the page with the experiment is the landing page. Does anyone have experience with an issue like that? More context can be found here: https://github.com/growthbook/growthbook/issues/3873

    jolly-oil-48997

    03/31/2025, 1:10 PM
Hi guys, we are currently conducting a GB experiment in our production (PROD) environment, utilizing data sourced from a BigQuery (GA4) database. A metric has been defined to calculate the total count of `category_button_click` events triggered via GA4 between March 18th and March 31st. The metric has this custom query:
```sql
    SELECT
      user_id,
      user_pseudo_id AS anonymous_id,
      TIMESTAMP_MICROS(event_timestamp) AS timestamp,
      (SELECT value.string_value FROM UNNEST(event_params) WHERE key = 'entryPoint') AS entryPoint,
      (SELECT value.string_value FROM UNNEST(event_params) WHERE key = 'url') AS url,
      1 as value
    FROM
      `axinan-prod`.`analytics_323303173`.`events_*`,
      UNNEST(event_params) AS value_param
    WHERE
      event_name = '{{eventName}}'
      AND EXISTS (
        SELECT 1
        FROM UNNEST(event_params) AS ep2
        WHERE ep2.key = 'entryPoint' AND ep2.value.string_value = 'productTile'
      )
      AND ((_TABLE_SUFFIX BETWEEN '{{date startDateISO "yyyyMMdd"}}' AND '{{date endDateISO "yyyyMMdd"}}') OR
           (_TABLE_SUFFIX BETWEEN 'intraday_{{date startDateISO "yyyyMMdd"}}' AND 'intraday_{{date endDateISO "yyyyMMdd"}}'))
  QUALIFY ROW_NUMBER() OVER (PARTITION BY event_timestamp ORDER BY event_timestamp) = 1
```
My query filters for events where `eventName` is `category_button_click`. When I execute this query directly in the Google BigQuery console, I obtain a count of 3326, as shown in the screenshot. However, when I attempt to retrieve the same count using a GrowthBook metric, I receive results for 'Count of users', `main_sum`, and `main_sum_squares` instead of the expected count of `category_button_click` events with the parameter `entryPoint` equal to 'productTile'. How can I configure the GrowthBook metric to return the correct event count? Any help will be really appreciated.
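For context on the columns mentioned above, based on the aggregation visible in the generated query earlier on this page: GrowthBook first sums `value` per user, then sums those per-user totals per variation, so with `1 AS value` the total event count is simply `main_sum`. A sketch of the equivalent warehouse aggregation, with a hypothetical `metric_rows` standing in for the metric query's output:

```sql
-- With value = 1 per matching event, per-user sums are per-user counts,
-- and main_sum per variation is that variation's total event count
WITH per_user AS (
  SELECT
    user_id,
    SUM(value) AS user_count                       -- value is 1 per event row
  FROM metric_rows                                 -- hypothetical: metric query output
  GROUP BY user_id
)
SELECT
  COUNT(*) AS users,                               -- "Count of users"
  SUM(user_count) AS main_sum,                     -- total events: the number you want
  SUM(POWER(user_count, 2)) AS main_sum_squares    -- variance input, not a count
FROM per_user
```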

    loud-guitar-35849

    04/02/2025, 1:47 AM
Hi! Do you have any courses, lectures, or videos on A/B testing that you could recommend? Or maybe coaches who could guide us through? I joined the GoodUI course thanks to a recent GB recommendation; I wonder if there are more, maybe more about the statistical fundamentals of testing, and at the same time practical. For example, most don't consider the price of the change, and often I'd apply an insignificant, slightly positive change just because it's free in terms of code and support. I don't care about being right, I care about the bottom line. Preferably materials that include sequential approaches and Bayesian tests. I really like the GB docs: clear, practical, concise, but I'd like to go a bit deeper. At the same time, posts by Ronny Kohavi, with all the papers he mentions, sound either too deep or just nonsense; I can't tell which. We run hundreds of tests and have had great results growing the business to $1M/mo, but we cannot really confirm that with our holdout tests and LTV trend lines. We're refining our processes, and I'd love to acquire confidence in my knowledge and our approach to testing.

    clean-dentist-82655

    04/03/2025, 2:52 PM
Hi! I've encountered an issue repeatedly where GrowthBook fails to activate my A/B tests, citing the error "Skip because missing hashAttribute". In this instance, the hashAttribute is set to "user_id", but I've also observed the same error when using "id". Could you please explain the root cause of this issue and advise on how to rectify it?

    salmon-lamp-78940

    04/09/2025, 10:59 AM
Hello! I'm doing a test run of GrowthBook, just like @little-balloon-64875 a couple of months ago, and I'm hitting the same issue. I've added around 20-30 events and have 10 total users in the experiment, but I'm not seeing a report graph. I'm using BigQuery with GA4 and have set up a fact table and metrics. My goal is as follows: in our app we have two export versions, 3.0 and 4.0, and we have a glow effect around the 4.0 export. I want to see whether turning off that glow effect gets me fewer 4.0 exports. Practically it probably doesn't need to be compared to 3.0; all I care about is an increase or decrease in the number of 4.0 exports. Can anyone tell if I'm doing anything wrong here? Maybe there's a users/events threshold? I've set the minimum metric total to 0, so it should pick the data up. In PostHog, as a comparison, it was quite easy, and just 4-5 users in different groups were enough to see the chart. Sharing to give a better picture of what I'm trying to get. (Ignore the warning, I did that kind of on purpose.)

    flat-park-58308

    04/10/2025, 6:04 PM
Hi all, we just started implementing GrowthBook and our setup basically works, but I have a question around tracking click-through rate. We are a news publisher and our frontpage has many modules containing news articles. What we mainly test is different module versions (personalized vs. curated, for example). All these module boxes have view and click tracking, so we are able to calculate a click-through rate. However, what I don't understand is how we would be able to create a general click-through rate metric that works for the exact module box that is in the specific experiment. We can manually create separate click-through rates for each module, but that seems exhausting. Any tips and tricks? Can I use some kind of variable in the SQL that filters for an experiment id? Other workarounds? Thanks for your help!
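For reference, one pattern that avoids a separate metric per module, sketched under assumed event names: put a `module_id` column on a single fact table covering module views and clicks, then narrow a generic CTR ratio metric to one module with a filter on that column (GrowthBook fact-table filters are plain SQL predicates over the fact table's columns):

```sql
-- One fact table covering every module box; a generic CTR metric can then
-- be scoped to a single module via a filter on module_id
SELECT
  user_id,
  TIMESTAMP_MICROS(event_timestamp) AS timestamp,
  event_name,                                     -- 'module_view' or 'module_click'
  (SELECT value.string_value FROM UNNEST(event_params)
    WHERE key = 'module_id') AS module_id         -- hypothetical event param
FROM `my-project.analytics_123456789.events_*`    -- hypothetical GA4 export
WHERE event_name IN ('module_view', 'module_click')
```

A ratio metric of `module_click` rows over `module_view` rows, filtered on `module_id`, then yields a per-module CTR without writing separate SQL for each module.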

    shy-river-35647

    04/10/2025, 6:22 PM
Hi team, I had a quick question on re-ramping experiments. Based on the wiki here, we figured that re-ramping an experiment would re-bucket the users under a new experiment key. I do see a new experiment key after the re-ramp, with `-re` appended to <test_name>. But in the data warehouse we still see traffic with the old name <test_name>. Is this expected? We were trying to find traffic with the new key, but no data was found. I wanted to confirm whether using the old experiment key would still re-bucket the users.

    colossal-ability-68565

    04/14/2025, 7:28 PM
Hi GrowthBook team, I'm facing an issue where a custom metric I created in GrowthBook returns different results from what I get when querying directly in BigQuery. I'm trying to configure everything inside GrowthBook to match the numbers from BigQuery, which are highly reliable in our case.
Context:
1. I've created a Fact Table in GrowthBook based on a BigQuery query.
2. Then I set up a Ratio metric to calculate the percentage of users with a specific value (`"error"`) in the `return` field.
3. However, the value returned by GrowthBook does not match the expected result calculated directly in BigQuery using the same logic.
BigQuery Reference Query:
```sql
    WITH fact_table AS (
      SELECT
        e.user_id,
        e.user_pseudo_id AS anonymous_id,
        TIMESTAMP_MICROS(event_timestamp) AS timestamp,
        (SELECT value.string_value FROM UNNEST(event_params) WHERE key = 'description') AS description,
        (SELECT value.string_value FROM UNNEST(event_params) WHERE key = 'socialNetworkName') AS socialNetworkName,
        (SELECT value.string_value FROM UNNEST(event_params) WHERE key = 'return') AS return
      FROM
        `******.analytics_2********2.events_*` e
      WHERE
        event_name = "connection_social_network"
        AND (SELECT value.string_value FROM UNNEST(event_params) WHERE key = 'socialNetworkName') = "facebook"
        AND (SELECT value.string_value FROM UNNEST(event_params) WHERE key = 'socialNetworkName') IS NOT NULL
        AND (
          (SELECT value.string_value FROM UNNEST(event_params) WHERE key = 'return') = 'success'
          OR (
            (SELECT value.string_value FROM UNNEST(event_params) WHERE key = 'return') = 'error'
            AND (SELECT value.string_value FROM UNNEST(event_params) WHERE key = 'description') IS NOT NULL
          )
        )
        AND _TABLE_SUFFIX BETWEEN FORMAT_DATE('%Y%m%d', DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY))
        AND FORMAT_DATE('%Y%m%d', DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY))
    )
    
    SELECT
      DATE(timestamp, 'America/Sao_Paulo') AS dia,
      COUNTIF(return = 'error') AS numerador,
      COUNT(*) AS denominador,
      ROUND(COUNTIF(return = 'error') / COUNT(*) * 100, 2) AS percent_error,
      ROUND(COUNTIF(return = 'success') / COUNT(*) * 100, 2) AS percent_success
    FROM fact_table
    GROUP BY dia
    ORDER BY dia DESC;
```
GrowthBook Setup
• Fact Table: Filter Instagram connections
• Filters:
  ▪︎ `event_name = "connection_social_network"`
  ▪︎ `socialNetworkName = "facebook"`
  ▪︎ `return = "error"` or `"success"`
  ▪︎ `description IS NOT NULL`
• Metric Type: Ratio
  ◦ Numerator: Count of rows where `return = "error"`
  ◦ Denominator: Total count of rows after experiment exposure
• Metric Goal: Decrease metric value
👉 Please refer to the attached screenshots for the exact configuration:
• Fact Table SQL setup
• Metric configuration
• SQL preview from GrowthBook
Issue: Despite replicating the logic between BigQuery and GrowthBook, the metric result shown inside GrowthBook doesn't match the output from BigQuery. I suspect it might be related to:
• Differences in default time zones
• Filter logic timing (e.g., `timestamp >= exposure_timestamp`)
• Unseen row-filtering nuances
What I need help verifying:
1. Whether my metric configuration matches the logic from BigQuery correctly.
2. Whether there are any known nuances in how GrowthBook processes time ranges or filters that could explain the mismatch.
3. Suggestions to ensure alignment between GB metrics and BQ calculations.
Thanks in advance! Let me know if you need more information. Best, Pedro
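For reference, one concrete difference worth ruling out first: the BigQuery query above is a row-level daily ratio over a fixed 30-day window in the America/Sao_Paulo time zone, while a GrowthBook ratio metric only counts rows at or after each user's first exposure (the `__userMetricJoin` pattern visible in the generated query earlier on this page) and runs on the warehouse's default UTC timestamps. A sketch of the exposure-scoped version for an apples-to-apples comparison, with an illustrative exposures table:

```sql
-- Exposure-scoped ratio, closer to what GrowthBook computes:
-- only rows at/after each user's first exposure, aggregated per variation
SELECT
  e.variation,
  ROUND(COUNTIF(f.return = 'error') / COUNT(*) * 100, 2) AS percent_error
FROM fact_table f                         -- the CTE from the query above
JOIN exposures e                          -- hypothetical: user_id, variation, first_exposure
  ON e.user_id = f.user_id
 AND f.timestamp >= e.first_exposure
GROUP BY e.variation
```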

    witty-laptop-42616

    04/25/2025, 7:21 AM
While not ideal, sometimes in an agency scenario we're limited to working with a single JavaScript file placed in the <head> of our client's website, similar to how almost any client-side testing tool would work. Has anyone else been in a situation where they needed to implement GrowthBook for experimentation this way? If so, could you share some tips or code examples? My plan so far:
1. In the JS file, load the GB JS SDK
2. Define all relevant attributes and anonymous ID logic (using cookies)
3. What's the best call to get the experiment-variant decision, considering targeting rules, traffic split, etc. are set on the Experiment in GB?
4. Anything else to consider?

    billions-motorcycle-11145

    05/06/2025, 5:43 PM
Hi team, is there a way to extract the experimentation split (i.e. user IDs of each variation) from the GrowthBook dashboard directly?
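For reference, with a warehouse-connected data source the per-variation user lists can also be pulled straight from the exposure events. A minimal BigQuery sketch against a GA4 export; table and experiment names are illustrative:

```sql
-- One row per (user, variation) for a given experiment
SELECT DISTINCT
  user_id,
  variation_id_param.value.int_value AS variation_id
FROM
  `my-project.analytics_123456789.events_*`,  -- hypothetical GA4 export
  UNNEST(event_params) AS experiment_id_param,
  UNNEST(event_params) AS variation_id_param
WHERE
  event_name = 'experiment_viewed'
  AND experiment_id_param.key = 'experiment_id'
  AND experiment_id_param.value.string_value = 'my-experiment'  -- hypothetical key
  AND variation_id_param.key = 'variation_id'
```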

    late-dentist-52023

    05/07/2025, 12:57 AM
Pseudo-A/A experiment setup for evaluating metric volatility and helping people see aspects of Type 1, Type 2, and magnitude errors. I thought others might benefit from this query, as it is rather trivial to set up and I have found it quite useful. We run A/A experiments with some regularity to validate end-to-end unbiasedness in our experiments. Beyond validating that things are working as intended, these can be really helpful in showing people that random chance generates significant results on occasion, and so they become a good learning tool. To help teammates understand the metrics they are using better, I generated a new experiment query in our primary data source that allows for an A/A experiment that only happens in the data warehouse (hence the "Pseudo" prefix, as no experiment ever got deployed). The result is that I can click "update" as often as I want and, each time I do so, GrowthBook fakes an "A/A" experiment. Users get randomly assigned, I can use existing dimensions for pseudo-analysis, and I can show people how many primary/secondary/guardrail metrics show up as "significant" with each "experiment" run. Example query in thread.
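The thread query itself isn't reproduced on this page, but the general shape is straightforward to sketch: an exposure query that deterministically hashes every unit into a fake variation, so GrowthBook analyzes it like any deployed experiment. A minimal BigQuery sketch under assumed table and column names; changing the salt re-randomizes the "experiment":

```sql
-- Pseudo-A/A exposure query: no experiment is ever deployed.
-- Each user is deterministically hashed into one of two fake variations.
SELECT
  user_id,
  MIN(activity_timestamp) AS timestamp,      -- first activity acts as "exposure"
  'pseudo-aa' AS experiment_id,              -- hypothetical experiment key
  CAST(
    MOD(ABS(FARM_FINGERPRINT(CONCAT('aa-salt-1', CAST(user_id AS STRING)))), 2)
    AS STRING
  ) AS variation_id
FROM `my-project.analytics.daily_activity`   -- hypothetical activity table
GROUP BY user_id
```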

    little-balloon-64875

    05/16/2025, 10:39 AM
Cross-posting just in case I put this in the wrong spot.

    lively-tiger-66465

    05/19/2025, 3:39 PM
Am I correct that the assignment into buckets uses a cookie with a random ID, and that the tracking in GA4 uses the user_pseudo_id, so there is no link between the assignment into buckets and the tracking in GA4? GrowthBook can analyze the GA4 tracking by using experiment_viewed events grouped by user_pseudo_id, but will never be able to link them back to a "bucket cookie". Is this correct?

    little-balloon-64875

    05/19/2025, 5:05 PM
We're just about to end a ~14-day test with 11 variants (plus control), and today we got alerted to a traffic mismatch. This hasn't shown up on any other tests before, and the traffic variance for experiments with only a few variants is always within an acceptable amount, so it's hard to know if this is an implementation issue or something else. The only advice in the docs is to "review implementation", but that doesn't really get me anywhere. It looks like traffic slowly goes down from variant 1 to 12, but it's not consistent, and there's no real way to understand why some variants would have lower traffic and others wouldn't. Just not sure where to start looking on this one.
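For reference, a first step that can be done entirely in the warehouse is to recompute per-variation unit counts from the raw exposure events, flagging units seen in more than one variation, mirroring the `__experimentUnits` logic in the generated query earlier on this page; names are illustrative:

```sql
-- Recompute per-variation unit counts straight from exposure events,
-- separating out users who were exposed to multiple variations
WITH units AS (
  SELECT
    user_id,
    CASE
      WHEN COUNT(DISTINCT variation_id) > 1 THEN '__multiple__'
      ELSE CAST(MAX(variation_id) AS STRING)
    END AS variation
  FROM `my-project.analytics.experiment_viewed`  -- hypothetical exposure table
  WHERE experiment_id = 'my-12-variant-test'     -- hypothetical experiment key
  GROUP BY user_id
)
SELECT
  variation,
  COUNT(*) AS users
FROM units
GROUP BY variation
ORDER BY variation
```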

    witty-laptop-42616

    06/03/2025, 6:56 AM
We have a fresh GB setup using the JavaScript SDK and GA4/BQ as the data source. We just finished an A/A test, and it shows a significant lift for all 3 key metrics for V1. No SRM or other anomalies. Looking into the segments, the only thing I noticed was that V1 has noticeably more iOS users, while Control has more Windows users. Any ideas what could be the problem here? We're definitely running a new A/A test, but I thought maybe someone has been here before.
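For reference, the OS imbalance can be quantified directly against the raw exposures; a BigQuery sketch reusing the device fields that GA4 exposure queries already select (table and experiment names are illustrative):

```sql
-- Exposure counts per variation and OS, to size the iOS/Windows skew
SELECT
  variation_id_param.value.int_value AS variation_id,
  device.operating_system AS os,
  COUNT(DISTINCT user_id) AS users
FROM
  `my-project.analytics_123456789.events_*`,  -- hypothetical GA4 export
  UNNEST(event_params) AS variation_id_param
WHERE
  event_name = 'experiment_viewed'
  AND variation_id_param.key = 'variation_id'
  AND EXISTS (
    SELECT 1
    FROM UNNEST(event_params) AS ep
    WHERE ep.key = 'experiment_id'
      AND ep.value.string_value = 'my-aa-test'  -- hypothetical key
  )
GROUP BY variation_id, os
ORDER BY variation_id, users DESC
```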

    polite-rainbow-93528

    06/13/2025, 12:55 PM
We have run into a situation where an experiment is disabled on production in GrowthBook, but new users are still getting assigned to the variant. The experiment was stopped an hour ago, and even the toggle for the feature is off for production. Yet every minute, between 4 and 13 users get assigned to the variant, and some of them are first-time users. Any ideas what might be causing that?

    purple-art-11901

    06/23/2025, 7:16 PM
Hey GrowthBook team! I would like some help understanding why our first bandit experiments are not capturing data. Our exploratory stage lasted 3 days, and we have been running a couple of experiments like these for almost a month with the same issue. I can see traffic but no data coming from our main decision metric. I have checked with other data sources and can confirm that this is not reflective of what's happening (the decision metric is moving for those exposed). It's our first time running this type of experiment, so it would be great to have some guidance, please!

    numerous-machine-2736

    07/10/2025, 9:05 PM
Hello everyone. We love GrowthBook, but I don't have a lot of confidence in our current setup. We are getting inconsistent test results. Is there a company or developer you could recommend to help us review our setup?

    creamy-breakfast-20482

    07/22/2025, 1:16 PM
    Hey there! My team would like to run a non-inferiority test using GrowthBook but I'm having trouble setting this up. Is this type of experiment supported by the platform?

    late-ambulance-66508

    07/27/2025, 10:21 AM
Hi! I have a request to adjust the retention metric calculation. Currently, the denominator includes all users, which I believe is incorrect for longer-term metrics. Take Week 1 Retention, for example. During the first 7 days after the experiment starts, the metric is 0 divided by the total number of users, and it only gradually starts increasing afterward. This creates a misleading picture, as the metric naturally grows over time. I think the denominator should include only users who had a chance to reach 7+ days in the app, in other words, those whose exposure date allows for observing their Week 1 behavior. This way, we can avoid the metric artificially inflating throughout the experiment just due to time passing.
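For reference, the proposed denominator restriction can be expressed in the warehouse as a filter on exposure age; a minimal sketch assuming an exposures table and a week-1 activity table (all names are illustrative):

```sql
-- Week 1 retention where the denominator only includes users whose
-- exposure is at least 7 days old, so week-1 behavior is observable
SELECT
  e.variation,
  COUNTIF(a.user_id IS NOT NULL) / COUNT(*) AS week1_retention
FROM `my-project.analytics.exposures` e             -- hypothetical exposure table
LEFT JOIN `my-project.analytics.week1_activity` a   -- hypothetical: active on days 7-13
  ON a.user_id = e.user_id
WHERE e.first_exposure_timestamp <= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
GROUP BY e.variation
```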

    late-ambulance-66508

    07/28/2025, 8:42 AM
Another feature request: it would be convenient to be able to override the timestamp value for a specific metric. We have several datetime fields in the fact table that could be used as the primary timestamp depending on the metric.