wooden-farmer-44624
11/07/2025, 5:41 PM
Failed to parse custom headers: "Vercel" not defined in [object Object] - 1:10
When I update the headers to manually input the token for Authorization, I get:
{"error":{"code":"forbidden","message":"Not authorized","invalidToken":true}}
When I run the exact same request via Postman, I don't get an invalidToken error
jolly-spring-84802
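A common reason the same token works in Postman but returns invalidToken elsewhere is that the Bearer prefix gets dropped, or a template placeholder is sent literally instead of the expanded token (GrowthBook's REST API authenticates with an `Authorization: Bearer <secret>` header). A minimal sketch of the expected header shape; the helper name and key are invented:

```typescript
// Hypothetical helper: builds the headers GrowthBook's REST API expects.
// The key value below is made up for illustration.
function buildAuthHeaders(apiKey: string): Record<string, string> {
  if (!apiKey) {
    // Fail fast instead of sending "Bearer undefined"
    throw new Error("Missing GrowthBook API key");
  }
  return {
    // The "Bearer " prefix is required; a bare key is rejected.
    Authorization: `Bearer ${apiKey}`,
    "Content-Type": "application/json",
  };
}

const headers = buildAuthHeaders("secret_abc123"); // hypothetical key
console.log(headers.Authorization); // "Bearer secret_abc123"
```

Logging the final header value right before the request (as above) is a quick way to spot an unexpanded `{{token}}`-style placeholder.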
11/10/2025, 8:16 AM
gentle-oil-90501
11/10/2025, 11:19 AM
/experiments endpoint request by dateUpdated
Thanks!
adorable-bear-66287
11/10/2025, 1:34 PM
anonymous_id (device_id) from web that can map to customer_id when users log in. Our existing join table handles 1:1 ID mappings across systems perfectly (customer_id ↔ other_id_1 ↔ other_id_2).
The problem: We now need to handle m:1 relationships (multiple anonymous_id → one customer_id). I created a separate join table for anonymous_id ↔ customer_id, but metrics that depend on other IDs (e.g., customer_id ↔ other_id_1 ↔ other_id_2) don't work with this approach.
Question: Should we create one consolidated table with all ID combinations (anonymous_id ↔ customer_id ↔ other_id_1 ↔ other_id_2)? Will GrowthBook handle the necessary grouping/distincting across all these IDs correctly?
dazzling-honey-61665
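Whether GrowthBook's join-table support performs the grouping correctly is a question for its docs or support, but the resolution logic being asked about can be sketched as a two-step lookup: collapse the m:1 anonymous_id mapping down to customer_id first, then join the existing 1:1 table. A toy sketch with invented data (table and ID names taken from the question):

```typescript
// m:1 mapping: many anonymous_ids -> one customer_id
const anonToCustomer = new Map<string, string>([
  ["anon_1", "cust_42"],
  ["anon_2", "cust_42"], // same customer on a second device
]);

// Existing 1:1 join table: customer_id <-> other_id_1 <-> other_id_2
const customerJoin = new Map<string, { other_id_1: string; other_id_2: string }>([
  ["cust_42", { other_id_1: "o1_a", other_id_2: "o2_b" }],
]);

// Every anonymous_id lands on exactly one row of the 1:1 table, so metrics
// keyed on other_id_1/other_id_2 still attribute to a single customer,
// provided downstream aggregation deduplicates on customerId.
function resolve(anonymousId: string) {
  const customerId = anonToCustomer.get(anonymousId);
  return customerId ? { customerId, ...customerJoin.get(customerId)! } : null;
}

console.log(resolve("anon_1"));
console.log(resolve("anon_2")); // same customer row as anon_1
```

The design point this illustrates: a consolidated wide table would repeat the 1:1 columns once per anonymous_id, so anything joining through it must distinct on customer_id to avoid double counting; keeping the m:1 edge in its own table confines the fan-out to one join.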
11/11/2025, 9:07 AM
able-beach-75525
11/11/2025, 3:50 PM
wide-journalist-37256
11/11/2025, 9:13 PM
rapid-cat-65267
11/12/2025, 7:45 AM
ripe-tiger-2213
11/12/2025, 8:35 AM
brief-umbrella-80783
11/12/2025, 10:26 AM
delightful-actor-55865
11/12/2025, 10:47 PM
SHOW and `USAGE`/`SELECT` permissions on the necessary Catalogs/Schemas, even if the basic connection works? Any guidance on these points would be hugely appreciated as we finalize the Databricks setup! Thanks a lot!
wooden-architect-50910
11/13/2025, 3:34 AM
lively-evening-1890
11/13/2025, 8:47 AM
quick-apple-64181
11/13/2025, 1:14 PM
mysterious-florist-78871
11/14/2025, 10:04 AM
http://{custom_domain}:3100.
Has anyone successfully hosted the project on Azure and has some pointers? 😅
I feel like Ingress might be the culprit, since it doesn't let me open up more than one HTTP port. (It only supports multiple TCP ports, afaik.)
dry-ambulance-7044
11/14/2025, 6:27 PM
able-stone-9657
11/15/2025, 4:09 AM
The getProviderData() function from @flags-sdk/growthbook doesn't seem to map flag options from experiment variations tied to that flag in GrowthBook.
There's a discrepancy between two documentation approaches for Next.js + the Vercel Flags Explorer:
1. Next.js and Vercel Feature Flags Tutorial - manual implementation of getFlagApiData() that explicitly loops through the GrowthBook payload to extract options from rules, variations, and force values. (older)
2. Next.js SDK (Vercel Flags) Documentation - Uses the built-in getProviderData() function from @flags-sdk/growthbook package, which should handle this automatically. (new)
Issue:
When using the SDK's getProviderData() function as shown in the newer docs, the flags appear in the toolbar with the default as their only option.
Question:
Is this a known limitation of getProviderData(), or did I miss something in the docs?
green-church-13845
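For comparison, the older tutorial's approach boils down to walking the payload yourself. The sketch below is not the tutorial's exact getFlagApiData() code, just the idea it describes; the field names (defaultValue, rules, force, variations) follow GrowthBook's SDK feature-payload format, and the sample data is invented:

```typescript
type FeatureRule = { force?: unknown; variations?: unknown[] };
type FeatureDef = { defaultValue?: unknown; rules?: FeatureRule[] };

// Collect every distinct value a flag can take: the default, any forced
// values, and any experiment variations, deduplicated by JSON value.
function collectOptions(feature: FeatureDef): unknown[] {
  const seen = new Set<string>();
  const options: unknown[] = [];
  const add = (v: unknown) => {
    const key = JSON.stringify(v);
    if (!seen.has(key)) {
      seen.add(key);
      options.push(v);
    }
  };
  add(feature.defaultValue);
  for (const rule of feature.rules ?? []) {
    if ("force" in rule) add(rule.force);
    for (const v of rule.variations ?? []) add(v);
  }
  return options;
}

// Example: a boolean flag with an A/B experiment rule
const options = collectOptions({
  defaultValue: false,
  rules: [{ variations: [false, true] }],
});
console.log(options); // [ false, true ]
```

If getProviderData() only surfaces the default, a manual walk like this over the same payload is a way to confirm the variations are actually present in what the API returns.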
11/17/2025, 7:32 AM
mysterious-florist-78871
11/17/2025, 1:17 PM
witty-rocket-75751
11/18/2025, 6:57 AM
failed to lookup host, and we wanted to implement a retry mechanism when initialization fails. GBSDKBuilder provides a failure callback, but there seems to be no init retry mechanism built around failure handling. Our init logic is written as a singleton, so creating a new object when init fails doesn't seem viable either. How can we tackle this problem? Are there any helpful built-in methods that GB provides for init retry?
fresh-pencil-44589
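The SDK in question is the mobile one, so this is only the general pattern, sketched in TypeScript: wrap the init call in a retry helper with backoff and invoke it from the failure path instead of constructing a second singleton. `initSdk` is a hypothetical stand-in for whatever the builder's init call is:

```typescript
// Generic retry with exponential backoff around an init that may fail
// (e.g. a transient DNS "failed to lookup host" error).
async function initWithRetry<T>(
  initSdk: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 1000,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await initSdk();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts) {
        // Backoff: 1s, 2s, 4s, ... before the next attempt
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
      }
    }
  }
  throw lastError;
}
```

With a singleton, the failure callback can re-invoke the same init routine through a helper like this rather than creating a fresh instance, so the singleton contract is preserved.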
11/18/2025, 3:27 PM
2025-11-18T15:22:00.113Z a762f8a1-1df6-4c0b-98aa-686ec37e6ff8 ERROR Unhandled Promise Rejection
{
  "errorType": "Runtime.UnhandledPromiseRejection",
  "errorMessage": "TypeError: Invalid URL",
  "reason": {
    "errorType": "TypeError",
    "errorMessage": "Invalid URL",
    "code": "ERR_INVALID_URL",
    "input": "undefined",
    "stack": [
      "TypeError: Invalid URL",
      "    at new URL (node:internal/url:825:25)",
      "    at Ms (/var/task/index.js:31:69858)",
      "    at Object.w3 [as proxyRequest] (/var/task/index.js:31:75389)",
      "    at /var/task/index.js:31:65455",
      "    at Generator.next (<anonymous>)",
      "    at s (/var/task/index.js:31:64093)"
    ]
  },
  "promise": {},
  "stack": [
    "Runtime.UnhandledPromiseRejection: TypeError: Invalid URL",
    "    at process.<anonymous> (file:///var/runtime/index.mjs:1448:17)",
    "    at process.emit (node:events:519:28)",
    "    at emitUnhandledRejection (node:internal/process/promises:252:13)",
    "    at throwUnhandledRejectionsMode (node:internal/process/promises:388:19)",
    "    at processPromiseRejections (node:internal/process/promises:475:17)",
    "    at process.processTicksAndRejections (node:internal/process/task_queues:106:32)"
  ]
}
Nothing is GrowthBook-specific in those logs, but I'm guessing it's coming from something GrowthBook is doing.
A couple of questions:
1. PROXY_TARGET — I'm pointing this to our S3 bucket that serves the website, which is http, not https — is that okay? If I use https, that'd point back to a CloudFront distro URL, and I think it'd go into some kind of infinite redirect loop.
2. Is there a way to test this without having to redeploy my CloudFront distro and wait for the website to break?
billions-house-96196
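The `"input": "undefined"` in the reason block suggests `new URL()` was called with an unset value, i.e. PROXY_TARGET (or another URL setting) may not be reaching the function at runtime; notably, if this runs as Lambda@Edge, environment variables are not supported there at all and config has to be bundled into the deployment. A fail-fast guard makes the missing value explicit at cold start instead of an unhandled rejection mid-request. The helper name is hypothetical:

```typescript
// Validate a URL-valued environment variable once, at module init,
// so a missing or malformed value fails loudly and immediately.
function requireUrlEnv(name: string): URL {
  const value = process.env[name];
  if (!value) {
    throw new Error(`${name} is not set`);
  }
  return new URL(value); // throws ERR_INVALID_URL here if malformed
}

// e.g. at the top of the handler module:
// const proxyTarget = requireUrlEnv("PROXY_TARGET");
```

This also gives a cheap local test path for the second question: run the bundled handler under plain Node with the variable set and unset, and the guard answers whether the config is reaching the code before any CloudFront redeploy.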
11/18/2025, 5:00 PM
clever-hair-5481
11/18/2025, 5:30 PM
microscopic-ocean-92851
11/18/2025, 10:41 PM
Metric: Retention
Fact table: Events, Row filter: event_name = 'page_view'
However the lowest time I am able to set is: Event must be at least 1 Minutes after experiment exposure
Which effectively is timestamp >= (exposure_timestamp + '1 minutes')
However, I wish I could use: timestamp > (exposure_timestamp)
Note: not equal
Is this possible with a fact metric?
magnificent-furniture-52657
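The gap between the two conditions is exactly the first minute after exposure; a quick check with invented timestamps makes that concrete:

```typescript
// An event 30s after exposure passes "> exposure" but fails
// ">= exposure + 1 minute" — the rows the question wants to keep.
const exposure = Date.parse("2025-11-18T12:00:00Z");
const event = Date.parse("2025-11-18T12:00:30Z"); // 30s after exposure

const strictlyAfter = event > exposure;                   // the desired condition
const atLeastOneMinuteAfter = event >= exposure + 60_000; // what the UI produces

console.log(strictlyAfter, atLeastOneMinuteAfter); // true false
```

So any events landing within (exposure, exposure + 1 min] are silently dropped by the 1-minute setting, which is the behavioral difference being asked about.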
11/19/2025, 12:29 PM
damp-boots-45877
11/20/2025, 5:54 AM
damp-boots-45877
11/20/2025, 6:41 AM
many-dusk-66209
11/21/2025, 1:00 PM
ancient-morning-80869
11/24/2025, 7:38 PM
incalculable-nightfall-35029
11/25/2025, 5:48 AM
1. Ratio metric with Numerator = SUM(profit per order), Denominator = Unique Users
2. Mean metric with per user aggregation = SUM(profit per order).
When defining the Mean metric (2), the GrowthBook interface says:
> The final metric value will be the average per-user value for all users in the experiment. Any user without a matching row will have a value of 0 and will still contribute to this average.
So I don't understand the difference between a denominator of "Unique Users" in 1 and "average per-user value" in 2. I would expect these two metrics to be identical. However, on the experiment overview page I can see they have the same numerator value (1.7m), but the Ratio (1) has 5k Unique Users for the denominator, while the Mean (2) has a denominator of 83k.
We're doing an A/A test (no difference between the two treatments) and are seeing GrowthBook report a statistically significant relative uplift for the Ratio metric, but not for the Mean metric. I would like to understand the details of what is happening behind the scenes a bit better.
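One reading of the 5k vs 83k discrepancy: the ratio metric's Unique Users denominator counts only users who have at least one matching row in the denominator fact table, while the mean metric divides the same numerator by every exposed user, counting users without rows as 0. Whether this matches GrowthBook's exact SQL is worth confirming, but that interpretation with invented numbers looks like this:

```typescript
// Toy example: 10 users in the experiment, only 4 of them have order rows.
const exposedUsers = 10;
const profitPerUserWithOrders = [50, 30, 10, 10]; // the 4 users with rows
const totalProfit = profitPerUserWithOrders.reduce((a, b) => a + b, 0); // 100

// Ratio metric: denominator counts only users with matching rows
const ratioMetric = totalProfit / profitPerUserWithOrders.length; // 100 / 4 = 25

// Mean metric: users without rows contribute 0, so all exposed users count
const meanMetric = totalProfit / exposedUsers; // 100 / 10 = 10

console.log(ratioMetric, meanMetric); // 25 10
```

Beyond the point estimates, the two metric types also use different variance calculations (a ratio of two random quantities versus a simple per-user mean), which can plausibly produce a significant result on one but not the other even in an A/A test with identical numerators.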