Kallin Nagelberg
10/20/2025, 5:26 PM
workloadLauncher.extraEnv, but I'm not certain.
***Questions:***
1. Is workloadLauncher.extraEnv the correct place for these variables in Helm v2?
2. Is there newer documentation or a different approach I should be using instead?
3. Would it be possible to update the docs to show the full values.yaml structure?
Currently using:
- Airbyte 2.0 with Helm v2 charts
- K3s cluster
- RDS Postgres → Snowflake connection
Thanks for any guidance! The product is great, just trying to get the resource configuration sorted. 🙏
Thanks!

kapa.ai
10/20/2025, 5:26 PM
### 1. Is workloadLauncher.extraEnv the correct place for these variables in Helm v2?

No, for Helm v2 charts, you should not use workloadLauncher.extraEnv for the main resource variables. In Helm v2, many environment variables that were previously set via extraEnv now have dedicated fields in the values.yaml file. For resource limits like memory and CPU, you should use the new structured fields under `global.workloads.resources.mainContainer`:

```yaml
global:
  workloads:
    resources:
      mainContainer:
        memory:
          request: 4Gi
          limit: 4Gi
        cpu:
          request: 1
          limit: 2
```

This is the recommended approach for instance-wide resource settings in Helm v2. The documentation explicitly maps the old environment variables to these new fields:

- JOB_MAIN_CONTAINER_MEMORY_REQUEST → global.workloads.resources.mainContainer.memory.request
- JOB_MAIN_CONTAINER_MEMORY_LIMIT → global.workloads.resources.mainContainer.memory.limit
- (and similarly for CPU)

[See mapping table](https://docs.airbyte.com/platform/1.8/enterprise-setup/chart-v2-enterprise#update-your-valuesyaml-file)

If you need to set resource requirements for specific job types (like check, discover, or spec), you may need to use the corresponding fields under global.jobs.kube.scheduling or, if no direct mapping exists, use extraEnv as a fallback for those specific cases.

---

### 2. Is there newer documentation or a different approach I should be using instead?

Yes, the newer approach is to use the structured fields in values.yaml as described above. The documentation for Helm v2 charts (Airbyte 2.0+) provides a full mapping of environment variables to their new locations in the values file. This is a change from Helm v1, where extraEnv was commonly used.

[See the full mapping and examples](https://docs.airbyte.com/platform/1.8/enterprise-setup/chart-v2-enterprise#update-your-valuesyaml-file)

---

### 3. Would it be possible to update the docs to show the full values.yaml structure?

The documentation does provide examples of the new structure, but it could be clearer. Here's a minimal example for setting instance-wide memory for all connector pods in Helm v2:

```yaml
global:
  workloads:
    resources:
      mainContainer:
        memory:
          request: 4Gi
          limit: 4Gi
```

If you need to set job-type-specific resources (e.g., for check/discover/spec jobs), you may need to use the corresponding fields under global.jobs.kube.scheduling or use extraEnv for those specific variables if no direct mapping exists.

[See example and mapping](https://docs.airbyte.com/platform/1.8/enterprise-setup/chart-v2-enterprise#update-your-valuesyaml-file)

---

Summary:
- Use global.workloads.resources.mainContainer.memory.request and .limit for instance-wide settings in Helm v2.
- Only use extraEnv for variables that do not have a direct mapping in Helm v2.
- The documentation mapping table is your best reference for where to put each variable.

If you follow this structure, your connector pods (including your Postgres source) should get the increased memory allocation you need. If you have a specific job type (like check/discover/spec) that still OOMs, you may need to set those job-type-specific variables as well, either via the new structured fields or, if not available, via extraEnv as a fallback. Let me know if you need a more detailed example for job-type-specific settings!

References:
- [Upgrade to Helm chart V2 (Self-Managed Enterprise) – Update your values.yaml file](https://docs.airbyte.com/platform/1.8/enterprise-setup/chart-v2-enterprise#update-your-valuesyaml-file)
- [Configuring Connector Resources](https://docs.airbyte.com/platform/operator-guides/configuring-connector-resources)

Tom Sweeting
10/20/2025, 5:30 PM

```yaml
global:
  workloads:
    resources:
      mainContainer:
        cpu:
          request: "2"
          limit: "4"
        memory:
          request: "6Gi"
          limit: "12Gi"
```
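As a cross-check, the mapping quoted from the docs above implies these structured fields render into the `airbyte-airbyte-env` ConfigMap as `JOB_MAIN_CONTAINER_*` variables. A sketch of the expected correspondence (values illustrative; the comment mappings come from the docs excerpt, not verified against the chart templates):

```yaml
# values.yaml (Helm chart v2): structured resource fields and the
# environment variables they are documented to replace
global:
  workloads:
    resources:
      mainContainer:
        cpu:
          request: "2"    # was JOB_MAIN_CONTAINER_CPU_REQUEST
          limit: "4"      # was JOB_MAIN_CONTAINER_CPU_LIMIT
        memory:
          request: "6Gi"  # was JOB_MAIN_CONTAINER_MEMORY_REQUEST
          limit: "12Gi"   # was JOB_MAIN_CONTAINER_MEMORY_LIMIT
```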
For reference I found this in the documentation in the "migrating helm charts v1 -> v2" guide

Tom Sweeting
10/20/2025, 5:30 PM

Kallin Nagelberg
10/20/2025, 5:40 PM

Tom Sweeting
10/20/2025, 5:43 PM

Tom Sweeting
10/20/2025, 5:45 PM

```yaml
worker:
  maxSyncWorkers: "4"
```

That key goes at the top level of the values.yaml. In my case I used this to play with having fewer workers with more resources vs. more workers with less.
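Putting the two knobs together, both keys would sit at the top level of the same values.yaml (a sketch under that assumption, not an authoritative chart reference):

```yaml
# Sketch: worker count and per-job resources side by side in values.yaml
worker:
  maxSyncWorkers: "4"   # fewer concurrent syncs, more headroom for each

global:
  workloads:
    resources:
      mainContainer:
        memory:
          request: "6Gi"
          limit: "12Gi"
```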
I'm leaning towards fewer workers + more resources per worker in my case.

Kallin Nagelberg
10/20/2025, 5:48 PM

Kallin Nagelberg
10/20/2025, 5:52 PM
i.a.w.l.p.KubePodClient(launchReplication):100 - [source] image: airbyte/source-postgres:3.7.0 resources: ResourceRequirements(claims=[], limits={memory=2Gi, cpu=2}, requests={memory=1Gi, cpu=1}, additionalProperties={})

i.e. using those low defaults.
and the configmap shows that the values have been applied:
kubectl get configmap airbyte-airbyte-env -n airbyte-v2 -o yaml | grep -A 1 "JOB_MAIN_CONTAINER"
CHECK_JOB_MAIN_CONTAINER_CPU_LIMIT: ""
CHECK_JOB_MAIN_CONTAINER_CPU_REQUEST: ""
CHECK_JOB_MAIN_CONTAINER_MEMORY_LIMIT: ""
CHECK_JOB_MAIN_CONTAINER_MEMORY_REQUEST: ""
CLUSTER_TYPE: hybrid
--
DISCOVER_JOB_MAIN_CONTAINER_CPU_LIMIT: ""
DISCOVER_JOB_MAIN_CONTAINER_CPU_REQUEST: ""
DISCOVER_JOB_MAIN_CONTAINER_MEMORY_LIMIT: ""
DISCOVER_JOB_MAIN_CONTAINER_MEMORY_REQUEST: ""
DISCOVER_REFRESH_WINDOW_MINUTES: ""
--
JOB_MAIN_CONTAINER_CPU_LIMIT: "4"
JOB_MAIN_CONTAINER_CPU_REQUEST: "2"
JOB_MAIN_CONTAINER_MEMORY_LIMIT: 8Gi
JOB_MAIN_CONTAINER_MEMORY_REQUEST: 2Gi
JOB_SOURCE_DECLARATIVE_MANIFEST_KUBE_NODE_SELECTORS: ""
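Note that the CHECK_ and DISCOVER_ job variables above render empty. If those job types ever need explicit resources and no structured field maps to them, the extraEnv fallback mentioned earlier in the thread would look roughly like this (a sketch: the placement under workloadLauncher and the example values are assumptions, while the env var names are copied from the rendered ConfigMap above):

```yaml
# Hypothetical fallback: set check-job resources via extraEnv
# (env var names taken from the ConfigMap output; values illustrative)
workloadLauncher:
  extraEnv:
    - name: CHECK_JOB_MAIN_CONTAINER_MEMORY_REQUEST
      value: "1Gi"
    - name: CHECK_JOB_MAIN_CONTAINER_MEMORY_LIMIT
      value: "2Gi"
```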
kind of at a loss here 🤷

Tom Sweeting
10/20/2025, 5:58 PM
helm upgrade
I found that running this afterwards helps to guarantee the new limits apply instantly:

kubectl rollout restart deployment -n airbyte-v2

Kallin Nagelberg
10/20/2025, 5:58 PM

```yaml
resources:
  useConnectorResourceDefaults: true
```

to false? I am getting some guidance that maybe that is enforcing connector defaults that won't be overridden.. then I found this issue.

Kallin Nagelberg
10/20/2025, 5:58 PM

Kallin Nagelberg
10/20/2025, 5:59 PM

Tom Sweeting
10/20/2025, 6:01 PM
kubectl get pods -n airbyte-v2

And then pick out an example job (after starting a sync attempt) and run something like this:

kubectl get pod replication-job-XXXX-attempt-0 -n airbyte-v2 -o yaml | grep resources: -A6

Kallin Nagelberg
10/20/2025, 6:03 PM

Tom Sweeting
10/20/2025, 6:05 PM
abctl before but switched away from that myself the other day so I could get a little closer to the config/logs

Kallin Nagelberg
10/20/2025, 6:05 PM

Tom Sweeting
10/20/2025, 6:06 PM

Tom Sweeting
10/20/2025, 6:06 PM

Kallin Nagelberg
10/20/2025, 6:06 PM

Kallin Nagelberg
10/20/2025, 6:07 PM

Tom Sweeting
10/20/2025, 6:08 PM

Kallin Nagelberg
10/20/2025, 6:08 PM

Kallin Nagelberg
10/20/2025, 6:09 PM

Tom Sweeting
10/20/2025, 6:12 PM

Kallin Nagelberg
10/20/2025, 6:17 PM

```yaml
resources:
  useConnectorResourceDefaults: true
```

? I wonder if there's a specific destination default that is being set by that, and then cannot be overwritten by lower-precedence settings..

Kallin Nagelberg
10/20/2025, 6:17 PM

Tom Sweeting
10/20/2025, 6:19 PM

Tom Sweeting
10/20/2025, 6:22 PM

Kallin Nagelberg
10/20/2025, 6:46 PM

Tom Sweeting
10/20/2025, 7:15 PM

Tom Sweeting
10/20/2025, 7:23 PM
INFO pool-5-thread-1 i.a.c.i.d.a.DetectStreamToFlush(getNextStreamToFlush):119 flushing: trigger info: ...

...that end like this:

... time trigger: false , size trigger: true current threshold b: 25 MB, queue size b: 2.24 GB, penalty b: 0 bytes, after penalty b: 2.24 GB

It's always the same: one particular thread keeps expanding until I hit the OOM error. The log seems to suggest it is supposed to flush when it reaches 25 MB, yet it never does 🤔

Kallin Nagelberg
10/20/2025, 7:28 PM

Tom Sweeting
10/20/2025, 7:38 PM

Kallin Nagelberg
10/20/2025, 7:41 PM

Tom Sweeting
10/20/2025, 7:41 PM

Tom Sweeting
10/20/2025, 7:42 PM

Kallin Nagelberg
10/20/2025, 7:50 PM

Tom Sweeting
10/20/2025, 7:53 PM

Kallin Nagelberg
10/20/2025, 7:53 PM

Tom Sweeting
10/20/2025, 8:00 PM

Kallin Nagelberg
10/20/2025, 8:04 PM

Tom Sweeting
10/21/2025, 2:50 PM

Kallin Nagelberg
10/21/2025, 3:21 PM

Tom Sweeting
10/21/2025, 3:28 PM
destination INFO type-and-dedupe i.a.c.d.j.JdbcDatabase(executeWithinTransaction$lambda$1):65 executing query within transaction: .....

The query to copy the data to the final table takes a long time.

Kallin Nagelberg
10/21/2025, 3:29 PM

Kallin Nagelberg
10/21/2025, 3:30 PM

Tom Sweeting
10/21/2025, 3:33 PM

Tom Sweeting
10/21/2025, 3:34 PM

Kallin Nagelberg
10/21/2025, 3:36 PM

Tom Sweeting
10/21/2025, 3:38 PM

Tom Sweeting
10/21/2025, 3:41 PM

Kallin Nagelberg
10/21/2025, 3:42 PM