Andrew Pham
11/04/2025, 9:31 PM
kapa.ai
11/04/2025, 9:31 PM
Andrew Pham
11/04/2025, 10:13 PM
kapa.ai
11/04/2025, 10:14 PM
kapa.ai
11/04/2025, 10:14 PM
- Airbyte has been reported to ignore `JOB_MAIN_CONTAINER_MEMORY_REQUEST` set in `.env` ([Airbyte is ignoring JOB_MAIN_CONTAINER_MEMORY_REQUEST in .env](https://github.com/airbytehq/airbyte/issues/30814)).
- In some cases, users found that after updating `values.yaml`, they needed to manually delete worker pods for the new resource settings to take effect. This suggests that a rolling restart or manual intervention may be required after changing resource configurations ([Sync jobs do not appear to respect CPU / memory requests and limits](https://github.com/airbytehq/airbyte/issues/42921)).
- There is also a known issue where resource requirements set at the connector-definition level (in the `actor_definition` table) are ignored unless a specific environment variable is set. For Kubernetes Helm deployments, setting `CONNECTOR_SPECIFIC_RESOURCE_DEFAULTS_ENABLED: "true"` in your Helm `values.yaml` may be necessary for Airbyte to respect connector-specific resource requirements ([Syncs don't respect the resource_requirements set in the actor_definition table in the database in latest Airbyte](https://github.com/airbytehq/airbyte/issues/38178)).
- The documentation confirms that resource requirements can be set at multiple levels (instance-wide, connector definition, connector-specific, connection-specific), but in practice there have been bugs where only the instance-wide defaults are used and the more specific settings are ignored ([Configuring Connector Resources](https://docs.airbyte.com/platform/next/operator-guides/configuring-connector-resources)).

Summary: It is a known issue that resource limits set in `values.yaml` are sometimes not respected by Airbyte jobs, depending on the version and deployment method. Workarounds include restarting worker pods after changing resource settings and, for Helm/Kubernetes deployments, setting `CONNECTOR_SPECIFIC_RESOURCE_DEFAULTS_ENABLED: "true"` to enable connector-specific resource limits.
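As a concrete sketch of that workaround: the flag is typically injected as an environment variable through your Helm values. The `global.env_vars` key below is an assumption about your chart version (check your chart's documented mechanism for passing extra environment variables):

```yaml
# values.yaml -- sketch only; the global.env_vars key is an assumption
# about the Helm chart, not a confirmed setting for every chart version.
global:
  env_vars:
    CONNECTOR_SPECIFIC_RESOURCE_DEFAULTS_ENABLED: "true"
```

After changing this, a `helm upgrade` (and, per the issues above, possibly a pod restart) is needed for the variable to reach running services.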
If these steps do not resolve the issue, it may be a bug in your Airbyte version, and upgrading to the latest version or following up on the relevant GitHub issues is recommended. If you need more specific guidance, please provide your Airbyte version, deployment method, and a sample of your `values.yaml` resource configuration.
Andrew Pham
11/05/2025, 10:05 AM
```yaml
jobs:
  resources:
    limits:
      cpu: 250m
      memory: 500Mi
    requests:
      cpu: 75m
      memory: 150Mi
```
kapa.ai
11/05/2025, 10:06 AM
kapa.ai
11/05/2025, 10:06 AM
This is a known issue with resource settings in the `values.yaml` file not being respected by Airbyte jobs, especially when deploying with Helm or abctl. Several users have reported that even after specifying resource limits and requests (like in your example), Airbyte jobs may ignore these settings and use defaults or unexpected values instead.

Key points from the sources:

- Resource settings may be ignored: Users have observed that settings in `values.yaml` under `jobs.resources` are sometimes not applied to job pods. This has been reported in multiple Airbyte versions, including recent ones. For example, jobs may still request 1 CPU and 1Gi memory even if lower values are set in the config file. This is a known issue discussed in several GitHub issues ([Sync jobs do not appear to respect CPU / memory requests and limits](https://github.com/airbytehq/airbyte/issues/42921), [Cannot use Airbyte on Kubernetes due to OOM when setting up connector](https://github.com/airbytehq/airbyte/issues/48816), [Airbyte is ignoring JOB_MAIN_CONTAINER_MEMORY_REQUEST in .env](https://github.com/airbytehq/airbyte/issues/30814)).
- Manual intervention may be required: In some cases, after updating `values.yaml`, users needed to manually restart or delete worker pods (such as `airbyte-workload-launcher`) for the new resource settings to take effect. This is because the environment variables are loaded at pod startup, and changes to the config map or values file are not automatically propagated to running pods ([Sync jobs do not appear to respect CPU / memory requests and limits](https://github.com/airbytehq/airbyte/issues/42921), [Node selector and tolerations for job](https://github.com/airbytehq/airbyte/issues/10378)).
- abctl and Helm chart issues: There are open issues specifically mentioning that the Helm chart (and by extension abctl, which uses it) is broken in certain versions, including 0.30.1 and possibly 2.0.0, with respect to environment variable propagation and resource settings ([Helm chart broken with abctl 0.30.1](https://github.com/airbytehq/airbyte/issues/65129)).
- Multiple environment variables may be needed: For full control, you may need to set not just the generic `JOB_MAIN_CONTAINER_*` variables, but also job-type-specific variables (e.g., `CHECK_JOB_MAIN_CONTAINER_MEMORY_LIMIT`, `DISCOVER_JOB_MAIN_CONTAINER_MEMORY_LIMIT`, etc.) in your `values.yaml` or environment configuration ([Cannot use Airbyte on Kubernetes due to OOM when setting up connector](https://github.com/airbytehq/airbyte/issues/48816)).

What you can try:

1. After changing your `values.yaml`, manually restart the `airbyte-workload-launcher` deployment to ensure the new environment variables are picked up.
2. Consider explicitly setting all relevant job-type-specific resource variables if you need fine-grained control.
3. If the issue persists, it may be a bug in the version you are using. Check for updates or open issues related to resource configuration in Airbyte 2.0.0 and abctl.

Summary: Your configuration is correct, but due to known issues, Airbyte may not respect these settings unless you restart the relevant pods and possibly set additional environment variables. If these steps do not resolve the issue, it is likely a bug in the current version or deployment method you are using ([Helm chart broken with abctl 0.30.1](https://github.com/airbytehq/airbyte/issues/65129), [Node selector and tolerations for job](https://github.com/airbytehq/airbyte/issues/10378), [Cannot use Airbyte on Kubernetes due to OOM when setting up connector](https://github.com/airbytehq/airbyte/issues/48816)).
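Steps 1 and 2 above can be sketched concretely. The `global.env_vars` key is an assumption about your chart version (check your chart's documented mechanism for extra environment variables); the variable names come from the linked issues:

```yaml
# values.yaml -- sketch only, assuming the chart exposes global.env_vars
# for injecting environment variables into Airbyte services.
global:
  env_vars:
    JOB_MAIN_CONTAINER_CPU_LIMIT: "250m"
    JOB_MAIN_CONTAINER_MEMORY_LIMIT: "500Mi"
    CHECK_JOB_MAIN_CONTAINER_MEMORY_LIMIT: "500Mi"
    DISCOVER_JOB_MAIN_CONTAINER_MEMORY_LIMIT: "500Mi"
```

After upgrading the release, restart the launcher so the new variables are picked up, then inspect a job pod to see what resources were actually applied. Deployment and namespace names below are assumptions; verify them against your own cluster:

```bash
# Restart so env vars are re-read at pod startup (names are assumptions;
# check with `kubectl get deploy -A`).
kubectl -n airbyte rollout restart deployment airbyte-workload-launcher

# Confirm the resources a job pod actually received:
kubectl -n airbyte get pod <job-pod-name> -o jsonpath='{.spec.containers[*].resources}'
```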
If you need more targeted help, please provide logs from your job pods and confirm if you have tried restarting the workload launcher after updating your configuration.