Getting the following error: 0/2 nodes are availab...
# ask-ai
Getting the following error: 0/2 nodes are available: 2 Insufficient cpu. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod. Even though CPU utilisation is ~30% on the nodes. Some pods do not start until other syncs complete.
The error message you're seeing indicates that Kubernetes is unable to schedule your pods due to insufficient CPU. This can happen even when CPU utilisation is low, because the scheduler places pods based on the CPU *requests* declared in each pod's spec, not on actual usage: a node can sit at ~30% utilisation while most of its allocatable CPU is already reserved by the requests of pods that are already running.

From the knowledge sources, it appears that some Airbyte users have encountered similar issues. In some cases, the CPU and memory requests and limits set for Airbyte's job containers were excessive, causing scheduling problems on Kubernetes nodes ([source](https://github.com/airbytehq/airbyte/issues/35749)). One user worked around it by allowing their Kubernetes cluster to auto-scale up during a sync, but noted that this is not a long-term solution, especially if sync jobs run frequently. Another user suggested that the issue might be resolved by restarting the deployment and running the sync jobs again, as they found that the resource requests and limits specified in the Helm chart's `values.yaml` file were picked up after moving their deployment to another node pool ([source](https://github.com/airbytehq/airbyte/issues/35749)).

However, it's also worth noting that there have been reports of sync jobs not respecting the resource requirements set in the `actor_definition` table in the database ([source](https://github.com/airbytehq/airbyte/issues/38178)), and of sidecar containers in sync pods not having resources properly defined for Kubernetes clusters with resource quotas ([source](https://github.com/airbytehq/airbyte/issues/10589)).

If you're still encountering this issue, consider adjusting the CPU and memory requests and limits for your Airbyte job containers, or exploring other ways to manage resource allocation in your Kubernetes cluster. If the issue persists, it would be a good idea to raise it in the Airbyte GitHub repository or discuss it with the Airbyte community on their forum.
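Not from the thread, but a quick way to confirm that reserved requests (rather than live utilisation) are what is blocking scheduling, assuming you have `kubectl` access to the cluster; pod, node, and namespace names below are placeholders:

```bash
# The Events section at the bottom explains why the pod is Pending
kubectl describe pod <pending-pod-name> -n <airbyte-namespace>

# The "Allocated resources" section shows allocatable CPU vs. CPU already
# requested by running pods -- this is what the scheduler compares against,
# not the utilisation you see in monitoring
kubectl describe node <node-name>

# List the CPU requests/limits declared by pods in the Airbyte namespace
kubectl get pods -n <airbyte-namespace> -o custom-columns=NAME:.metadata.name,CPU_REQ:.spec.containers[*].resources.requests.cpu,CPU_LIM:.spec.containers[*].resources.limits.cpu
```

If the sum of requests on both nodes is close to their allocatable CPU, new job pods will stay Pending until other syncs finish, which matches the behaviour you describe.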
Hey @Tomasz Wija, got this last week even when my nodes had plenty of resources. What helped was to specify default resource requests/limits and tune the connectors that need more individually:

```yaml
global:
  jobs:
    resources:
      requests:
        cpu: 250m
        memory: 250Mi
      limits:
        cpu: 1000m
        memory: 1Gi
```
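These values go into the Helm chart's `values.yaml`. A minimal sketch of applying them, assuming the deployment was installed from the official Airbyte Helm chart with a release named `airbyte` in the `airbyte` namespace (adjust the names to your setup):

```bash
# Re-deploy with the updated job resource defaults; existing sync pods keep
# their old requests, so the change takes effect on the next sync jobs.
helm upgrade airbyte airbyte/airbyte -n airbyte -f values.yaml
```

Lower default requests mean more job pods fit on the two nodes at once, while the per-connector overrides keep heavier syncs from being starved.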