# ask-ai
@kapa.ai Hi, I have a t3.large with 2 vCPUs and 8 GB of memory and I'm setting the resources below in the `values.yaml` file, but jobs are getting stuck:
```yaml
global:
  jobs:
    resources:
      requests:
        cpu: 500m
        memory: 0.5Gi
      limits:
        cpu: 1
        memory: 1Gi
```
kapa.ai
Based on the information in your question and the knowledge sources, it appears you are correctly setting resource requests and limits in your `values.yaml` file for Airbyte on a t3.large instance (2 vCPUs, 8 GB RAM):
```yaml
global:
  jobs:
    resources:
      requests:
        cpu: 500m
        memory: 0.5Gi
      limits:
        cpu: 1
        memory: 1Gi
```
However, you are experiencing jobs getting stuck. Here are some key points and troubleshooting steps based on the documentation and community discussions:

1. Resource Requests and Limits: Your configuration is within the available resources of a t3.large node. However, each concurrent sync spins up several pods (orchestrator, read, write), which can quickly exhaust the node and leave jobs stuck or pending for lack of resources. Each concurrent sync requires at least 3 pods, so even a few syncs can saturate the node's capacity; see the first sketch after this list for a rough calculation. Consider reducing the number of concurrent syncs or increasing the node size if you need to run more jobs in parallel. [Scaling Airbyte After Installation](https://docs.airbyte.com/platform/enterprise-setup/scaling-airbyte)
2. Stuck Jobs and Under-Provisioning: If your Airbyte deployment is under-provisioned, you may notice "stuck" jobs that remain in progress for long periods and eventually fail with errors about unavailable pods. Increasing job CPU and memory limits may help, but you should also monitor your node's actual usage and adjust accordingly. [Scaling Airbyte After Installation](https://docs.airbyte.com/platform/enterprise-setup/scaling-airbyte)
3. Resource Configuration Not Respected: There have been reports that the resource settings in `values.yaml` are not always respected by Airbyte, especially after upgrades or changes. In some cases, users found that deleting the worker pods manually caused the new configuration to take effect; see the second sketch below. [Sync jobs do not appear to respect CPU / memory requests and limits](https://github.com/airbytehq/airbyte/issues/42921)
4. Best Practices:
   - Start with the default resource allocations and only increase them if you observe resource-related issues in logs or monitoring.
   - Use observability tools to confirm whether jobs are stuck on CPU or memory constraints.
   - If you need to run many syncs, consider increasing the number of worker replicas and/or the node size.
   - If jobs are stuck in Pending, check whether the node has enough available resources for the requested pods; see the third sketch below.
5. Other Considerations:
   - With Kubernetes autoscaling, remember that once a pod is scheduled it only runs on the node it was assigned to, so increasing node size may help if resource contention is the issue.
   - If you recently changed resource settings, restart the Airbyte worker pods to ensure the new settings are applied.
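To make point 1 concrete, here is a rough capacity sketch using the requests from your `values.yaml`. The "left for job pods" figures are assumptions (the kubelet, system daemons, and Airbyte's own core pods already consume part of the node), so substitute your node's real numbers:

```python
# Back-of-the-envelope check: how many concurrent syncs fit on a t3.large
# (2 vCPU / 8 GiB) given the per-pod requests configured in values.yaml.
cpu_left_for_jobs = 1.5      # vCPU left for job pods (assumed)
mem_left_for_jobs = 4.0      # GiB left for job pods (assumed)

pod_request_cpu = 0.5        # 500m per job pod, from values.yaml
pod_request_mem = 0.5        # 0.5Gi per job pod, from values.yaml
pods_per_sync = 3            # orchestrator + read + write

syncs_by_cpu = cpu_left_for_jobs // (pods_per_sync * pod_request_cpu)
syncs_by_mem = mem_left_for_jobs // (pods_per_sync * pod_request_mem)
print(f"Concurrent syncs that fit: {int(min(syncs_by_cpu, syncs_by_mem))}")  # -> 1
```

With those assumed numbers only a single sync fits comfortably; the pods of a second concurrent sync would sit in Pending.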
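For point 3, a minimal sketch with the official kubernetes Python client that recreates the worker pods so a changed `values.yaml` is picked up. The `airbyte` namespace and the label selector are assumptions, so check the labels on your install first; plain `kubectl delete pod` works just as well:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Namespace and label selector are assumptions; verify with
# `kubectl get pods -n airbyte --show-labels` and adjust.
workers = v1.list_namespaced_pod("airbyte", label_selector="app.kubernetes.io/name=worker")
for pod in workers.items:
    print(f"Deleting {pod.metadata.name}; its Deployment will recreate it with the new settings")
    v1.delete_namespaced_pod(pod.metadata.name, "airbyte")
```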
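And for the Pending check in point 4, a minimal sketch that lists Pending pods in the (assumed) `airbyte` namespace together with each node's allocatable resources; `kubectl get pods` and `kubectl describe node` show the same information:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Job pods the scheduler could not place yet, with their configured requests.
pending = v1.list_namespaced_pod("airbyte", field_selector="status.phase=Pending")
for pod in pending.items:
    requests = [c.resources.requests for c in pod.spec.containers if c.resources]
    print(f"Pending: {pod.metadata.name} requests={requests}")

# Allocatable per node (capacity minus system reservations);
# compare it against the sum of requests from all scheduled pods.
for node in v1.list_node().items:
    alloc = node.status.allocatable
    print(f"{node.metadata.name}: cpu={alloc['cpu']} memory={alloc['memory']}")
```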
If you continue to experience issues, please provide more details about the number of concurrent syncs, the Airbyte version, and any relevant pod logs or error messages. This will help further diagnose the problem.

Summary: Your resource settings are reasonable for a single or a few concurrent syncs on a t3.large node, but jobs can get stuck if you exceed the node's capacity or if Airbyte is not applying the resource settings correctly. Monitor your actual resource usage, consider reducing concurrency, and ensure your configuration changes are being picked up by restarting pods if necessary.

For more details, see the official documentation on scaling and configuring connector resources:
- [Scaling Airbyte After Installation](https://docs.airbyte.com/platform/enterprise-setup/scaling-airbyte)
- [Configuring Connector Resources](https://docs.airbyte.com/platform/operator-guides/configuring-connector-resources)