# ask-ai
a
I use airbyte opensource deployed on kubernetes and it consumes a lot of memory, is there a way to analyze the memory consumption and optimize it?
u
Yes, there are several ways to analyze and optimize memory consumption in Airbyte.

1. Monitor Memory Usage: Monitor your Airbyte deployment with infrastructure monitoring tools that measure requested and used CPU and memory. If your nodes are under high CPU or memory pressure, you may need to scale up your Airbyte deployment or reduce the maximum resources any given connector pod may use ([source](https://docs.airbyte.com/enterprise-setup/scaling-airbyte#concurrent-syncs)). A quick way to take a snapshot is sketched after this list.
2. Set Resource Limits: You can control how much memory Airbyte's connector jobs may use by setting the `JOB_MAIN_CONTAINER_MEMORY_REQUEST` and `JOB_MAIN_CONTAINER_MEMORY_LIMIT` environment variables to custom values ([source](https://docs.airbyte.com/operator-guides/scaling-airbyte#memory)). One way to pass these in is sketched after this list.
3. Optimize Sync Jobs: The primary driver of increased resource usage in Airbyte is the number of concurrent syncs running at any given time. Each concurrent sync requires at least 3 additional connector pods (`orchestrator`, `read`, `write`) to be running at once, so a deployment intended to run many concurrent syncs will likely need more instances to run them all ([source](https://docs.airbyte.com/enterprise-setup/scaling-airbyte#concurrent-syncs)).
4. Investigate Memory Leaks: There have been reports of memory leaks in Airbyte, with the server and worker gradually increasing their memory usage over time. This is a known issue and is being investigated. Some users have found a temporary workaround by manually triggering the Java garbage collector (GC) in the worker container ([source](https://github.com/airbytehq/airbyte/discussions/34483)); a hedged example of that trigger is sketched at the end of this answer.

Keep in mind these are workarounds; the root cause of the leak is still being investigated, so it's recommended to follow the GitHub discussion linked above for updates.
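For item 1, a quick way to see where memory is going is `kubectl top`, provided the metrics-server is installed in your cluster. This is only a minimal sketch; the `airbyte` namespace is an assumption, so substitute whatever namespace you deployed into:

```bash
# Requires metrics-server; "airbyte" is an assumed namespace -- adjust to your install.
# Node-level view: are any nodes under memory pressure?
kubectl top nodes

# Per-pod view in the Airbyte namespace, heaviest memory consumers first.
kubectl top pods -n airbyte --sort-by=memory
```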
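For item 2, if you deploy with the Airbyte Helm chart, the memory variables can typically be passed as extra environment variables. This is a minimal sketch, not the only way to do it: it assumes your chart version forwards extra variables via `global.env_vars` (some versions expose equivalent settings elsewhere, so check your chart's values reference), and the `2Gi` figures plus the release and namespace names are placeholders:

```bash
# Hypothetical override of connector job memory; the "airbyte" release/namespace and
# the global.env_vars key are assumptions -- verify against your chart version.
helm upgrade airbyte airbyte/airbyte \
  --namespace airbyte \
  --reuse-values \
  --set-string global.env_vars.JOB_MAIN_CONTAINER_MEMORY_REQUEST=2Gi \
  --set-string global.env_vars.JOB_MAIN_CONTAINER_MEMORY_LIMIT=2Gi
```

The same keys can live in your `values.yaml` instead of `--set-string` flags; either way they should only affect connector job pods launched after the change.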
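For item 4, the workaround users describe in the linked discussion is to trigger a JVM garbage collection inside the worker container. A hedged sketch using `jcmd` follows; it assumes the worker image ships JDK tooling and that the label selector below matches your pods (both are assumptions to verify against your deployment):

```bash
# Find the worker pod; the label selector is an assumption -- check your pod labels.
kubectl get pods -n airbyte -l app.kubernetes.io/name=worker

# List the JVM processes inside the container to find the right PID (often 1).
kubectl exec -n airbyte <worker-pod-name> -- jcmd -l

# Ask that JVM to run a full garbage collection (equivalent to System.gc()).
kubectl exec -n airbyte <worker-pod-name> -- jcmd <pid> GC.run
```

This only reclaims memory temporarily; it does not fix the underlying leak.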