# ask-ai
We’re having some issues with the Airbyte worker, as detailed in issue #24269, and I’m curious whether there will be movement on resolving it. As a workaround, I’m forced to run multiple workers with aggressive memory limits so they are cycled frequently.
A message from kapa.ai
Hmm, I don't know enough to give you a confident answer yet. The issue you mentioned is not available in the information extracts I have. However, I can provide some general guidance on scaling Airbyte workers and managing memory usage.

1. You can customize the memory settings for Java connectors by setting the `JOB_MAIN_CONTAINER_MEMORY_REQUEST` and `JOB_MAIN_CONTAINER_MEMORY_LIMIT` environment variables to custom values. This can help you manage memory usage more effectively [source](https://docs.airbyte.com/operator-guides/scaling-airbyte#memory).
2. You can increase the number of worker replicas to allow more jobs to be processed in parallel. This can be done in the Kubernetes deployment by increasing the number of replicas for the `airbyte-worker` deployment [source](https://docs.airbyte.com/deploying-airbyte/on-kubernetes#increase-job-parallelism).
3. You can also configure the maximum number of each type of worker allowed to run in parallel by setting the environment variables `MAX_SPEC_WORKERS`, `MAX_CHECK_WORKERS`, `MAX_DISCOVERY_WORKERS`, and `MAX_SYNC_WORKERS` [source](https://docs.airbyte.com/understanding-airbyte/jobs#worker-parallization).

For more information on scaling Airbyte, you can refer to the [Scaling Airbyte documentation](https://docs.airbyte.com/operator-guides/scaling-airbyte).
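To make the suggestions above concrete, here is a minimal sketch of how those environment variables could be set on the `airbyte-worker` Kubernetes deployment. The replica count and all values shown are illustrative assumptions, not recommendations; adjust them to your own workload and verify the variable names against the Airbyte docs for your version:

```yaml
# Hypothetical excerpt of an airbyte-worker Deployment spec.
# Values are examples only -- tune for your environment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: airbyte-worker
spec:
  replicas: 2                # more replicas -> more jobs in parallel
  template:
    spec:
      containers:
        - name: airbyte-worker
          env:
            # Memory request/limit for Java connector job containers
            - name: JOB_MAIN_CONTAINER_MEMORY_REQUEST
              value: "1Gi"
            - name: JOB_MAIN_CONTAINER_MEMORY_LIMIT
              value: "2Gi"
            # Cap how many workers of each type run concurrently
            - name: MAX_SPEC_WORKERS
              value: "5"
            - name: MAX_CHECK_WORKERS
              value: "5"
            - name: MAX_DISCOVERY_WORKERS
              value: "5"
            - name: MAX_SYNC_WORKERS
              value: "5"
```

If you deploy with Helm instead of raw manifests, the same variables are typically supplied through the chart's `extraEnv`-style values rather than by editing the Deployment directly.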