# ask-ai
I'm running into ballooning memory usage on connector syncs. Initially it caused my instance to lock up and get replaced by the auto-scaling group, and now it seems to be causing individual syncs to fail. I'm running the MySQL source and the S3 Glue destination.
A message from kapa.ai
It seems you are experiencing memory issues during the sync process with the MySQL source and S3 Glue destination. One possible reason is that the Source worker reads up to 10,000 records into memory, which can cause problems for database sources with large row sizes [source](https://docs.airbyte.com/operator-guides/scaling-airbyte#memory). Our Java connectors, like the MySQL source, follow Java's default behavior with container memory and will only use up to 1/4 of the host's allocated memory; if memory usage goes higher, you may encounter Out-of-Memory exceptions [source](https://docs.airbyte.com/operator-guides/scaling-airbyte#memory).

To address this, you can customize the `JOB_MAIN_CONTAINER_MEMORY_REQUEST` and `JOB_MAIN_CONTAINER_MEMORY_LIMIT` environment variables to allocate more memory to the connector [source](https://docs.airbyte.com/operator-guides/scaling-airbyte#memory) (see the sketch below). Additionally, you can consider splitting large tables into separate connections to have better control over their failures, as suggested in [this Slack conversation](https://airbytehq.slack.com/archives/C021JANJ6TY/p1670447302983309).

However, I don't have enough information to provide a confident solution for the S3 Glue destination. You may want to refer to the [Airbyte documentation](https://docs.airbyte.com/) for more information on handling memory issues with specific connectors.
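As an illustration of the environment-variable suggestion: on a Docker Compose deployment these variables typically go in the `.env` file alongside `docker-compose.yaml` (on Kubernetes they map onto the job pods' memory requests and limits). A minimal sketch, where the `2Gi` request and `4Gi` limit are illustrative values, not recommendations; tune them to the memory actually available on your instance:

```bash
# .env for an Airbyte deployment (illustrative values; adjust to your host)

# Memory the job container asks for when it is scheduled
JOB_MAIN_CONTAINER_MEMORY_REQUEST=2Gi

# Hard memory ceiling for the job container; a sync that exceeds this
# gets OOM-killed, so keep it comfortably above observed peak usage
JOB_MAIN_CONTAINER_MEMORY_LIMIT=4Gi
```

After editing `.env`, restart the platform (for example `docker compose down && docker compose up -d`) so the workers pick up the new values.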