# ask-community-for-troubleshooting
Saket Singh
Hey guys, I am facing a cap on compute when it comes to moving large volumes of data on a very frequent schedule (every 5 minutes). One way I am thinking of distributing the load is having multiple Airbyte deployments on different instances that share the same Postgres DB, and distributing connections between them accordingly. Is this a good way to go about it? Will there be conflicts in picking up jobs if both deployments share a DB?
Augustin
Hi @Saket Singh I'd rather suggest trying to vertically scale your current instance if you are using docker-compose, and leveraging `MAX_SYNC_WORKERS` and `JOB_MAIN_CONTAINER_MEMORY_REQUEST` to increase the parallelism and the memory provided to job containers. I'd also suggest eventually deploying Airbyte on Kubernetes if you want a more controlled distribution. Check out our scaling documentation for more details: https://docs.airbyte.com/operator-guides/scaling-airbyte
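For context, those settings would typically live in the `.env` file that Airbyte's docker-compose setup reads. A minimal sketch with placeholder values (the variable names come from the scaling docs; the values are illustrative, not recommendations, and accepted formats may differ between Docker and Kubernetes deployments):
```
# Excerpt of Airbyte's .env file (read by docker-compose).
# Placeholder values for illustration only.
MAX_SYNC_WORKERS=10                      # max number of sync jobs processed in parallel
JOB_MAIN_CONTAINER_MEMORY_REQUEST=2Gi    # memory requested for job containers
JOB_MAIN_CONTAINER_MEMORY_LIMIT=4Gi      # hard memory cap for job containers
```
After changing the file, recreate the containers (e.g. `docker-compose up -d`) so the new values take effect.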
Saket Singh
Thanks for your input, Augustin! I'll try that and let you know if it works! I have some more questions: I was reading in the Airbyte API documentation that these parameters can be passed when registering connections:
```
"resourceRequirements": {
    "cpu_request": "string",
    "cpu_limit": "string",
    "memory_request": "string",
    "memory_limit": "string"
  }
```
However, I am unsure whether these parameters are passed from the webapp. Additionally, how do these limits apply to the source and destination connectors? Do they apply to both?
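For reference, a hypothetical sketch of passing those fields when creating a connection through the Config API (assumes a local deployment reachable at `http://localhost:8000`; the IDs and values are placeholders, and the exact set of required fields depends on your Airbyte version):
```
# Hypothetical example only: the URL, IDs, and resource values are placeholders.
curl -X POST http://localhost:8000/api/v1/connections/create \
  -H "Content-Type: application/json" \
  -d '{
        "sourceId": "<source-id>",
        "destinationId": "<destination-id>",
        "status": "active",
        "resourceRequirements": {
          "cpu_request": "0.5",
          "cpu_limit": "1",
          "memory_request": "1Gi",
          "memory_limit": "2Gi"
        }
      }'
```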
Augustin
Hey @Saket Singh, do you mind posting this question on our Discourse forum? We're moving community support over to that platform.