# ask-community-for-troubleshooting
b
Hey guys, I set up the Airbyte open source version and I'm running into an issue: a DB2-to-Snowflake sync is stalling after 21 million records, and an MSSQL-to-Snowflake sync is stalling at 4 million. I checked the logs for DB2 and I'm only seeing INFO messages, no warnings. Let me know if I can provide any logs or other info. Thanks!
c
Did you check the replication pod logs (not the job log)? A similar issue happened to me, and checking the pod logs I could see that the replication job ran out of memory. I updated the connection's resource requirements (memory limit) in the DB and that fixed my problem.
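For reference, a sketch of what that DB update can look like. This assumes Airbyte's config database schema (a `connection` table with a `resource_requirements` jsonb column); the connection UUID, memory values, and the commented-out `psql` invocation are all placeholders you'd adapt to your deployment:

```shell
# Sketch only: write the per-connection resource override to a SQL file.
# Table/column names and JSON keys are assumptions based on Airbyte's config DB.
cat > /tmp/bump_memory.sql <<'SQL'
UPDATE connection
SET resource_requirements = '{"memory_request": "2Gi", "memory_limit": "4Gi"}'
WHERE id = '<your-connection-uuid>';
SQL
cat /tmp/bump_memory.sql
# With abctl you would then run it against the bundled Postgres, e.g. (names hypothetical):
# docker exec -i airbyte-abctl-control-plane kubectl exec -i -n airbyte-abctl <airbyte-db-pod> -- psql -U airbyte < /tmp/bump_memory.sql
```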
b
How do I check the pod logs?
c
By connecting to the server where you deployed Airbyte.
b
I'm on the Linux server and it's running Docker.
c
Did you install with abctl? That's how I installed it on my server, so I can give you the commands I use, but no idea otherwise.
b
Yes, I used abctl.
It created a Docker container, so I'm wondering if I can recreate it with more memory.
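You don't have to recreate the container to do that. A sketch of one approach, assuming the Airbyte Helm chart's `global.jobs.resources` values (key names are an assumption; check them against your chart version):

```yaml
# values.yaml (sketch): raise memory for job/replication pods.
global:
  jobs:
    resources:
      requests:
        memory: 2Gi
      limits:
        memory: 4Gi
```

You would then re-apply the deployment in place with `abctl local install --values values.yaml`.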
c
check the running pods
docker exec -it airbyte-abctl-control-plane kubectl get pods -n airbyte-abctl
check the resource usage of a specific pod (in your case, very likely the replication pod of the failing connector)
docker exec -it airbyte-abctl-control-plane kubectl top pod <name_of_pod_that_has_issues> -n airbyte-abctl
if it's sitting at its memory limit, or its logs end with an out-of-memory error, that connector is crashing because it ran out of memory
b
MobaXterm_qonJlGNW1f.png
c
Yep, those are all the services/pods Airbyte needs to run.
b
I tried checking a few and I get: error: Metrics API not available
c
That error means `kubectl top` needs the metrics-server add-on, which the abctl cluster may not have installed. In any case, if your faulty job is running you should also see replication connector pods.
You can try this command for the logs instead:
docker exec -it airbyte-abctl-control-plane kubectl logs -n airbyte-abctl <replication-job-of-the-problematic-connector> --tail=300
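When scanning those logs, the usual out-of-memory signatures are a Java `OutOfMemoryError` in the connector output or an `OOMKilled` status on the pod. A small sketch of the check; the log excerpt here is simulated, and against a live cluster you'd pipe the `kubectl logs ... --tail=300` output from above instead of the `printf`:

```shell
# Simulated replication log excerpt (placeholder for real kubectl logs output)
printf 'INFO syncing records...\nException in thread "main" java.lang.OutOfMemoryError: Java heap space\n' > /tmp/replication.log
# Look for the usual out-of-memory signatures
grep -Ei 'OutOfMemoryError|OOMKilled|Killed' /tmp/replication.log
```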
b
So I've been testing with `docker stats` and found something pretty interesting: I think it's running out of memory.
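One way to make that concrete is to flag any container whose memory usage is near its limit. A sketch: the sample line below mimics `docker stats --no-stream --format '{{.Name}} {{.MemPerc}}'` output (container names are placeholders); on a live host you'd pipe the real command in place of the `printf`:

```shell
# Simulated `docker stats` output (name + memory percentage); replace with the real command on a live host
printf '%s\n' 'replication-job-123 97.4%' 'airbyte-server 41.2%' |
awk '{ pct = $2; sub(/%/, "", pct); if (pct + 0 > 90) print $1, "is near its memory limit (" $2 ")" }'
```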