# troubleshooting

Jean Sahlberg

03/24/2022, 12:53 PM
Is this your first time deploying Airbyte: Yes
OS Version / Instance: macOS Monterey (M1)
Memory / Disk: 16 GB / 1 TB
Deployment: Docker
Airbyte Version: 0.35.59-alpha
Source name/version: Zendesk Support 0.2.2
Destination name/version: Redshift/CSV/JSON (tried all)
Step: On sync
Description: When extracting data from the Zendesk "tickets" stream, the worker just stops. I've tried multiple times, even on an Intel Mac, and get the same problem… It always stops at the same point [Records read: 54000 (208 MB)]. I've tested other Zendesk streams (organizations, users, etc.) and they apparently work fine. I was able to fully extract the tickets table using an old version of Airbyte (0.29.11).
```
2022-03-24 11:53:49 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 50000 (193 MB)
2022-03-24 11:54:07 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 51000 (197 MB)
2022-03-24 11:54:14 destination > 2022-03-24 11:54:14 INFO i.a.i.d.b.BufferedStreamConsumer(flushQueueToDestination):181 - Flushing buffer: 26201189 bytes
2022-03-24 11:54:14 destination > 2022-03-24 11:54:14 INFO i.a.i.d.b.BufferedStreamConsumer(lambda$flushQueueToDestination$1):185 - Flushing zendesk_tickets: 1705 records
2022-03-24 11:54:14 destination > 2022-03-24 11:54:14 INFO i.a.i.d.r.RedshiftSqlOperations(insertRecordsInternal):43 - actual size of batch: 1705
2022-03-24 11:54:25 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 52000 (200 MB)
2022-03-24 11:54:43 destination > 2022-03-24 11:54:43 INFO i.a.i.d.b.BufferedStreamConsumer(flushQueueToDestination):181 - Flushing buffer: 26208887 bytes
2022-03-24 11:54:43 destination > 2022-03-24 11:54:43 INFO i.a.i.d.b.BufferedStreamConsumer(lambda$flushQueueToDestination$1):185 - Flushing zendesk_tickets: 1583 records
2022-03-24 11:54:43 destination > 2022-03-24 11:54:43 INFO i.a.i.d.r.RedshiftSqlOperations(insertRecordsInternal):43 - actual size of batch: 1583
2022-03-24 11:54:43 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 53000 (204 MB)
2022-03-24 11:55:02 INFO i.a.w.DefaultReplicationWorker(lambda$getReplicationRunnable$5):301 - Records read: 54000 (208 MB)
2022-03-24 11:55:15 destination > 2022-03-24 11:55:15 INFO i.a.i.d.b.BufferedStreamConsumer(flushQueueToDestination):181 - Flushing buffer: 26212538 bytes
2022-03-24 11:55:15 destination > 2022-03-24 11:55:15 INFO i.a.i.d.b.BufferedStreamConsumer(lambda$flushQueueToDestination$1):185 - Flushing zendesk_tickets: 1713 records
2022-03-24 11:55:15 destination > 2022-03-24 11:55:15 INFO i.a.i.d.r.RedshiftSqlOperations(insertRecordsInternal):43 - actual size of batch: 1713
```
the log just stops there

Augustin Lafanechere (Airbyte)

03/24/2022, 4:43 PM
Hi @Jean Sahlberg do you know if the source contains more than 54000 records? I'd like to check if the problem comes from a partial read or a partial write 😄
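One quick way to answer that kind of question is the Zendesk Support API's ticket count endpoint. A minimal sketch, assuming placeholder credentials (`SUBDOMAIN`, `EMAIL`, and `API_TOKEN` are illustrative, not values from this thread):

```shell
# Build the Zendesk Support API v2 ticket-count URL for a given subdomain.
SUBDOMAIN="${SUBDOMAIN:-example}"
URL="https://$SUBDOMAIN.zendesk.com/api/v2/tickets/count.json"
echo "$URL"
# With real credentials, uncomment to fetch the count:
# curl -s -u "$EMAIL/token:$API_TOKEN" "$URL"
```

Comparing that total against the "Records read" figure in the sync log tells you whether the source stopped early (partial read) or the destination stopped accepting records (partial write).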

Jean Sahlberg

03/24/2022, 4:51 PM
Hi @Augustin Lafanechere (Airbyte), thanks for your reply. Yes, it contains more than 54000 records; it has over 300k records.

Augustin Lafanechere (Airbyte)

03/24/2022, 4:53 PM
Could you check whether the containers' memory (source + worker + destination) fills up while the sync is running?
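A quick way to watch this is `docker stats`. A minimal sketch; the `airbyte|source|destination` filter assumes the container names from Airbyte's default docker-compose deployment, so adjust it for your setup:

```shell
# Snapshot per-container memory usage; run repeatedly (or drop --no-stream)
# while the sync is in progress to see whether any container hits its limit.
FORMAT='table {{.Name}}\t{{.MemUsage}}\t{{.MemPerc}}'
docker stats --no-stream --format "$FORMAT" 2>/dev/null \
  | grep -i -E 'airbyte|source|destination' || true
```

A container near 100% of `MemPerc` right before the log goes silent would point at an OOM kill rather than a connector bug.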

Jean Sahlberg

03/24/2022, 5:02 PM
sure, I will do that and reply back!
no, the containers have plenty of memory and the sync just halts… now I'm running it on an EC2 machine to make sure macOS isn't the problem…

Augustin Lafanechere (Airbyte)

03/24/2022, 7:10 PM
If the sync does not work on the EC2 machine, feel free to open an issue on our repo and share your full sync logs 🙏🏻 We're also moving our community support to Discourse, so feel free to continue the discussion there.

Jean Sahlberg

03/24/2022, 7:13 PM
ok. thanks