# ask-community-for-troubleshooting
a
hi all, last week we upgraded from V1.6 to V2 (OSS version, abctl installation). since then, one of our syncs fails with "Broken pipe". we checked all infrastructure and we have enough resources; we even enlarged our server resources, without luck. we are trying to sync a large table (initial sync 300 million records, incremental sync ~3 million records). the sync is from MSSQL (source) to BQ (destination) and it worked with no problem on V1.6. there is no helpful information in the logs and Kapa.ai didn't help us. is it just us? is it a known issue? how can we identify the root cause and solve it? Thanks in advance!
k
Based on the knowledge sources, you're experiencing a known issue that has affected multiple users, particularly with large table syncs after upgrading Airbyte versions.

## What the Knowledge Sources Show

The "Broken pipe" error (`java.io.IOException: Broken pipe`) appears in several reported issues with patterns similar to yours:

1. Large table syncs are particularly affected - Multiple users report issues when syncing tables with millions of records. One case mentions [missing rows when syncing large MySQL tables](https://github.com/airbytehq/airbyte/issues/53614), where users syncing 236+ million records experienced similar problems.
2. abctl installation concerns - In the [MySQL to Snowflake sync issue](https://github.com/airbytehq/airbyte/issues/53614), a user specifically noted: "After some testing, I believe the issue may be in the abctl setup. It seems like abctl is more suitable for prototyping rather than handling large scale data transfers. I'd recommend trying the Docker or Kubernetes (kubectl) installation instead."
3. The error pattern matches yours - The stacktrace shows the same `io.airbyte.workers.internal.exception.DestinationException: Destination process message delivery failed` followed by `Caused by: java.io.IOException: Broken pipe` in multiple issues, including [source-snowflake](https://github.com/airbytehq/airbyte/issues/56456) and the [Typesense destination](https://github.com/airbytehq/airbyte/issues/50396).

## Potential Solutions

Based on the knowledge sources:

1. Consider alternative deployment methods - Users have reported better success with Docker or Kubernetes installations than with abctl for large-scale data transfers.
2. Resource allocation - While you've already increased resources, one user in the [Temporal Scheduler issue](https://discuss.airbyte.io/t/1293) found that setting `MAX_SYNC_WORKERS` and `MAX_CHECK_WORKERS` to equal values helped with sync stability.
3. Split the sync - In a [MySQL to BigQuery case](https://discuss.airbyte.io/t/2656), splitting large table syncs into multiple smaller connections was suggested as a workaround.

Unfortunately, the knowledge sources don't contain specific information about a fix for this issue in v2 with abctl installations. You may want to consider downgrading temporarily, as one user successfully [downgraded to 1.6.4](https://github.com/airbytehq/airbyte/issues/62463) using the `abctl --chart-version` flag to resolve similar networking issues.
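For reference, the temporary downgrade would look roughly like this (a sketch only; the chart version is taken from the linked issue, and any extra flags you normally pass to `abctl local install`, such as a values file, should be added back):

```shell
# Back up your instance before changing versions.
# Re-running `local install` with a pinned chart version upgrades or
# downgrades an existing abctl installation in place.
abctl local install --chart-version 1.6.4

# Confirm what is now running
abctl version
```

Note that downgrading is a workaround, not a fix: if the regression is in the v2 platform or the new connector images, the issue should still be reported upstream.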
h
Hi Aviad, on top of what Kapa.ai said about splitting the syncs/streams into smaller sizes, you may also want to check whether the connector versions are up to date.
a
Hi Henry, the first thing I checked was the connector version. I didn't find how to split the MSSQL/BQ streams into smaller sizes. I even created a connection with only the big table, and it keeps failing with the same behavior. Any ideas?
h
splitting the streams into smaller sizes would mean reducing the number of schemas you use in the connection, for example going from 10 schemas down to 5. This would show whether the issue is related to the amount of data being moved.
a
I've done this already; I set only one schema in the connection.