# troubleshooting

Kyle Cheung

03/17/2022, 6:05 AM
Schema refresh and resync did not do the trick either
Looks like this is only happening to tables with over 30k rows
Some tables replicated 0 rows
Related to issue #11052. Per nightgold:
Seems like the issue is that the connector does not create the stages that Snowflake normally uses in a COPY command to import the data from S3.
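For context, this is the kind of two-step load nightgold seems to be describing: Snowflake bulk-loads from S3 via a named external stage, which something has to create first. A minimal sketch, with all stage, bucket, and table names hypothetical (none appear in the thread):

```sql
-- Hypothetical external stage pointing at the S3 bucket the connector writes to.
-- Bucket name and credentials are placeholders, not values from this thread.
CREATE STAGE IF NOT EXISTS airbyte_s3_stage
  URL = 's3://my-airbyte-bucket/staging/'
  CREDENTIALS = (AWS_KEY_ID = '...' AWS_SECRET_KEY = '...');

-- Snowflake then bulk-loads from that stage in a single COPY:
COPY INTO my_schema.my_table
  FROM @airbyte_s3_stage
  FILE_FORMAT = (TYPE = CSV);
```

If the stage is never created, the COPY step has nothing to load from, which would be consistent with tables replicating 0 rows.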

Marcos Marx (Airbyte)

03/18/2022, 1:27 AM
@Kyle Cheung if you change the Postgres connector version to 0.4.4 and use internal staging for Snowflake, does this solve the issue?

Kyle Cheung

03/18/2022, 2:19 AM
Changing to internal staging solves the issue. Curious when the recommended method changed from S3 staging to Internal Staging, and what the difference between Internal Staging and Snowflake Inserts is
@Marcos Marx (Airbyte)

Augustin Lafanechere (Airbyte)

03/18/2022, 2:44 PM
@Kyle Cheung I think internal staging leverages the official Snowflake client, which could be more reliable than the two-step process used for S3 staging. I'm also under the impression that internal staging still performs some S3 staging under the hood. But don't take my word for granted 😄
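For what it's worth, "internal staging" in Snowflake generally means the client PUTs files into a Snowflake-managed stage and then COPYs from it, so no external stage or S3 credentials need to be configured by the user. A minimal sketch (file path and table name are hypothetical, not from the thread):

```sql
-- PUT uploads a local file into the table's internal stage (@%table_name).
-- Internal stages are managed by Snowflake; under the hood they are backed
-- by the cloud provider's object storage (e.g. S3 on AWS), which matches
-- the "still S3 under the hood" impression above.
PUT file:///tmp/batch_0001.csv @%my_table;

-- COPY then bulk-loads from that internal stage:
COPY INTO my_table
  FROM @%my_table
  FILE_FORMAT = (TYPE = CSV);
```

By contrast, "Snowflake Inserts" loads rows with plain INSERT statements, which avoids staging entirely but is much slower for large tables.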

Kyle Cheung

03/18/2022, 2:59 PM
thanks Augustin. Any tips on making sure the EC2 VM doesn't crash? Our instance has been crashing quite often this week. Is it just a memory issue?

Augustin Lafanechere (Airbyte)

03/18/2022, 3:01 PM
Yes, probably. Did you try upsizing to a t2.xlarge?