# troubleshooting
**Kemp Po:**
Is this your first time deploying Airbyte: Yes
OS Version / Instance: GKE n1-standard-4
Memory / Disk: 4 GB
Deployment: Kubernetes
Airbyte Version: 0.35.46-alpha
Source name/version: Zendesk Support 0.2.0
Destination name/version: Google Cloud Storage (GCS) 0.1.24
Step: On initial sync
Description: I'm having issues with the GCS destination's Parquet files. I can write CSVs fine but not Parquet, and I have a hunch it might be because the URL it's using is `s3a://` instead of `gs://`? All settings are default except compression codec = SNAPPY.
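A minimal sketch (assuming Hadoop jars on the classpath and `my-bucket` as a placeholder) of why the `s3a://` scheme by itself is not necessarily the problem: in Hadoop, the URI scheme only selects which `FileSystem` implementation handles the path, and the authority after `//` is what gets treated as the bucket name.

```java
// Sketch only, not Airbyte code. Requires hadoop-common and hadoop-aws on the classpath.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class SchemeCheck {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // The s3a scheme maps to hadoop-aws's S3AFileSystem via core-default.xml;
        // gs:// would instead need the separate GCS connector
        // (com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem) to be configured.
        System.out.println(conf.get("fs.s3a.impl")); // org.apache.hadoop.fs.s3a.S3AFileSystem

        // "my-bucket" is a placeholder; the URI authority is what S3A treats as the bucket.
        Path p = new Path("s3a://my-bucket/data/example.parquet");
        System.out.println(p.toUri().getScheme());    // s3a
        System.out.println(p.toUri().getAuthority()); // my-bucket
    }
}
```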
**Augustin Lafanechere (Airbyte):**
Hi @Kemp Po, I think the URL might be a red herring here. GCS works with the S3 API; that's why we use an S3 client to upload files to GCS. Could you please share the error log you have?
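A hedged illustration of that point, not the connector's actual code: GCS exposes an S3-compatible XML API at `storage.googleapis.com`, so a stock AWS S3 client configured with GCS HMAC credentials can read and write a GCS bucket. The credentials and bucket name below are placeholders.

```java
// Sketch only: an AWS SDK v1 S3 client pointed at GCS's S3-compatible endpoint.
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class GcsViaS3Api {
    public static void main(String[] args) {
        // GCS HMAC key pair (placeholders), created under "Interoperability" in GCS settings.
        BasicAWSCredentials hmac =
                new BasicAWSCredentials("GOOG1EXAMPLEACCESSID", "example-hmac-secret");

        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                        "https://storage.googleapis.com", "auto")) // signing region is nominal here
                .withCredentials(new AWSStaticCredentialsProvider(hmac))
                .enablePathStyleAccess() // address the bucket in the path rather than the hostname
                .build();

        // Any S3 call now goes to GCS, e.g. listing objects in a (placeholder) bucket:
        s3.listObjectsV2("my-gcs-bucket").getObjectSummaries()
          .forEach(o -> System.out.println(o.getKey()));
    }
}
```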
**Kemp Po:**
Hey @Augustin Lafanechere (Airbyte), this is what I'm seeing at the tail end of the log. Could it be that I just have it set up wrong?
```
2022-03-08 15:52:45 destination > 2022-03-08 15:52:45 WARN o.a.h.f.FileSystem(createFileSystem):3418 - Failed to initialize fileystem <s3a://bucket/data/2022_03_08_1646754764563_0.parquet>: java.lang.IllegalArgumentException: bucket
2022-03-08 15:52:45 destination > 2022-03-08 15:52:45 ERROR i.a.i.b.FailureTrackingAirbyteMessageConsumer(start):39 - Exception while starting consumer
2022-03-08 15:52:45 destination > java.lang.IllegalArgumentException: bucket
2022-03-08 15:52:45 destination > 	at com.google.common.base.Preconditions.checkArgument(Preconditions.java:145) ~[guava-31.0.1-jre.jar:?]
2022-03-08 15:52:45 destination > 	at org.apache.hadoop.fs.s3a.S3AUtils.propagateBucketOptions(S3AUtils.java:1150) ~[hadoop-aws-3.3.0.jar:?]
2022-03-08 15:52:45 destination > 	at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:324) ~[hadoop-aws-3.3.0.jar:?]
2022-03-08 15:52:45 destination > 	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3414) ~[hadoop-common-3.3.0.jar:?]
2022-03-08 15:52:45 destination > 	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:158) ~[hadoop-common-3.3.0.jar:?]
2022-03-08 15:52:45 destination > 	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3474) ~[hadoop-common-3.3.0.jar:?]
2022-03-08 15:52:45 destination > 	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3442) ~[hadoop-common-3.3.0.jar:?]
2022-03-08 15:52:45 destination > 	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:524) ~[hadoop-common-3.3.0.jar:?]
2022-03-08 15:52:45 destination > 	at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365) ~[hadoop-common-3.3.0.jar:?]
2022-03-08 15:52:45 destination > 	at org.apache.parquet.hadoop.util.HadoopOutputFile.fromPath(HadoopOutputFile.java:58) ~[parquet-hadoop-1.12.0.jar:1.12.0]
2022-03-08 15:52:45 destination > 	at io.airbyte.integrations.destination.gcs.parquet.GcsParquetWriter.<init>(GcsParquetWriter.java:67) ~[io.airbyte.airbyte-integrations.connectors-destination-gcs-0.35.36-alpha.jar:?]
2022-03-08 15:52:45 destination > 	at io.airbyte.integrations.destination.gcs.writer.ProductionWriterFactory.create(ProductionWriterFactory.java:47) ~[io.airbyte.airbyte-integrations.connectors-destination-gcs-0.35.36-alpha.jar:?]
2022-03-08 15:52:45 destination > 	at io.airbyte.integrations.destination.gcs.GcsConsumer.startTracked(GcsConsumer.java:61) ~[io.airbyte.airbyte-integrations.connectors-destination-gcs-0.35.36-alpha.jar:?]
```
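One hedged reading of this trace, not a confirmed root cause: `S3AFileSystem.initialize` derives the bucket from `URI.getHost()`, and `propagateBucketOptions` then fails its precondition with the bare message `bucket` when that value is empty or null. The bucket shown in the log may be redacted, but a common way to hit this is a bucket name that is not a valid hostname, for example one containing underscores, which GCS allows but `java.net.URI` will not parse as a host. The bucket names below are hypothetical.

```java
// Sketch only: how java.net.URI handles bucket names that are / are not valid hostnames.
import java.net.URI;

public class BucketNameCheck {
    public static void main(String[] args) {
        // A DNS-compatible bucket name parses as the host:
        System.out.println(URI.create("s3a://my-bucket/data/x.parquet").getHost()); // my-bucket
        // An underscore makes the authority an invalid hostname, so getHost() returns
        // null and S3A's "bucket" precondition fails during filesystem initialization:
        System.out.println(URI.create("s3a://my_bucket/data/x.parquet").getHost()); // null
    }
}
```

If that is what is happening here, a bucket name without underscores would be worth trying; otherwise this is only a hypothesis and the complete logs would be needed to narrow it down.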
**Marcos Marx (Airbyte):**
Can you share the complete logs and edit your post using the Slack Issue Template?
Did you deploy Airbyte for the first time, or did you upgrade the connector from an old connection?
**Kemp Po:**
Hey @Marcos Marx (Airbyte), I edited the post above.
And yes, it's the first time I'm using it.
**h:**
Hey, these logs do look different. Can you create an issue for this so that the team can track it?