# ask-community-for-troubleshooting
m
Hello all. I can't seem to get GCS log storage to work. I am using the following values (and have already created the `airbyte-config-secrets` secret):
```yaml
storage:
    secretName: "airbyte-config-secrets"
    # -- The storage backend type. Supports s3, gcs, azure, minio (default)
    type: gcs
    # Minio
    #minio:
    #  accessKeyId: minio
    #  secretAccessKey: minio123
    bucket:
      log: airbyte-bucket-appsci-ld-dev
      auditLogging: airbyte-bucket-appsci-ld-dev
      state: airbyte-bucket-appsci-ld-dev
      workloadOutput: airbyte-bucket-appsci-ld-dev
      activityPayload: airbyte-bucket-appsci-ld-dev
    # GCS
    gcs:
      projectId: appsci-ld-vc
      credentialsJsonSecretKey: gcp.json
      credentialsJsonPath: /secrets/gcs-log-creds/gcp.json
```
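For completeness, here is a minimal sketch of the Kubernetes Secret those values point at (the name and key come from the config above; the key JSON itself is a redacted placeholder):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: airbyte-config-secrets   # must match storage.secretName above
type: Opaque
stringData:
  # key must match storage.gcs.credentialsJsonSecretKey; the value is the full
  # service-account key JSON (redacted placeholder here)
  gcp.json: |
    { "type": "service_account", "project_id": "appsci-ld-vc", "private_key": "REDACTED" }
```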
But when I try to retest a source, the source-declarative-manifest pod fails because the connector-sidecar container does not seem to have the gcp.json file at /secrets/gcs-log-creds/gcp.json, even though it is successfully mounted at that path in the other deployments:
```
Exception in thread "main" io.micronaut.context.exceptions.BeanInstantiationException: Error instantiating bean of type [io.airbyte.commons.storage.GcsStorageClient]: /secrets/gcs-log-creds/gcp.json
[...]
<file not found later in the trace>
```
Am I missing something? It looks like the source-declarative-manifest and connector-sidecar YAML have `GOOGLE_APPLICATION_CREDENTIALS` set correctly, but the file from the secret is not actually being mounted at that location. Is something else supposed to mount the file into the shared filesystem for the sidecar container?
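For clarity, something along these lines would presumably need to be present in the generated job pod spec for that path to resolve (container and secret names here are assumptions based on this thread, not taken from the chart):

```yaml
# Illustrative pod-spec fragment only; the real job pod is generated by the
# workload launcher, so the exact names are assumptions.
containers:
  - name: connector-sidecar
    env:
      - name: GOOGLE_APPLICATION_CREDENTIALS
        value: /secrets/gcs-log-creds/gcp.json
    volumeMounts:
      - name: gcs-log-creds
        mountPath: /secrets/gcs-log-creds
        readOnly: true
volumes:
  - name: gcs-log-creds
    secret:
      secretName: airbyte-gcs-log-creds   # release-prefixed secret; name assumed
```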
k
Based on the knowledge sources, it appears you're encountering a known issue with GCS log storage configuration in Airbyte's Helm chart. The problem is that the GCS credentials secret is not being properly mounted to job pods, including the source-declarative-manifest pods. According to [this GitHub issue](https://github.com/airbytehq/airbyte/issues/65153), there's a workaround that has been confirmed to work. You need to explicitly set the container orchestrator secret configuration in your Helm values:
```yaml
global:
  workloads:
    containerOrchestrator:
      secretName: "airbyte-gcs-log-creds"
      secretMountPath: "/secrets/gcs-log-creds"
```
Important note: Replace "airbyte" in the secret name with your actual Helm release name. So if your release is named "my-release", the secret name would be "my-release-gcs-log-creds".

This configuration sets the `CONTAINER_ORCHESTRATOR_SECRET_NAME` and `CONTAINER_ORCHESTRATOR_SECRET_MOUNT_PATH` environment variables, which are needed for the job pods to properly mount the GCS credentials.

Additionally, based on [another issue](https://github.com/airbytehq/airbyte/issues/68178), there may be a double base64 encoding problem with GCS credentials. The knowledge sources indicate that when upgrading to Helm V2, you should ensure:
1. `storage.type` is set to `gcs` (lowercase)
2. `storage.gcs.credentialsJsonSecretKey` is set to `gcp.json`

Your configuration appears correct for these settings, so adding the `containerOrchestrator` configuration should resolve your issue.
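If you want to confirm the values landed, the rendered deployment that launches jobs should end up with env entries roughly like the sketch below (which deployment carries them, and the exact rendering, are assumptions here rather than something confirmed from the chart):

```yaml
# Illustrative only: the environment entries the containerOrchestrator values
# above are expected to produce on the job-launching workload.
env:
  - name: CONTAINER_ORCHESTRATOR_SECRET_NAME
    value: "airbyte-gcs-log-creds"
  - name: CONTAINER_ORCHESTRATOR_SECRET_MOUNT_PATH
    value: "/secrets/gcs-log-creds"
```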