# ask-community-for-troubleshooting
g
I am running Airbyte 1.8 using Helm. It was working, but now check_connection fails with the error below, while creating the source itself succeeds: `"failureOrigin": "airbyte_platform", "externalMessage": "Workload failed, source: airbyte_platform", "internalMessage": "Unable to persist the job Output, check the document store credentials."`
k
The error message you are seeing, "Unable to persist the job Output, check the document store credentials", indicates that Airbyte is having trouble writing job output to the configured document store (typically S3, GCS, or Azure Blob Storage) in your Kubernetes deployment. This is commonly caused by misconfigured or missing credentials for your storage backend. Here are the most likely causes and steps to resolve:

**1. Check Storage Credentials in Kubernetes Secrets**

Airbyte expects storage credentials (e.g., S3 access key and secret) to be present in a Kubernetes secret, usually named `airbyte-config-secrets`. The keys should be named `s3-access-key-id` and `s3-secret-access-key` for S3, or the appropriate keys for GCS/Azure. If these are missing, incorrect, or have changed, Airbyte will not be able to write job outputs, resulting in the error you see. See the required configuration for S3 as an example:

```yaml
global:
  storage:
    type: "S3"
    secretName: airbyte-config-secrets
    bucket:
      log: airbyte-bucket
      state: airbyte-bucket
      workloadOutput: airbyte-bucket
    s3:
      region: "us-east-1" # or your region
      authenticationType: credentials
      accessKeyId: "<your-access-key-id>"
      secretAccessKey: "<your-secret-access-key>"
```
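If the secret does not exist yet, it can be created with `kubectl`. A minimal sketch, assuming the key names above and that Airbyte runs in an `airbyte` namespace (adjust names and values to your deployment):

```bash
# Sketch: create the secret that the storage config above references.
# Key names (s3-access-key-id / s3-secret-access-key) follow the docs; the "airbyte" namespace is an assumption.
kubectl create secret generic airbyte-config-secrets \
  --namespace airbyte \
  --from-literal=s3-access-key-id='<your-access-key-id>' \
  --from-literal=s3-secret-access-key='<your-secret-access-key>'
```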
Make sure the referenced secret exists and contains the correct values. [See storage configuration details](https://docs.airbyte.com/platform/deploying-airbyte/integrations/storage#values)

**2. Check for Region Mismatch or Malformed Credentials**

If you are using S3, ensure the region in your configuration matches the region of your S3 bucket. A mismatch can cause authorization errors, as seen in similar issues:

> "The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'eu-west-1'"

[See related issue](https://github.com/airbytehq/airbyte/issues/48407)

**3. Check for Service Account Issues (GCS/Azure)**

If you are using GCS or Azure, ensure the service account credentials are correctly mounted and referenced in your `values.yaml` and Kubernetes secrets.

**4. Recent Changes or Upgrades**

If you recently upgraded Airbyte or changed your Helm values, double-check that your secrets and storage configuration were not overwritten or lost. Some users have reported that certain environment variables or secrets can be unset or missing after upgrades, leading to similar errors ([see example](https://github.com/airbytehq/airbyte/issues/50979)).

Next steps:
- Verify your storage backend credentials in the Kubernetes secret.
- Ensure your `values.yaml` references the correct secret and keys.
- Check for any recent changes to your storage configuration or secrets.
- If using S3, confirm the region is correct and matches your bucket.

If after checking these you still encounter issues, please provide the relevant section of your `values.yaml` and the output of `kubectl get secret airbyte-config-secrets -o yaml` (redact sensitive info) for further troubleshooting; a quick way to inspect those values is sketched below.
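To see what the secret actually contains, a minimal check, assuming the key names mentioned above and an `airbyte` namespace (both assumptions, adjust to your install):

```bash
# List which keys exist in the secret (values stay base64-encoded here)
kubectl get secret airbyte-config-secrets -n airbyte -o jsonpath='{.data}'; echo

# Decode one key to confirm it matches your storage backend's credentials (redact before sharing)
kubectl get secret airbyte-config-secrets -n airbyte \
  -o jsonpath="{.data['s3-access-key-id']}" | base64 -d; echo
```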
g
I am using MinIO
h
This issue could be related to the document store credentials being used. I would check your MinIO credentials to ensure the access key, secret key, and endpoint URL are correctly configured for your Airbyte deployment. Confirm that the bucket specified in your Airbyte configuration exists in MinIO and that Airbyte has permission to write to it. Also make sure the MinIO endpoint is reachable from the Airbyte pods and that no network issues or firewalls are blocking communication. If you are using Helm chart V2, ensure your `values.yaml` is correctly configured for MinIO; double-check the `global.storage` section for any misconfigurations (a rough sketch follows below).
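For reference, a minimal sketch of what the `global.storage` section might look like for MinIO, mirroring the S3 example earlier in the thread. The exact field names (in particular `type: minio` and the `endpoint` key) can differ between chart versions, so treat them as assumptions and cross-check against the storage configuration docs linked above:

```yaml
# Sketch only: field names assumed by analogy with the S3 example above; verify against your chart version's values reference.
global:
  storage:
    type: "minio"
    secretName: airbyte-config-secrets      # secret holding the MinIO access key / secret key
    bucket:
      log: airbyte-bucket
      state: airbyte-bucket
      workloadOutput: airbyte-bucket
    minio:
      accessKeyId: "<your-minio-access-key>"
      secretAccessKey: "<your-minio-secret-key>"
      endpoint: "http://<minio-host>:9000"   # assumption: URL of your MinIO service, reachable from the Airbyte pods
```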