Richard Gao
11/02/2025, 2:44 AM
Devesh Verma
11/03/2025, 7:01 AM
Muhammad Nauman
11/03/2025, 10:58 AM
MountVolume.SetUp failed for volume "gcs-log-creds-volume" : references non-existent secret key: GOOGLE_APPLICATION_CREDENTIALS_JSON
My values.yaml is:
```
global:
  edition: community
  # airbyteUrl: https://airbyte.brkarlsen.no
  database:
    secretName: database-secret
    host: "10.61.80.33"
    port: 5432
    name: "airbyte" # Previously `database`
    userSecretKey: "DATABASE_USER"
    passwordSecretKey: "DATABASE_PASSWORD" # Previously `secretKey`
  storage:
    type: gcs # Must be lowercase
    secretName: gcs-log-creds # Previously `storageSecretName`
    bucket:
      log: brk-airbytev2
      state: brk-airbytev2
      workloadOutput: brk-airbytev2
      activityPayload: brk-airbytev2
    gcs:
      projectId: brk-analytics
      # credentialsJson: | __CREDENTIALS__
      credentialsJsonPath: /secrets/gcs-log-creds/gcp.json
  workloads:
    containerOrchestrator:
      secretName: gcs-log-creds
      secretMountPath: /secrets/gcs-log-creds
postgresql:
  enabled: false
worker:
  readinessProbe:
    enabled: false
  livenessProbe:
    enabled: false
webapp:
  enabled: false
```
and the snippet from my secrets manifest is:
```
target:
  name: gcs-log-creds # The name of the Secret resource that will be created in the cluster.
data:
  - secretKey: gcp.json # The key of the secret in the secret resource.
    remoteRef:
      key: airbyte-sa # The key of the secret in the secret manager.
      # property: gcp.json
```
Could anybody help me solve this problem? 🙂
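A hedged sketch of one possible fix, not a confirmed answer: the MountVolume error says a volume references the key GOOGLE_APPLICATION_CREDENTIALS_JSON inside gcs-log-creds, while the secrets manifest above only creates the key gcp.json. One option is to publish the same remote secret under the expected key name as well:
```
# Sketch only: expose the service-account JSON under both key names.
# GOOGLE_APPLICATION_CREDENTIALS_JSON is the key named in the error;
# airbyte-sa is the existing remote secret from the snippet above.
target:
  name: gcs-log-creds
data:
  - secretKey: gcp.json # still used by credentialsJsonPath
    remoteRef:
      key: airbyte-sa
  - secretKey: GOOGLE_APPLICATION_CREDENTIALS_JSON # key the failing volume mount looks up
    remoteRef:
      key: airbyte-sa
```
Alternatively, if your chart version exposes it, setting global.storage.gcs.credentialsJsonSecretKey: gcp.json (an assumption; check your chart's values reference) may point the mount at the existing key instead.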
Vasil Boshnakov
11/03/2025, 11:13 AM
Our connection works with Full refresh | Overwrite, but if we try Incremental | Append + Deduped it throws the following error: LEGACY states are deprecated.
```
{
"failureOrigin": "replication",
"internalMessage": "LEGACY states are deprecated.",
"externalMessage": "Something went wrong during replication",
"metadata": {
"attemptNumber": 4,
"jobId": 1
},
"stacktrace": "java.lang.IllegalArgumentException: LEGACY states are deprecated.\n\tat io.airbyte.container.orchestrator.bookkeeping.ParallelStreamStatsTracker.getEmittedCountForCurrentState(ParallelStreamStatsTracker.kt:193)\n\tat io.airbyte.container.orchestrator.worker.state.StateEnricher.enrich(StateEnricher.kt:38)\n\tat io.airbyte.container.orchestrator.worker.ReplicationWorkerHelper.processMessageFromSource(ReplicationWorkerHelper.kt:324)\n\tat io.airbyte.container.orchestrator.worker.MessageProcessor.run(ReplicationTask.kt:158)\n\tat io.airbyte.container.orchestrator.worker.MessageProcessor$run$1.invokeSuspend(ReplicationTask.kt)\n\tat kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)\n\tat kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:100)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)\n\tat java.base/java.lang.Thread.run(Thread.java:1583)\n",
"timestamp": 1762164405299
}
```
Pranay S
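For context (an inference from the stack trace, not a confirmed fix): the orchestrator's stats tracker rejects the old connection-wide LEGACY state format that predates per-stream state, so the usual path is upgrading the source connector and clearing the connection state so the next sync emits STREAM-typed state. For illustration, a per-stream state message has roughly this shape (shown as YAML for readability; stream name and cursor are made up):
```
# Per-stream (STREAM) state message, the format current platforms expect,
# as opposed to the deprecated LEGACY blob: { "state": { "data": { ... } } }
type: STATE
state:
  type: STREAM
  stream:
    stream_descriptor:
      name: users
    stream_state:
      cursor: "2025-11-01T00:00:00Z"
```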
11/03/2025, 12:57 PM
Jorge Gomes
11/04/2025, 12:11 AM
The users stream includes the profile object (with profile.fields), but the profile.fields object is always empty in the data extracted via Airbyte.
When I call the Slack API directly using the same token:
```
curl -i -H "Authorization: Bearer xoxb-XXXXXXX" \
  "https://slack.com/api/users.profile.get?user=XXXXXXX"
```
the response correctly includes populated profile.fields, for example:
"fields": {
"XfQQTSFF38": {"value": "Sales Team", "alt": ""},
"XfQEPSBBTK": {"value": "Senior Key Account Manager", "alt": ""}
}
I’ve confirmed:
• The token has users.profile:read.
Could you please help me here?
Thank you!
Sivarama Krishnan
11/04/2025, 4:24 AM
Andrew Pham
11/04/2025, 9:31 PM
Data Analytics
11/04/2025, 10:08 PM
Jimmy Phommarath
11/05/2025, 10:17 AM
I reinstalled Airbyte with abctl. The volumes generated by my old version are still there, but when I run the new install it creates a new volume instead of using the old one... How can I set it to reuse the existing volume?
Thanks in advance! 🙂
Aviad Deri
11/05/2025, 12:23 PM
aidatum
11/05/2025, 1:34 PM
Prithvi Maram
11/05/2025, 2:47 PM
Ievgeniia PRYTULA
11/05/2025, 3:08 PM
My connection is failing with: Saved offset is not valid. Please reset the connection, and then increase oplog retention and/or increase sync frequency.
I can't simply reset the connection, since some of the streams are incremental; resetting would cause significant data loss and would be very difficult to recover. Do you have any advice on how to restart this connection without wiping the existing data?
Oscar Della Casa
11/05/2025, 3:23 PM
We hit /api/oauth/access_token redirect loops even when using SessionTokenAuthenticator or BearerAuthenticator without declarative OAuth enabled, making it impossible to create sources with manual token input for custom connectors that only need token refresh, not full OAuth flows.
The connector works fine when testing in the builder area, but once published it is impossible to add it as a source due to the problem mentioned above.
Has anyone encountered this before and fixed it? Thx
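A hedged sanity check rather than a confirmed fix: when only a token is needed, the published manifest should declare just a token-based authenticator and no declarative OAuth / advanced_auth section. A minimal low-code sketch of the relevant block (api_key is a hypothetical field from the connector's spec):
```
# Minimal manifest authenticator sketch -- token supplied manually via config,
# with no declarative OAuth block anywhere in the published manifest.
authenticator:
  type: BearerAuthenticator
  api_token: "{{ config['api_key'] }}"
```
If the published manifest still carries an advanced_auth section from earlier experiments, removing it before re-publishing would be the first thing to verify.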
Fabian Boerner
11/05/2025, 10:51 PM
Harsh Kumar
11/06/2025, 5:27 AM
David Aichelin
11/06/2025, 9:50 AM
Slackbot
11/06/2025, 1:20 PM
Slackbot
11/06/2025, 1:24 PM
Pragyash Barman
11/06/2025, 2:04 PM
We deployed the airbyte-v2/airbyte 2.0.19 Helm chart and ran into two blockers:
• The chart pulls airbyte/webapp:2.0.1 from Docker Hub, but that tag doesn’t exist:
Warning Failed 4m33s (x5 over 7m24s) kubelet Failed to pull image "airbyte/webapp:2.0.1": rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/airbyte/webapp:2.0.1": failed to resolve reference
• If we pin the webapp to the latest Docker Hub tag 1.7.8 so it starts, the UI blows up on the new destination / new connection page with a TypeError:
stacktrace: eCt/</n<@https://.../assets/core-bxruo5x4p5.js:388:105190
How are other OSS users pulling the 2.x images and is there a known workaround for the UI error if we have to stay on 1.7.x?
Any guidance would be appreciated. Thanks!
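One possible workaround, offered as an assumption rather than a verified fix: the values.yaml earlier in this thread disables the webapp subchart entirely on a 2.x deployment, which would avoid pulling the non-existent airbyte/webapp:2.0.1 tag if the 2.x server serves the UI itself:
```
# Sketch: disable the standalone webapp Deployment (mirrors the values.yaml
# earlier in this thread) so no airbyte/webapp image needs to be pulled.
webapp:
  enabled: false
```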
Slackbot
11/06/2025, 4:19 PM
Harsh Panchal
11/06/2025, 5:11 PM
Lucas Chies
11/06/2025, 8:45 PM
Bryan Meyerovich
11/06/2025, 9:50 PM
Mike Braden
11/06/2025, 9:54 PM
I have GCS storage configured in my values.yaml (credentials in the secret airbyte-config-secrets):
```
storage:
  secretName: "airbyte-config-secrets"
  # -- The storage backend type. Supports s3, gcs, azure, minio (default)
  type: gcs
  # Minio
  # minio:
  #   accessKeyId: minio
  #   secretAccessKey: minio123
  bucket:
    log: airbyte-bucket-appsci-ld-dev
    auditLogging: airbyte-bucket-appsci-ld-dev
    state: airbyte-bucket-appsci-ld-dev
    workloadOutput: airbyte-bucket-appsci-ld-dev
    activityPayload: airbyte-bucket-appsci-ld-dev
  # GCS
  gcs:
    projectId: appsci-ld-vc
    credentialsJsonSecretKey: gcp.json
    credentialsJsonPath: /secrets/gcs-log-creds/gcp.json
```
But when I try to re-test a source, the source-declarative-manifest pod fails because the connector-sidecar container does not have the gcp.json file at /secrets/gcs-log-creds/gcp.json, even though it is successfully mounted there in other deployments:
```
Exception in thread "main" io.micronaut.context.exceptions.BeanInstantiationException: Error instantiating bean of type [io.airbyte.commons.storage.GcsStorageClient]: /secrets/gcs-log-creds/gcp.json
[...]
<file not found later in the trace>
```
Am I missing something? It seems like the source-declarative-manifest and connector-sidecar yaml have GOOGLE_APPLICATION_CREDENTIALS set correctly but do not actually mount the file from the secret at that location. Is something else supposed to mount the file into the shared filesystem for the sidecar container?
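A guess grounded only in the values.yaml posted earlier in this thread, not a verified fix: the connector-launched pods may need the secret mount declared separately under workloads.containerOrchestrator, since global.storage alone covers the core deployments:
```
# Sketch: mount the credentials secret into orchestrator/connector pods too,
# mirroring the workloads block from the values.yaml earlier in this thread.
# Whether this key lives under `global` depends on your chart version.
workloads:
  containerOrchestrator:
    secretName: airbyte-config-secrets        # the existing secret above
    secretMountPath: /secrets/gcs-log-creds   # matches credentialsJsonPath
```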
Shrey Gupta
11/07/2025, 12:25 AM
森亮介
11/07/2025, 5:23 AM
I am running Airbyte locally (installed with abctl).
I am synchronizing CSV files from S3, but it failed to detect a schema change. I have confirmed directly in the source data that a column has been added. Manually refreshing the schema from the UI also did not detect the change.
Has a similar issue been reported? Also, is there a way to work around this?
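A workaround worth trying, hedged because it depends on the S3 source version: the file-based S3 source infers schemas from a sample of files, so a column that only appears in newer files can be missed even on manual refresh. Each stream accepts an optional input_schema that pins the schema explicitly (stream name and columns below are hypothetical):
```
# Sketch of an S3 source stream config with an explicit schema, in YAML form;
# in the UI, input_schema is entered as a JSON string. Names are made up.
streams:
  - name: my_csv_stream
    globs:
      - "data/*.csv"
    format:
      filetype: csv
    input_schema: '{"existing_col": "string", "new_col": "string"}'
```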
Ruy Araujo
11/07/2025, 7:42 AM
My syncs get stuck in a loop logging: Pool queue size: 0, Active threads: 0
I noticed that the error occurs when several large tables are selected; when only small tables are selected, it does not happen. There are even cases where a single table enters this loop.
I have already updated the source and destination connector versions and completely reinstalled Airbyte, but the error persists.
Current Configuration
• Service: Google Compute Engine
• Machine type: c2-standard-8 (8 vCPUs, 32 GB Memory)
• Disk Size: 100GB
• Airbyte Version: 2.0.19
Connector Versions:
• source-bigquery: 0.4.4
• destination-mssql: 2.2.14
• destination-postgres: 2.4.7
Logs for BigQuery > MS SQL
Logs for BigQuery > Postgres
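Not a confirmed diagnosis, but the pattern (small tables fine, large tables looping) often points at job memory rather than the connectors; one thing to try, assuming your chart exposes job resources under global.jobs (verify the key against your chart version):
```
# Sketch: raise memory for replication job pods. The global.jobs.resources
# path is an assumption taken from the Helm chart values; confirm it exists
# in your 2.0.19 deployment before relying on it.
global:
  jobs:
    resources:
      requests:
        memory: 2Gi
      limits:
        memory: 4Gi
```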