# ask-community-for-troubleshooting

    Richard Gao

    11/02/2025, 2:44 AM
    In our Airbyte deployment, the MongoDB source connector is experiencing a critical issue where, based on the log, saved resume tokens have timestamps earlier than the initial resume tokens, resulting in missing data during CDC synchronization. Any idea what could cause this? Has anyone encountered this before?

    Devesh Verma

    11/03/2025, 7:01 AM
    Hello folks, I have a connector from MongoDB to AWS S3. What I want is: when docs are being moved to S3, I have to hit an external endpoint to send their metadata to it. I am exploring solutions that are achievable from Airbyte itself.

    Muhammad Nauman

    11/03/2025, 10:58 AM
    Hi everyone, I have upgraded Airbyte to v2 and am getting an issue with
    ```
    MountVolume.SetUp failed for volume "gcs-log-creds-volume" : references non-existent secret key: GOOGLE_APPLICATION_CREDENTIALS_JSON
    ```
    my values.yaml is
    ```
    global:
      edition: community
      #airbyteUrl: https://airbyte.brkarlsen.no
    
      database:
        secretName: database-secret
        host: "10.61.80.33"
        port: 5432
        name: "airbyte" # Previously `database`
        userSecretKey: "DATABASE_USER"
        passwordSecretKey: "DATABASE_PASSWORD" # Previously `secretKey`
    
      storage:
        storageSecretName: gcs-log-creds # Previously `storageSecretName`
        type: gcs # Change "GCS" to lowercase
        secretName: gcs-log-creds # Previously `storageSecretName`
        bucket:
          log: brk-airbytev2                                                   
          state: brk-airbytev2
          workloadOutput: brk-airbytev2
          activityPayload: brk-airbytev2
        gcs:
          projectId: brk-analytics
          #credentialsJson: | __CREDENTIALS__
          credentialsJsonPath: /secrets/gcs-log-creds/gcp.json
    
    
      workloads:
        containerOrchestrator:
          secretName: gcs-log-creds
          secretMountPath: /secrets/gcs-log-creds
        
     
    postgresql:
      enabled: false
    worker:
      readinessProbe:
        enabled: false
      livenessProbe:
        enabled: false
    webapp:
      enabled: false
    ```
    and the snippet from my secrets manifest is
    ```
    target:
      name: gcs-log-creds # The name of the Secret resource that will be created in the cluster.
    data:
      - secretKey: gcp.json  # The key of the secret in the secret resource.
        remoteRef:
          key: airbyte-sa # The key of the secret in the secret manager.
          #property: gcp.json
    ```
    Could anybody who knows how to solve this problem help? 🙂
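    The mount error suggests the chart is looking for a key named GOOGLE_APPLICATION_CREDENTIALS_JSON inside the gcs-log-creds secret, while the ExternalSecret above only creates a key named gcp.json. A minimal sketch of two possible directions, assuming this chart version honors the credentialsJsonSecretKey field that appears later on this page; this is not a verified configuration:
    ```
    # Option 1 (sketch): point the chart at the key that actually exists in the secret.
    global:
      storage:
        secretName: gcs-log-creds
        type: gcs
        gcs:
          projectId: brk-analytics
          credentialsJsonSecretKey: gcp.json   # match the key created by the ExternalSecret
          credentialsJsonPath: /secrets/gcs-log-creds/gcp.json

    # Option 2 (sketch): have the ExternalSecret create the key name the chart is asking for.
    # target:
    #   name: gcs-log-creds
    # data:
    #   - secretKey: GOOGLE_APPLICATION_CREDENTIALS_JSON
    #     remoteRef:
    #       key: airbyte-sa
    ```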

    Vasil Boshnakov

    11/03/2025, 11:13 AM
    Hi all, we've deployed the following version on an EC2 machine using external RDS: Chart Version: 2.0.19, App Version: 2.0.1. Now if we try to create a connection from Redshift to Redshift, the only working sync mode is
    Full refresh | Overwrite
    . If we try to do an
    Incremental | Append + Deduped
    our connection throws the following error:
    LEGACY states are deprecated.
    ```
    {
      "failureOrigin": "replication",
      "internalMessage": "LEGACY states are deprecated.",
      "externalMessage": "Something went wrong during replication",
      "metadata": {
        "attemptNumber": 4,
        "jobId": 1
      },
      "stacktrace": "java.lang.IllegalArgumentException: LEGACY states are deprecated.\n\tat io.airbyte.container.orchestrator.bookkeeping.ParallelStreamStatsTracker.getEmittedCountForCurrentState(ParallelStreamStatsTracker.kt:193)\n\tat io.airbyte.container.orchestrator.worker.state.StateEnricher.enrich(StateEnricher.kt:38)\n\tat io.airbyte.container.orchestrator.worker.ReplicationWorkerHelper.processMessageFromSource(ReplicationWorkerHelper.kt:324)\n\tat io.airbyte.container.orchestrator.worker.MessageProcessor.run(ReplicationTask.kt:158)\n\tat io.airbyte.container.orchestrator.worker.MessageProcessor$run$1.invokeSuspend(ReplicationTask.kt)\n\tat kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)\n\tat kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:100)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)\n\tat java.base/java.lang.Thread.run(Thread.java:1583)\n",
      "timestamp": 1762164405299
    }
    ```

    Pranay S

    11/03/2025, 12:57 PM
    I am trying to use the Airbyte API to fetch workspace names (or make any other request) and I am getting a forbidden error. Currently I'm on a free trial plan which has around 400 credits. Please let me know why I am not able to get the lists.

    Jorge Gomes

    11/04/2025, 12:11 AM
    Hi Airbyte Support team, I’m facing an issue with the Slack source connector (3.1.7) in Airbyte. The
    users
    stream includes the
    profile
    object (with
    profile.fields
    ), but the
    profile.fields
    object is always empty in the data extracted via Airbyte. When I call the Slack API directly using the same token:
    ```
    curl -i -H "Authorization: Bearer xoxb-XXXXXXX" \
      "https://slack.com/api/users.profile.get?user=XXXXXXX"
    ```
    the response correctly includes populated
    profile.fields
    , for example:
    ```
    "fields": {
      "XfQQTSFF38": {"value": "Sales Team", "alt": ""},
      "XfQEPSBBTK": {"value": "Senior Key Account Manager", "alt": ""}
    }
    ```
    I’ve confirmed: • The token has
    users.profile:read
    . Could you please help me here? Thank you!

    Sivarama Krishnan

    11/04/2025, 4:24 AM
    How do I run only the airbyte-webapp front end locally, since all the other services are hosted on another machine?

    Andrew Pham

    11/04/2025, 9:31 PM
    Hi, I've been trying to debug an issue where I run a sync from MSSQL to Snowflake and, regardless of table size, it hangs for 30 minutes and then says the sync succeeded with 0 rows added. I've tried versions 2.0.0 and 1.8.5 and increased my machine's memory to 16 GB as well, but none of this has worked.

    Data Analytics

    11/04/2025, 10:08 PM
    Has anybody had trouble getting the Google ads connector to pull all history when the account has large gaps where there is no spending / ad results? The account usage is a little intermittent, and the connector appears to stop polling once it reaches one of these periods.

    Jimmy Phommarath

    11/05/2025, 10:17 AM
    Hello, I uninstalled and reinstalled Airbyte via
    abctl
    The volumes generated by my old version are still there, but when I run the new install, it creates a new volume instead of using the old one... How can I set this? Thanks in advance! 🙂

    Aviad Deri

    11/05/2025, 12:23 PM
    Hi all, last week we upgraded from v1.6 to v2 (OSS version, abctl installation). Since then, one of our syncs fails with "Broken pipe". We checked all the infrastructure and we have enough resources; we even enlarged our server resources without luck. We are trying to sync a large table (initial sync 300 million records, incremental sync ~3 million records). The sync is from MSSQL (source) to BQ (destination) and it worked with no problem on v1.6. There is no helpful information in the logs and kapa.ai didn't help us. Is it just us? Is it a known issue? How can we identify the root cause and solve it? Thanks in advance!

    aidatum

    11/05/2025, 1:34 PM
    Hi team, our company is trying to evaluate Airbyte on the OpenShift Platform. Which is the best and most stable community edition version that works with the V1 Helm chart? Any advice before we jump in?

    Prithvi Maram

    11/05/2025, 2:47 PM
    Hi everyone! We're setting up Airbyte Cloud for a healthcare customer of ours that’s HIPAA compliant, and I wanted to confirm Airbyte’s current stance on Business Associate Agreements (BAA). Does Airbyte Cloud support signing a BAA, or would we need to use the self-hosted OSS version to remain HIPAA compliant? Any guidance or links to official documentation would be greatly appreciated. Thanks! PS. We're on the standard plan

    Ievgeniia PRYTULA

    11/05/2025, 3:08 PM
    Hello, I need some help with an issue: one of my synchronizations ran for 107 hours with no result. I stopped it manually, but since then I haven’t been able to use this connection at all. Each time I try to run it, I get the following error:
    Saved offset is not valid. Please reset the connection, and then increase oplog retention and/or increase sync frequency.
    I can’t simply reset the connection, since some of the streams are incremental - resetting would cause significant data loss and would be very difficult to recover. Do you have any advice on how to restart this connection without wiping the existing data?

    Oscar Della Casa

    11/05/2025, 3:23 PM
    Hey, I have been banging my head against this for a while now. Quite annoyingly, I was not able to set up a connector to Fortnox; the problem seems to be due to the self-hosted Airbyte Connector Builder automatically injecting OAuth UI flows (triggering
    /api/oauth/access_token
    redirect loops) even when using
    SessionTokenAuthenticator
    or
    BearerAuthenticator
    without declarative OAuth enabled, making it impossible to create sources with manual token input for custom connectors that only need token refresh, not full OAuth flows. The connector works fine in the Builder when testing, but once published it is impossible to add it as a source due to the problem mentioned above. Has anyone encountered this before and fixed it? Thx

    Fabian Boerner

    11/05/2025, 10:51 PM
    Hi, is there any way to not use CDC for the mongodb connector?

    Harsh Kumar

    11/06/2025, 5:27 AM
    Hi all, I need help. Recently we have started using Airbyte to move documents from different sources, mostly Gong and Google Drive, to an S3 destination (in JSONL format). Now we want to store the metadata of each file it transfers by calling a separate internal service. By going through the documentation and talking to the Airbyte community bot, I understood that I need to make changes to the CDK and create a new destination altogether. I did that and made the necessary changes. After making the changes, we built a new destination-s3 image and loaded it into our local cluster with:
    kind load docker-image airbyte/destination-s3:dev -n airbyte-abctl
    After all this, from the local Airbyte UI we created a connection between Google Drive and S3. Now when we sync, a new pod comes up with my new destination-s3 image and I can see the sync complete. We checked multiple pods, like the server, the worker and even the destination, but we are not seeing our code get triggered. After doing a little more research, we built a new image for the CDK as well, but it's the same issue. We have reached out before and have been stuck on this for a while now. Can someone please help us figure it out from here? Cc: @Devesh Verma

    David Aichelin

    11/06/2025, 9:50 AM
    Hi everyone, I'm using Airbyte with the HubSpot destination connector. I'm trying to update properties in the Products table, but I'm running into issues when creating the pipeline: I can't complete the validation step at the end of the setup. According to the Airbyte documentation, only Companies, Contacts, Deals, and Custom Objects are supported. However, Airbyte still gives me the option to select Products when creating the pipeline. I'd like to know if anyone has successfully used the HubSpot destination connector with the Products table? Thanks! 🙏

    Slackbot

    11/06/2025, 1:20 PM
    This message was deleted.

    Slackbot

    11/06/2025, 1:24 PM
    This message was deleted.

    Pragyash Barman

    11/06/2025, 2:04 PM
    Hi everyone, I was deploying Airbyte OSS with the v2 Helm chart
    airbyte-v2/airbyte 2.0.19
    and ran into two blockers: • The chart pulls
    airbyte/webapp:2.0.1
    from Docker Hub, but that tag doesn’t exist:
    ```
    Warning  Failed     4m33s (x5 over 7m24s)   kubelet            Failed to pull image "airbyte/webapp:2.0.1": rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/airbyte/webapp:2.0.1": failed to resolve reference
    ```
    • If we pin the webapp to the latest Docker Hub tag
    1.7.8
    so it starts, the UI blows up on the new destination/ new connection page with a TypeError:
    ```
    stacktrace: eCt/</n<@https://.../assets/core-bxruo5x4p5.js:388:105190
    ```
    How are other OSS users pulling the 2.x images and is there a known workaround for the UI error if we have to stay on 1.7.x? Any guidance would be appreciated—thanks!
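    One possibly relevant detail: in the v2 chart the UI appears to be served by the server rather than by a separate webapp image, and the v2 values posted earlier on this page disable the webapp subchart entirely instead of pinning its tag. A sketch only, not a confirmed fix:
    ```
    # Sketch: mirrors the v2 values posted earlier in this thread; verify against your chart's values schema.
    webapp:
      enabled: false
    ```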

    Slackbot

    11/06/2025, 4:19 PM
    This message was deleted.

    Harsh Panchal

    11/06/2025, 5:11 PM
    Hi there! Hope everyone is doing well. I am using Airbyte OSS with Docker for HubSpot data ETL. Has anyone been able to see the experimental web analytics streams in the schema mapping? I have tried adding all the necessary scopes for the private app API, but nothing has changed yet. I also tried creating a new HubSpot source with experimental streams enabled, but nothing showed up in the schema map.

    Lucas Chies

    11/06/2025, 8:45 PM
    Hello team. I'm trying to deploy a new Airbyte version on my cluster; I already have an older version deployed and running, and I deployed version 2.0.1 in a separate namespace. But after the installation, my Airflow DAGs stopped triggering Airbyte. It appears that Airbyte returns 500 because it cannot communicate with the underlying Temporal service. Attached is the error log. Has anyone seen an error like this before?
    error-log.txt

    Bryan Meyerovich

    11/06/2025, 9:50 PM
    How do I disable Ctrl+Z in the connector builder? I just bricked what I've been building.

    Mike Braden

    11/06/2025, 9:54 PM
    Hello all. I can't seem to get GCS log storage to work. I am using the following values (and have already created the
    airbyte-config-secret
    ):
    ```
    storage:
        secretName: "airbyte-config-secrets"
        # -- The storage backend type. Supports s3, gcs, azure, minio (default)
        type: gcs
        # Minio
        #minio:
        #  accessKeyId: minio
        #  secretAccessKey: minio123
        bucket:
          log: airbyte-bucket-appsci-ld-dev
          auditLogging: airbyte-bucket-appsci-ld-dev
          state: airbyte-bucket-appsci-ld-dev
          workloadOutput: airbyte-bucket-appsci-ld-dev
          activityPayload: airbyte-bucket-appsci-ld-dev
        # GCS
        gcs:
          projectId: appsci-ld-vc
          credentialsJsonSecretKey: gcp.json
          credentialsJsonPath: /secrets/gcs-log-creds/gcp.json
    ```
    But when I try to re-test a source, the source-declarative-manifest pod fails because the connector-sidecar container does not seem to have the gcp.json file that is successfully mounted at /secrets/gcs-log-creds/gcp.json in other deployments:
    ```
    Exception in thread "main" io.micronaut.context.exceptions.BeanInstantiationException: Error instantiating bean of type [io.airbyte.commons.storage.GcsStorageClient]: /secrets/gcs-log-creds/gcp.json
    [...]
    <file not found later in the trace>
    ```
    Am I missing something? It seems like the source-declarative-manifest and connector-sidecar YAML has GOOGLE_APPLICATION_CREDENTIALS set correctly but does not actually mount the file from the secret at that location. Is something else supposed to mount the file into the shared filesystem for the sidecar container?
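    A direction worth checking, sketched from the workloads settings in the values posted earlier on this page; whether those settings also reach the connector-sidecar pods is an assumption, not something confirmed here:
    ```
    # Sketch: ask the chart to mount the credentials secret into workload pods at the expected path.
    # Secret name and mount path come from this message; the field names come from values posted earlier in this thread.
    global:
      workloads:
        containerOrchestrator:
          secretName: airbyte-config-secrets
          secretMountPath: /secrets/gcs-log-creds
    ```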

    Shrey Gupta

    11/07/2025, 12:25 AM
    @kapa.ai I am deploying Airbyte 2.0.7 using a Helm command. When I deploy it, the Helm command fails on the pre-upgrade hook with the bootloader pod failing. The bootloader pod is failing because it is timing out:
    ERROR i.m.m.h.i.HealthResult$Builder(exception):123 - Health indicator [airbyte-db-svc:5432/db-airbyte] reported exception: java.sql.SQLTransientConnectionException: HikariPool-1 - Connection is not available, request timed out after 30000ms (total=0, active=0, idle=0, waiting=0)
    This is possibly because airbyte-db-svc is not running in the namespace. Shouldn't the service be initialized on its own as part of the Helm install, or am I missing something here?
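    A possible explanation, sketched rather than confirmed: airbyte-db-svc appears to be the Service created by the chart's bundled Postgres, so it only exists if that subchart is enabled or if the values point Airbyte at an external database instead. Two hedged options, using field names that appear in the values files earlier on this page:
    ```
    # Option 1 (sketch): run the bundled Postgres so airbyte-db-svc exists in the namespace.
    postgresql:
      enabled: true

    # Option 2 (sketch): keep the bundled Postgres disabled and point Airbyte at an external database.
    # postgresql:
    #   enabled: false
    # global:
    #   database:
    #     secretName: database-secret        # hypothetical secret holding the external DB credentials
    #     host: "<external-db-host>"
    #     port: 5432
    #     name: "airbyte"
    #     userSecretKey: "DATABASE_USER"
    #     passwordSecretKey: "DATABASE_PASSWORD"
    ```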

    森亮介

    11/07/2025, 5:23 AM
    @kapa.ai I have upgraded to Airbyte 2.0.1. (Deployed on EC2 using
    abctl
    ). I am synchronizing CSV files from S3, but it failed to detect a schema change. I have confirmed directly in the source data that a column has been added. Manually refreshing the schema from the UI also did not detect the change. Has a similar issue been reported? Also, is there a way to work around this?

    Ruy Araujo

    11/07/2025, 7:42 AM
    Hello everyone, I am deploying Airbyte on a Google Compute Engine instance. The deployment is successful, and I'm able to run some synchronizations. However, in some cases during the syncs, it gets stuck in an infinite loop with the message
    Pool queue size: 0, Active threads: 0
    I noticed that the error occurs when several large tables are selected; when only small tables are selected, it does not happen. There are even cases where a single table enters this loop. I have already updated the source and destination connector versions and completely reinstalled Airbyte, but the error persists.
    Current configuration:
    • Service: Google Compute Engine
    • Machine type: c2-standard-8 (8 vCPUs, 32 GB Memory)
    • Disk Size: 100GB
    • Airbyte Version: 2.0.19
    Connector versions:
    • source-bigquery: 0.4.4
    • destination-mssql: 2.2.14
    • destination-postgres: 2.4.7
    Logs for BigQuery > MS SQL
    Logs for BigQuery > Postgres
    bq_to_pg

    Ruy Araujo

    11/07/2025, 7:46 AM
    Hello everyone, I am deploying Airbyte on a Google Compute Engine instance. The deployment is successful, and I'm able to run some synchronizations. However, in some cases during the syncs, it gets stuck in an infinite loop with the message
    Pool queue size: 0, Active threads: 0
    I noticed that the error occurs when several large tables are selected; when only small tables are selected, it does not happen. There are even cases where a single table enters this loop. I have already updated the source and destination connector versions and completely reinstalled Airbyte, but the error persists.
    Current configuration:
    • Service: Google Compute Engine
    • Machine type: c2-standard-8 (8 vCPUs, 32 GB Memory)
    • Disk Size: 100GB
    • Airbyte Version: 2.0.19
    Connector versions:
    • source-bigquery: 0.4.4
    • destination-mssql: 2.2.14
    • destination-postgres: 2.4.7
    Logs for BigQuery > MS SQL
    Logs for BigQuery > Postgres
    bq_to_mssql
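    If the stall turns out to be the connector pods being starved when several large tables run at once, one knob to look at is the job container sizing. A sketch only: the JOB_MAIN_CONTAINER_* variables are Airbyte's documented way to size connector pods, but whether this chart version reads them from global.env_vars is an assumption, and whether resources are the cause of this loop at all is unconfirmed:
    ```
    # Sketch: raise the default resources for connector job pods; adjust the numbers to your machine.
    # Placement under global.env_vars is assumed for this chart version.
    global:
      env_vars:
        JOB_MAIN_CONTAINER_CPU_REQUEST: "1"
        JOB_MAIN_CONTAINER_CPU_LIMIT: "2"
        JOB_MAIN_CONTAINER_MEMORY_REQUEST: "2Gi"
        JOB_MAIN_CONTAINER_MEMORY_LIMIT: "4Gi"
    ```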