# ask-community-for-troubleshooting

    Giridhar Gopal Vemula

    11/19/2025, 3:39 PM
    Hello Team, one quick question: for storing and using secrets, is it possible to use Akeyless? Does Airbyte support custom integrations?
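    (Sketch, not an official answer: Airbyte's Helm values expose a secretsManager block for the externally supported stores – e.g. AWS Secrets Manager, Google Secret Manager, Vault – and Akeyless is not among the documented types, so the exact keys below are illustrative assumptions to check against your chart's values schema.)
    Copy code
    # Hypothetical sketch of an external secrets-manager block in values.yaml.
    # "type" must be one of the chart's supported stores; key names are assumptions.
    global:
      secretsManager:
        type: GOOGLE_SECRET_MANAGER
        secretName: airbyte-config-secrets        # k8s secret holding the credentials
        googleSecretManager:
          projectId: my-gcp-project               # placeholder
          credentialsSecretKey: gcp.json          # key inside airbyte-config-secrets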

    Tanuja

    11/19/2025, 4:16 PM
    #C021JANJ6TY Anyone facing issues with the Google Ads Airbyte connector? Our pipelines haven't been coming up for the past few hours.

    Dan Schlosser

    11/19/2025, 10:02 PM
    Hey all, anyone running into issues with the HubSpot connector? We’re seeing an issue where, as of yesterday, the contacts field is all of a sudden null for all of the deals that we sync into BigQuery from Airbyte – no config changes (definitely selected in the Airbyte mapping UI), it’s just coming in as null.

    Mochamad Eka Pramudita

    11/20/2025, 4:34 AM
    Hello all, I have a data source located on-premise in Cloudera, and it can be accessed through HDFS, Hive, or Impala. Has anyone here had experience using any of these as a source? I asked an AI assistant, and it seems that Airbyte does not provide a native source connector for HDFS, Hive, or Impala. Any suggestions or workarounds would be appreciated. Thank you.

    Yuvaraj Prem Kumar

    11/20/2025, 9:49 AM
    Hi all, I am experiencing this behaviour with the v2 Helm charts, deployed in AWS EKS. When a replication job completes or when I trigger a sync, the Airbyte UI does not update immediately; instead I have to refresh the page before I see the new status.

    Barun Pattanaik

    11/20/2025, 10:12 AM
    Hi Team, I am using Airbyte to connect from MongoDB to BigQuery, but the data I am getting is in a single column as JSON. I need the data to be split into different columns, as in my original source table. What is the solution for this basic normalization? My connection settings don't show a normalization option – is that because I am using the free trial version?

    Kevin O'Keefe

    11/20/2025, 6:42 PM
    Hello. I am new to the community and having the "best" time trying to figure out deployment. I am currently deploying Airbyte OSS to GCP through ArgoCD. I am able to get everything running as long as I do not try to enable auth. I am using External Secrets to pull secrets from GCP into K8s. The documentation is very unclear as to what is needed for this. I am adding what I think is expected to the GCP secret
    Copy code
    {
      "dataplane-client-id": "Redacted",
      "dataplane-client-secret": "Redacted==",
      "instance-admin-client-id": "Redacted",
      "instance-admin-client-secret": "Redacted=",
      "jwt-signature-secret": "Redacted=",
      "instance-admin-email": "Redacted=",
      "instance-admin-password": "Redacted"
    }
    This is getting properly imported into the server container. It seems like if I create any secret with the email and password it breaks all of the other generated secrets. I have tried splitting these two into another secret and pointing the auth to that, but then it does not autogenerate the airbyte-auth-secrets secret like it does if I have auth disabled. Has anyone else had to deal with this? Sorry if this has been answered before, I searched and there was not much to go on. Here is the values file I am using
    Copy code
    global:
      serviceAccountName: &service-account-name airbyte-sa
      edition: community
      airbyteUrl: "airbyte.k8s.haus"
    
      annotations:
        argocd.argoproj.io/sync-wave: "2"
    
      database:
        type: external
        secretName: "airbyte-config-secrets"
        host: "10.23.1.3"
        port: 5432
        database: "postgres"
        userSecretKey: "database-user"
        passwordSecretKey: "database-password"
    
      auth:
        enabled: true
        secretName: "airbyte-auth2-secrets"
        instanceAdmin:
          firstName: "security"
          lastName: "admin"
          emailSecretKey: "instance-admin-email"
          passwordSecretKey: "instance-admin-password"
    
    minio:
      enabled: false
    
    postgresql:
      enabled: false
    
    serviceAccount:
      create: true
      name: *service-account-name
      annotations:
        iam.gke.io/gcp-service-account: "Redacted"
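    (For reference, a minimal sketch of the secret shape the auth block above points at – assuming the chart only reads the keys named by emailSecretKey/passwordSecretKey from auth.secretName. Everything here is a placeholder, not a verified fix.)
    Copy code
    # Hypothetical airbyte-auth2-secrets manifest matching the values above;
    # the key names must match emailSecretKey / passwordSecretKey.
    apiVersion: v1
    kind: Secret
    metadata:
      name: airbyte-auth2-secrets
    type: Opaque
    stringData:
      instance-admin-email: admin@example.com     # placeholder
      instance-admin-password: change-me          # placeholder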

    Aviran Moshe

    11/20/2025, 10:51 PM
    Hey everyone, Is anyone else seeing problems with the Google Ads connector after the auto-update earlier today? Our sync, which normally finishes in ~6 minutes, is now taking hours, and the connector is also producing duplicate data in the raw tables. We didn’t change anything on our side, the issues started right after the update. If anyone has more information or knows whether this is a known issue, I’d really appreciate it. Thanks!

    Michal Krawczyk

    11/21/2025, 10:30 AM
    Since upgrading to Airbyte 2.0 I've observed that some jobs don't start even though Airbyte reports a successful status (0 bytes | no records loaded). It looks like Airbyte starts the discover step but then doesn't start the orchestrator, source, and destination pods. My Airbyte is deployed to Kubernetes on AWS EKS using the Helm chart. Has anyone seen similar issues or know what could cause it? Logs in the 🧵

    Mahmoud Mostafa

    11/21/2025, 2:31 PM
    Hello everyone, I need help here. I have an issue related to parent/child streams. I built a custom connector using the low-code UI builder to fetch data from the Rasayel API, which is an omnichannel platform for WhatsApp messages. I want to fetch conversations along with the messages in each conversation. To do so I use GraphQL, and there is a relation between messages and conversations: to fetch any message you have to pass the conversation ID as an argument in the message query. My current situation is that the connector is configured correctly, but it's stuck in a running sync for the messages stream with no visible errors in the logs. If anyone can help with this it would be much appreciated. I am using Airbyte 2.0.1 deployed with abctl on my local laptop.
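    (A minimal sketch of how a parent/child relationship is usually expressed in the declarative low-code CDK – the stream and field names below are assumptions, not the poster's actual manifest.)
    Copy code
    # Hypothetical partition router on the child "messages" stream: each partition
    # gets the parent conversation's id injected as conversation_id.
    partition_router:
      type: SubstreamPartitionRouter
      parent_stream_configs:
        - type: ParentStreamConfig
          stream: "#/definitions/conversations_stream"   # assumed parent stream ref
          parent_key: "id"
          partition_field: "conversation_id"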

    Diego Dias

    11/21/2025, 5:12 PM
    Hello everyone, I'm trying to use Airbyte to sync my data, and I've created multiple connections, each with many schemas mapped for synchronization. However, I expected that, if the VM resources were available, Airbyte would run multiple connections in parallel, but instead, all connections are running sequentially. Do I need to change any configuration to enable parallel execution? Airbyte is running on a VM with Debian 12, 8 vCPUs, and 32 GB of memory. Below is a snippet of my values.yaml:
    Copy code
    worker:
      enabled: true
      replicaCount: 2
      maxNotifyWorkers: "20"
      maxCheckWorkers: "20"
      maxSyncWorkers: "20"
      resources:
        requests:
          cpu: "500m"
          memory: "1Gi"
        limits:
          cpu: "1000m"
          memory: "2Gi"
      extraEnv:
        - name: MAX_SYNC_WORKERS
          value: "20"
        - name: MAX_CHECK_WORKERS
          value: "20"
        - name: MAX_DISCOVERY_WORKERS
          value: "10"
        - name: MAX_SPEC_WORKERS
          value: "10"
    
    workloadLauncher:
      enabled: true
      replicaCount: 2
      resources:
        requests:
          cpu: "500m"
          memory: "1Gi"
        limits:
          cpu: "1000m"
          memory: "2Gi"
      extraEnv:
        - name: WORKLOAD_LAUNCHER_PARALLELISM
          value: "25"

    Tigran Zalyan

    11/21/2025, 6:36 PM
    Hey everyone! I was trying to migrate Airbyte from Helm chart version 1.8.2 (v1) to 2.0.1 (v1) and I'm running into a lot of errors. When using an external Postgres database with an explicitly set username via global.database.user, the airbyte-bootloader fails with:
    Copy code
    Error: couldn't find key DATABASE_USER in Secret airbyte-airbyte-secrets
    The workaround was to move the value into the secret under that key. I'm also seeing another error in airbyte-server when using GCS as the storage provider:
    Copy code
    io.micronaut.context.exceptions.BeanInstantiationException: Bean definition [io.airbyte.server.apis.controllers.SourceDefinitionApiController] could not be loaded:
    Error instantiating bean of type [io.airbyte.commons.storage.GcsStorageClient]: Is a directory
    The last error doesn't allow Airbyte to successfully start. Any ideas why this might be happening?
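    (Sketch of the workaround described above – keeping the DB username in the secret and referencing it by key instead of setting global.database.user. The key names are the ones the bootloader error mentions; everything else is a placeholder, not a confirmed configuration.)
    Copy code
    global:
      database:
        type: external
        secretName: airbyte-airbyte-secrets   # secret that must contain the keys below
        host: "postgres.internal"             # placeholder
        port: 5432
        database: "airbyte"                   # placeholder
        userSecretKey: DATABASE_USER
        passwordSecretKey: DATABASE_PASSWORD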

    Dan Cook

    11/21/2025, 6:37 PM
    (FYI, we use Airbyte Cloud) If you've been having problems with your Google Ads syncs over the past few days, especially if you have custom reports built from GAQL, then read on: We originally concluded that tab characters in our custom GAQL were the root cause of a bunch of recent failed syncs, and in fact replacing tabs with spaces did improve a few things. But the actual root cause was Airbyte staff moving our version of the Google Ads source connector around. Over the past 3 days it has had the following values, in sequential order; the two versions in the middle of the sequence, variants of v4.1, were the cause of a bunch of failed syncs. Something in that connector version does not work nicely with custom GAQL reports:
    • v4.0.2
    • v4.1.0-rc.8
    • v4.1.0
    • v4.0.2
    The final change back to v4.0.2 took place this AM, presumably after Airbyte staff heard from enough customers about failing syncs. Once this one change was made, our syncs started working again. Some issues:
    1. Why would Airbyte Cloud customers like us get moved to a release candidate (v4.1.0-rc.8) without any foreknowledge?
    2. The first release candidate dropped in August, and the changelog for v4.1.0-rc.1 specifically mentions custom queries. Several of the subsequent PRs do too. How did very obvious, very fatal problems with custom queries make it all the way to an official release?

    Jonathan Clemons

    11/21/2025, 9:56 PM
    Hey All. I am running into an issue with a connector that I built in the connector builder where syncs are getting stuck. From some testing, I think the issue is being caused by rate limiting from the source api. I am working on dialing in the API budget, but this has been tricky because I cannot see logs from the api calls being made to the source. The connection is not failing, so I am not able to diagnose via an error code. Is it possible to get more granular logs so I can troubleshoot why the connector is getting stuck or does anyone have suggestions for further troubleshooting steps?

    Gaurav Jain

    11/22/2025, 9:26 AM
    @kapa.ai What do you store in this folder root@airbyte-prod-new:~/.airbyte/abctl/data/airbyte-volume-db/pgdata/base/16384#

    Bogdan

    11/24/2025, 9:53 AM
    Hi everyone, I'm using Airbyte for the first time via abctl, and I've encountered an issue with sync parallelism. It seems that each sync is waiting for the previous one to complete before starting, which is slowing down the process. Is there a way to configure Airbyte to allow multiple syncs to run in parallel?

    Samy-Alexandre LICOUR

    11/24/2025, 11:50 AM
    Hello, I am trying to install Airbyte with abctl, with logs stored in a GCP bucket.
    Copy code
    sudo abctl local install --secret secrets.yaml --values values.yaml
    Here are my values and secrets files: values.yaml
    Copy code
    global:
      storage:
        type: "GCS"
        secretName: airbyte-config-secrets
        bucket: # GCS bucket names that you've created. We recommend storing the following all in one bucket.
          log: dev-airbyte-abctl-logs-
          state: dev-airbyte-abctl-logs
          workloadOutput: dev-airbyte-abctl-logs
        gcs:
          projectId: source-plm-dev
          credentialsJsonPath: /secrets/gcs-log-creds/gcp.json
      jobs:
        resources:
          requests:
            memory: 8Gi
            cpu: 2
          limits:
            cpu: 4
            memory: 16Gi
      auth:
        enabled: false
    secrets.yaml:
    Copy code
    apiVersion: v1
    kind: Secret
    metadata:
      name: airbyte-config-secrets
    type: Opaque
    stringData:
      gcp.json: "..."
    I am getting this error:
    Copy code
    Caused by: java.io.IOException: Is a directory
    There seems to be a problem with credentialsJsonPath: /secrets/gcs-log-creds/gcp.json even though I followed the documentation. Thanks for your help!
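    (One thing to sanity-check, as a hedged suggestion: "Is a directory" usually means the configured path resolves to a directory – e.g. the secret mount point – rather than a file, so the key name in the secret has to match the filename at the end of credentialsJsonPath. Some chart versions also accept the credentials inline; the credentialsJson field below is an assumption to verify against your chart's values schema.)
    Copy code
    # Hypothetical alternative: pass the service-account JSON inline (base64) instead
    # of via a mounted path; verify the field name in your chart version first.
    global:
      storage:
        type: "GCS"
        secretName: airbyte-config-secrets
        gcs:
          projectId: source-plm-dev
          credentialsJson: <base64-encoded gcp.json>   # assumed field name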

    Akshata Shanbhag

    11/25/2025, 7:36 AM
    Hello, I am out of ideas now, so could you please help me out here? I have a sync job failing with the error "Workload failed, source: workload-monitor-heartbeat". The replication pod is not created at all, even though the log shows that it is being launched. I have the workload launcher, worker, server, and temporal all running in non-disrupt mode. No logs indicate this silent failure. The job succeeds on re-attempting. Any ideas on what might be preventing the replication pod from starting? Between 1:07 and 1:19, it looks like it was attempting to create a pod but did not, and failed silently. How could I mitigate this issue?

    kapa.ai

    11/25/2025, 2:55 PM
    You’re connected to kapa.ai, an assistant focused specifically on Airbyte. Your last message only says “Hi guys,” without a question or problem description. If you have an Airbyte-related question (connectors, deployments, errors, features, etc.), please share the details and I’ll help based on the available Airbyte docs, GitHub issues, and forum posts.

    Mihály Dombi

    11/25/2025, 2:55 PM
    Hello Team! I'm trying to add a new stream to a connector where the retrieval type is Asynchronous Job. There is a creation endpoint that returns a Location (a URL with report_task_id) in the response header that can be polled to check its status. The problem is that this Location is in the response header, but the response doesn't have a payload. Because of this, _get_creation_response_interpolation_context always fails. Also, in the connector builder the HTTP Response Format section doesn't have an option to say that the response has no body. Is there a workaround for this, or should the CDK be extended with this option? The report creation endpoint in question: https://developer.ebay.com/api-docs/sell/marketing/resources/ad_report_task/methods/createReportTask#request.dateFrom

    Lucas Segers

    11/25/2025, 2:56 PM
    Hey guys, anyone with a "Plain-text" HTTP Requester in a custom Builder connector facing issues after 2.0? I find it hard to believe that I'm the only one with broken PlainText streams after upgrading, so I must be missing something? 😛

    Dyllan Pascoe

    11/25/2025, 9:04 PM
    Greetings. Is there a way to delete a builder connector from my organization once it's been published?

    Rob Kwark

    11/26/2025, 1:39 AM
    I am trying to make a create-connection API call for MySQL -> S3 on OSS, and the same flow works for my other connections (Snowflake, Postgres). However, I am getting { "status": 500, "type": "https://reference.airbyte.com/reference/errors", "title": "unexpected-problem", "detail": "An unexpected problem has occurred. If this is an error that needs to be addressed, please submit a pull request or github issue.", "documentationUrl": null, "data": { "message": "Something went wrong in the connector. logs:Workload failed, source: airbyte_platform" } } when I make the basic POST request: { "name": "badump1", "sourceId": "sourceUUID", "destinationId": "destUUID", "schedule": { "scheduleType": "manual" } }. This is only happening with the MySQL connection.

    Kevin O'Keefe

    11/26/2025, 4:00 AM
    Hello Airbyte community. I am new here and have set up our first Airbyte community edition deployment. I want to reach out to the community and ask if you have any helpful suggestions for optimizing the environment that you may have discovered during your use of the product. I am on version 1.9.1 of the Helm deployment to K8s using ArgoCD, and things are running well at the moment. If anyone is willing to share any helpful items related to your experience and setup woes, please let me know. Thank you for your attention and participation.

    JadperNL

    11/26/2025, 9:03 AM
    Hi, are there any methods to install Airbyte without Kubernetes? None of our systems can run Kubernetes; we currently use Docker Compose for all other deployments.

    Isaac Steele

    11/26/2025, 3:45 PM
    I am getting different responses when I use requests.get() and the Python airbyte-api package for my source configuration information. For example, when I run a GET on ...v1/sources, my sources have connection information like "configuration":{"auth_type":"Client","client_id":"myclient123456789","is_sandbox":true,"client_secret":"**********","refresh_token":"**********","streams_criteria":[],"stream_slice_step":"P30D","force_use_bulk_api":false}. But when I use the Python API's get_source() or list_sources(), the configuration information returned is just: configuration=SourceAirtable(credentials=None, SOURCE_TYPE=<SourceAirtableAirtable.AIRTABLE: 'airtable'>). This example is from my Salesforce connector, but the same is true for my other connectors as well – they all report Airtable for the configuration. Is this a python-api bug? How can I get/view my masked source configuration data from the Python API?
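    (A minimal sketch of the raw REST call described above, assuming an OSS instance exposing the public API – the URL, token, and response shape are assumptions to adapt. The point is only that hitting the endpoint directly returns the masked configuration JSON even when the SDK's typed models drop it.)
    Copy code
    import requests

    API_URL = "http://localhost:8000/api/public/v1/sources"  # placeholder base URL
    TOKEN = "<access-token>"                                  # placeholder credential

    resp = requests.get(API_URL, headers={"Authorization": f"Bearer {TOKEN}"})
    resp.raise_for_status()

    # Print each source's name together with its (masked) configuration blob.
    for source in resp.json().get("data", []):
        print(source.get("name"), source.get("configuration"))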

    Nikita

    11/26/2025, 4:36 PM
    Hello! I'm trying to set up an export from GA4 to ClickHouse. This is a screenshot from the traffic_sources table. Is there a way to also include the time in the endDate and startDate columns? Thanks!

    Francis Carr

    11/26/2025, 5:00 PM
    Hey all. I am experiencing timeout issues when transferring data from Snowflake. I am currently on the official Snowflake source airbyte/source-snowflake, version 1.0.8. The source table within Snowflake is very large and it seems I am hitting a credentials timeout of around 6 hours. What seems to be happening:
    1. Airbyte makes a query for all data from the Snowflake table and this is staged internally within an S3 bucket.
    2. Airbyte then starts transferring this data out of the bucket piece by piece, sending it from the source to the destination, the destination being BigQuery.
    3. After 6 hours or so, sometimes longer, we run into a 403 Forbidden on reading the data and the job within Airbyte runs another attempt.
       a. After 5 attempts of around 6 hours each, it gives up and reports the job as a failure.
       b. When the job starts again we seem to start from the very beginning; even though the job has written a lot of records to the BigQuery destination, it doesn't seem to save where it got to in the source.
    The sync mode is Incremental | Append with a date as the cursor. Has anyone had any similar experiences with this type of Snowflake timeout? It sounds like it could be something that can be configured on the Snowflake side, but we aren't sure what configuration we could use to control this. There is documentation around the 6-hour timeout in "Using Persisted Query Results" from Snowflake, but it doesn't really say much beyond "A new token can be retrieved to access results while they are still in cache." We are also not sure what we can do on our side, as this seems to be internal to the JDBC driver that reads from Snowflake. Maybe there is a JDBC query param we could add? We know we can break up this data and there are lots of workarounds like that, but that still avoids the core issue, which can arise again. Any recommendations would be greatly appreciated! 🙏

    Alex Tasioulis

    11/26/2025, 5:27 PM
    Hi, I am exporting from MongoDB into Snowflake – everything works fine for smaller tables. I have some tables that are a bit too big in MongoDB, and I am worried they will take around 2 days to copy over, by which point the MongoDB oplog will have expired and incremental syncing would fail. Is that what will happen if the initial sync takes longer than the oplog window, or does Airbyte handle this gracefully? If not, what can I do to deal with it?

    Baudilio García Hernández

    11/26/2025, 5:54 PM
    185 - Connector entitlement not available. actorDefinitionId=c0b24000-d34d-b33f-fea7-6b96dc0e5f0d organizationId=OrganizationId(value=00000000-0000-0000-0000-000000000000)