# ask-community-for-troubleshooting
  • loganeales

    10/16/2025, 10:54 AM
    Hi Community, is anyone else encountering this issue when trying to upgrade using abctl?
    • v1.7.7 -> v1.8.5 (using v0.30.1 - attempted on 2025-10-09)
    • v1.7.7 -> v2.0.0 (using v0.30.2 - attempted today, 2025-10-16)
    Common errors extracted for both upgrade attempts:
    [...]
    INFO    Starting Helm Chart installation of 'airbyte/airbyte' (version: 2.0.18)
    ERROR   Failed to install airbyte/airbyte Helm Chart
    ERROR   Unable to install Airbyte locally
    ERROR   unable to install airbyte chart: unable to install helm: cannot patch "airbyte-abctl-cron" with kind Deployment: Deployment.apps "airbyte-abctl-cron" is invalid: spec.selector: Invalid value:
    [...]
    Install command and logs (for v1.7.7 -> v2.0.0):
    abctl local install \
    >   --host <url> \
    >   --low-resource-mode
      INFO    Using Kubernetes provider:
                Provider: kind
                Kubeconfig: /home/ssm-user/.airbyte/abctl/abctl.kubeconfig
                Context: kind-airbyte-abctl
     SUCCESS  Found Docker installation: version 25.0.8
     SUCCESS  Existing cluster 'airbyte-abctl' found
     SUCCESS  Cluster 'airbyte-abctl' validation complete
      INFO    Patching image airbyte/db:1.7.0-17
      INFO    Pulling image airbyte/async-profiler:2.0.0
      INFO    Pulling image airbyte/bootloader:2.0.0
      INFO    Pulling image airbyte/connector-sidecar:2.0.0
      INFO    Pulling image airbyte/container-orchestrator:2.0.0
      INFO    Pulling image airbyte/cron:2.0.0
      INFO    Pulling image airbyte/db:1.7.0-17
      INFO    Pulling image airbyte/server:2.0.0
      INFO    Pulling image airbyte/utils:2.0.0
      INFO    Pulling image airbyte/worker:2.0.0
      INFO    Pulling image airbyte/workload-api-server:2.0.0
      INFO    Pulling image airbyte/workload-init-container:2.0.0
      INFO    Pulling image airbyte/workload-launcher:2.0.0
      INFO    Pulling image temporalio/auto-setup:1.27.2
      INFO    Namespace 'airbyte-abctl' already exists
      INFO    Persistent volume 'airbyte-local-pv' already exists
      INFO    Persistent volume claim 'airbyte-storage-pvc' already exists
      INFO    Persistent volume 'airbyte-volume-db' already exists
      INFO    Persistent volume claim 'airbyte-volume-db-airbyte-db-0' already exists
    Environment context:
    • AWS EC2 instance (type = t3a.2xlarge)
    Similar Slack user issue: https://airbytehq.slack.com/archives/C021JANJ6TY/p1759125686634499
    Really hoping someone can assist me 🙏
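    The "cannot patch ... spec.selector: Invalid value" error is Kubernetes refusing to change a Deployment's label selector, which is immutable once created, so Helm cannot patch it in place when a new chart version changes those labels. A hedged workaround sketch, assuming abctl's default kubeconfig and context, and assuming deleting the cron Deployment is acceptable because the installer recreates it:
    # delete the Deployment whose selector Helm cannot patch
    kubectl --kubeconfig ~/.airbyte/abctl/abctl.kubeconfig \
      --context kind-airbyte-abctl -n airbyte-abctl \
      delete deployment airbyte-abctl-cron
    # then re-run the upgrade so Helm recreates it with the new selector
    abctl local install --host <url> --low-resource-mode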
  • Vishuna Parague

    10/16/2025, 2:19 PM
    Can I use Airbyte with QuickBooks Desktop as a data source and MSSQL as a destination?
  • William Guicheney

    10/16/2025, 4:05 PM
    Hey guys, if anyone from the Airbyte team could reach out to me I'd really appreciate it - I've submitted two support tickets on behalf of a client (who are Airbyte Cloud customers) over the past two weeks and have not heard anything back.
  • William Guicheney

    10/16/2025, 4:05 PM
    Please just ping me here so I can get some assistance!
  • Kallin Nagelberg

    10/16/2025, 7:10 PM
    hey everyone; first off, sorry if this isn't the right place, but I'm running into some roadblocks after installing Airbyte and would love any suggestions. I've got a connection set up with a postgres source and snowflake destination. Trying to do an initial sync (incremental | append + deduped) on a fairly narrow table with ~1 million rows. I've tried restarting a few times, and Airbyte keeps getting 'stuck' part-way through the sync. I'm using a fairly beefy 16-core, 64 GB RAM EC2 instance, and it looks healthy. The last couple of lines in the sync logs are like:
    2025-10-15 17:53:04 source INFO main i.a.i.s.p.c.CtidStateManager(generateStateMessageAtCheckpoint):79 Emitting ctid state for stream public_accounts, state is io.airbyte.integrations.source.postgres.internal.models.CtidStatus@4a9486c0[version=2,stateType=ctid,ctid=(5996,66),incrementalState={"state_type":"cursor_based","version":2,"stream_name":"accounts","stream_namespace":"public","cursor_field":["id"],"cursor":"1057496","cursor_record_count":1},relationFilenode=54853845,additionalProperties={}]
    2025-10-15 17:53:04 source INFO main i.a.c.i.s.r.s.SourceStateIterator(computeNext):50 sending state message, with count per stream: {public_accounts=10000} 
    2025-10-15 17:53:05 replication-orchestrator INFO Records read: 150000 (265 MB)
    2025-10-15 17:53:07 replication-orchestrator INFO Records read: 155000 (274 MB)
    and then nothing. There is no query running on postgres or snowflake, so I imagine something is stalling out within Airbyte. I wonder if something in the data could be causing it to fail? I can't find any additional logs, so I'll look into setting up Airbyte to produce more, but I'm somewhat out of ideas at the moment. This is my first experience/sync with Airbyte, so I'm feeling a bit disheartened at this point 💔
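    A hedged diagnostic sketch for this situation, assuming an abctl (kind) install with default paths (adjust the kubeconfig and namespace for other deployment types); the per-container logs of the replication job pod can contain more detail than the UI's sync log:
    export KUBECONFIG=~/.airbyte/abctl/abctl.kubeconfig
    kubectl get pods -A                                   # find the replication job pod
    kubectl -n airbyte-abctl logs <pod-name> --all-containers --tail=200
    kubectl -n airbyte-abctl describe pod <pod-name>      # check for OOMKilled / restarts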
  • Richard Gao

    10/17/2025, 1:34 AM
    Hello, I was looking at this PR: https://github.com/airbytehq/airbyte/pull/61370/commits/928cc4bf832ad5f83800358deb9fd6750d44d35b - upgrading the Debezium version for mongodb-source-v2 was not merged. It seems to address an issue that causes unwanted resource usage when the buffer queue is full. This would be nice to have, and we investigated doing it ourselves. However, I saw that in Debezium > 2.7.0, Debezium saves everything in BSON. This seems to be a breaking change for the connector, as it causes errors such as io.airbyte.cdk.integrations.source.relationaldb.state.FailedRecordIteratorException: java.lang.IllegalArgumentException: Non-hex character 'M' at index=0. Any suggestions on how to tackle this problem?
  • Renu Fulmali

    10/17/2025, 10:21 AM
    Hi Team, I am looking to enable Slack notifications for our current Airbyte setup, which is deployed via the Helm chart on EKS. Could you please let me know if there is a way (for example, existing environment variables in the current Airbyte Helm chart) to configure the Slack webhook URL directly through the chart?
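    As far as I know, the chart does not expose a dedicated value for a Slack webhook; notifications are configured per workspace instead. A hedged sketch against the internal config API (the endpoint and field names assume the legacy v1 workspace-update shape, so verify them against your version's API docs):
    curl -X POST "http://<airbyte-host>/api/v1/workspaces/update" \
      -H "Content-Type: application/json" \
      -d '{
            "workspaceId": "<workspace-id>",
            "notifications": [{
              "notificationType": "slack",
              "sendOnFailure": true,
              "sendOnSuccess": false,
              "slackConfiguration": { "webhook": "https://hooks.slack.com/services/..." }
            }]
          }'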
  • Chris O'Brien

    10/17/2025, 6:21 PM
    I'm working with Airbyte for the first time and trying to build pagination into a GraphQL request where the cursor needs to be passed into the GraphQL body. I can see lots of great examples of pagination in the request but not as much for injecting it into the GraphQL body. Is there some documentation I've missed?
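    A hedged sketch of one way to do this in the low-code YAML underlying the Builder: interpolate next_page_token directly into request_body_json rather than relying on page_token_option injection. The endpoint, query, and response paths below are hypothetical:
    requester:
      type: HttpRequester
      url_base: https://example.com/graphql   # hypothetical endpoint
      http_method: POST
      request_body_json:
        query: >
          query ($cursor: String) {
            items(first: 50, after: $cursor) {
              nodes { id name }
              pageInfo { endCursor hasNextPage }
            }
          }
        variables:
          # next_page_token is exposed to interpolation as {"next_page_token": <cursor>}
          cursor: "{{ next_page_token.get('next_page_token') if next_page_token else '' }}"
    paginator:
      type: DefaultPaginator
      pagination_strategy:
        type: CursorPagination
        cursor_value: "{{ response['data']['items']['pageInfo']['endCursor'] }}"
        stop_condition: "{{ not response['data']['items']['pageInfo']['hasNextPage'] }}"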
  • Kallin Nagelberg

    10/17/2025, 7:10 PM
    Hey team! 👋 I've been struggling with OOMKilled errors on replication jobs using Airbyte Helm chart v2.0.18 (community edition). After extensive debugging, I discovered that global.jobs.resources in values.yaml is not being used by the chart templates.
    Steps to reproduce:
    1. Install Airbyte using the Helm v2 chart (2.0.18)
    2. Set job resources in values.yaml:
    global:
      jobs:
        resources:
          requests:
            cpu: "1"
            memory: "8Gi"
          limits:
            cpu: "4"
            memory: "16Gi"
    3. Apply with helm upgrade
    4. Check the configmap: kubectl get configmap airbyte-airbyte-env -n airbyte-v2 -o yaml | grep JOB_MAIN_CONTAINER
    5. Result: all JOB_MAIN_CONTAINER_* values are empty strings ""
    6. Replication job pods still use the default 2Gi memory limit and OOMKill on large syncs
    Root cause: the chart templates in templates/config/_workloads.tpl read from .Values.global.workloads.resources.mainContainer.*, not from .Values.global.jobs.resources.*
    Workaround that works:
    global:
      workloads:
        resources:
          mainContainer:
            cpu:
              request: "1"
              limit: "4"
            memory:
              request: "8Gi"
              limit: "16Gi"
    This properly populates the JOB_MAIN_CONTAINER_* environment variables, and the replication jobs respect the limits.
    Questions:
    1. Is global.jobs.resources supposed to work but there's a bug?
    2. Or should the docs be updated to show global.workloads.resources.mainContainer as the correct path?
    3. Should global.jobs.resources be deprecated/removed from the values schema if it's not used?
    Happy to provide more details or open a GitHub issue if helpful!
  • Paul

    10/18/2025, 8:01 AM
    Just wondering: with the connector builder, how does Airbyte know when it's being rate limited? What does it look for?
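    My hedged understanding: by default the declarative requester treats HTTP 429 (and 5xx) responses as transient rate-limit/server errors and backs off, honoring a Retry-After header when present. In the YAML behind the Builder this can be made explicit with an error handler, e.g.:
    requester:
      error_handler:
        type: DefaultErrorHandler
        backoff_strategies:
          - type: WaitTimeFromHeader
            header: Retry-After    # wait however long the API asks
        response_filters:
          - type: HttpResponseFilter
            action: RETRY          # treat 429 as retryable rate limiting
            http_codes: [429]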
  • Paul

    10/18/2025, 12:41 PM
    Using the connector builder, can I return a list of codes that I want to use as parameters in the child stream? It only accepts 50 codes at a time.
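    A hedged sketch of one way to express this in the manifest, assuming a CDK recent enough to support GroupingPartitionRouter (it batches parent partitions, here into groups of 50; the stream and field names are hypothetical):
    partition_router:
      type: GroupingPartitionRouter
      group_size: 50
      underlying_partition_router:
        type: SubstreamPartitionRouter
        parent_stream_configs:
          - type: ParentStreamConfig
            stream:
              $ref: "#/definitions/streams/codes"   # hypothetical parent stream
            parent_key: code
            partition_field: code
    requester:
      request_parameters:
        # with grouping, stream_partition['code'] is a list of up to 50 codes
        codes: "{{ stream_partition['code'] | join(',') }}"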
  • Paul

    10/19/2025, 12:17 PM
    When running a newer version of abctl, do I need to also provide the values/secrets files? Or are these settings maintained, even on a local install?
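    My hedged understanding is that abctl does not remember flags between runs: each abctl local install performs a fresh Helm upgrade with whatever you pass it, while the data in the kind cluster's volumes (connections, sources, etc.) persists. So values/secrets files should be supplied again, e.g.:
    abctl local install \
      --host <url> \
      --values ./values.yaml \
      --secret ./secrets.yaml   # --secret can be repeated for multiple files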
  • Berke Eruz

    10/19/2025, 1:27 PM
    Has anyone created a HubSpot destination on Airbyte? Since I am running Airbyte on a private EC2 instance that I connect to via VPN, HubSpot cannot reach /auth_flow because my Airbyte is not publicly exposed. Is there a workaround?
  • gaurav vivek

    10/19/2025, 2:00 PM
    I am running Airbyte 1.8 using Helm. It was working, but now I am getting the error below during check_connection, while creating the source succeeds:
    "failureOrigin": "airbyte_platform",
    "externalMessage": "Workload failed, source: airbyte_platform",
    "internalMessage": "Unable to persist the job Output, check the document store credentials."
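    "Unable to persist the job Output, check the document store credentials" usually points at the platform's log/state object storage rather than the source itself. A hedged values.yaml sketch of the storage block involved (keys follow the documented global.storage layout; bucket names and region are placeholders):
    global:
      storage:
        type: minio            # or "s3", "gcs", "azure"
        bucket:
          log: airbyte-storage
          state: airbyte-storage
          workloadOutput: airbyte-storage
        # for S3, for example:
        # s3:
        #   region: us-east-1
        #   authenticationType: credentials   # keys supplied via the secrets file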
  • Johannes Haposan Napitupulu

    10/20/2025, 11:16 AM
    Hi. We are using Airbyte 1.8 here. How do we stream data from Postgres to an Iceberg+Nessie table while following the existing Iceberg table partitioning? We pre-created the Iceberg table with partitioning. When the sync finishes, there is a new folder on the object storage containing the data, but with no partitioning. Meanwhile, if we use a normal INSERT query against the Iceberg table, it automatically applies the partitioning (the folder location and hierarchy level are identical).
  • Chris Herron

    10/20/2025, 4:21 PM
    For those who have encountered problems with Zscaler SSL inspection (or similar endpoint MITM tech) preventing Airbyte Docker images from spinning up: which domains/URLs did you whitelist to unblock it?
  • Tom Sweeting

    10/20/2025, 5:13 PM
    Hi All - has anyone had success getting past OOMKilled errors for larger tables to a Postgres destination? An example problem stream is the initial load for a MySQL CDC source -> Postgres destination, where the table contains 3.7M records / 5.6 GB of data. I have tried tuning the resources and confirmed my settings are applying, but my syncs reliably fail whenever I try to sync a table whose entire dataset does not fit into memory. Logs show it gets past fetching data and the "typing and deduping" steps and moves on to copying data to the destination. According to documentation, Airbyte is supposed to batch into 10,000-row sets - if my math is correct, that should mean roughly 15 MB per batch. This seems contrary to what I'm actually seeing, where the only way to get a stream to succeed is to allocate at least 2x the data size in memory. I'm currently running Airbyte 2.0.0 (Helm chart 2.0.18), but this behavior has been a consistent problem in older versions too. Over the past weeks I have tested pretty much every version between 1.6.2 and 2.0.0. Happy to share more info if it is helpful.
  • Kallin Nagelberg

    10/20/2025, 5:26 PM
    Hi team! 👋 I'm running into an issue with increasing memory for connector pods in Airbyte 2.0 on Kubernetes (Helm v2 charts).
    Problem: my postgres source connector is being OOMKilled with the default 2GB limit, and I need to increase it to 4GB.
    Documentation gap: the "Configuring Connector Resources" page (https://docs.airbyte.com/platform/operator-guides/configuring-connector-resources) says to "set the following variables in your values.yaml file" and lists:
    - JOB_MAIN_CONTAINER_MEMORY_REQUEST
    - JOB_MAIN_CONTAINER_MEMORY_LIMIT
    - etc.
    But it doesn't specify where in values.yaml these should go. After digging through GitHub issues and the Helm chart structure, I think they go under workloadLauncher.extraEnv, but I'm not certain.
    Questions:
    1. Is workloadLauncher.extraEnv the correct place for these variables in Helm v2?
    2. Is there newer documentation or a different approach I should be using instead?
    3. Would it be possible to update the docs to show the full values.yaml structure?
    Currently using:
    - Airbyte 2.0 with Helm v2 charts
    - K3s cluster
    - RDS Postgres → Snowflake connection
    Thanks for any guidance! The product is great, just trying to get the resource configuration sorted. 🙏
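    For reference, a hedged sketch of the workloadLauncher.extraEnv approach mentioned above, using the chart's generic extraEnv list (whether this is the blessed path in Helm v2 is exactly the open question; see also the global.workloads.resources.mainContainer workaround reported earlier in this channel):
    workloadLauncher:
      extraEnv:
        - name: JOB_MAIN_CONTAINER_MEMORY_REQUEST
          value: "2Gi"
        - name: JOB_MAIN_CONTAINER_MEMORY_LIMIT
          value: "4Gi"
        - name: JOB_MAIN_CONTAINER_CPU_REQUEST
          value: "1"
        - name: JOB_MAIN_CONTAINER_CPU_LIMIT
          value: "2"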
  • Johannes Haposan Napitupulu

    10/21/2025, 8:16 AM
    Hi, I want to upgrade to Airbyte version 2.0.0 but am still using Helm v1, so the chart version points to version = "1.8.5". In values.yaml, should I change global.version to "2.0.0" and also put the image name and tag in global.image.tag? I don't want to change to Helm v2 because it will require some effort, and if I force it, I get errors like:
    Message: Could not resolve placeholder ${AIRBYTE_URL}
    Message: Could not resolve placeholder ${STORAGE_BUCKET_AUDIT_LOGGING}
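    For what it's worth, a hedged sketch of the override being described, using the keys named in the question (whether the 2.0.0 application images actually work with the 1.8.5 v1 chart is unverified, and the placeholder errors above suggest 2.0.0 expects the v2 chart's configuration):
    global:
      version: "2.0.0"
      image:
        tag: "2.0.0"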
  • Luc Lagrange

    10/21/2025, 8:58 AM
    Hello there 🙂 I created a sync from BigQuery (source) to HubSpot (destination). It works fine. However, I added a column to my BigQuery model but have not seen this column appear in the Airbyte mapping tab.
  • Iwo

    10/21/2025, 10:46 AM
    Hello everyone 🙂 I'm testing a self-hosted installation and I don't seem to be able to create new workspaces via the UI. I ran abctl local install and I have the organization-admin permission, but there is no way to manage orgs/workspaces in the UI. I'm attaching a screenshot of what I see in the UI. I'm able to create workspaces over the API, but they don't seem to be available for my user either.
  • Kimmo Lahdenkangas

    10/21/2025, 6:07 PM
    Hi folks! Has anybody experienced something like this on a Kubernetes deployment (EKS, AB 1.8, Helm v2)? Seeing every nth sync fail with "An internal transient Airbyte error has occurred. The sync should work fine on the next retry." The same pipeline configurations worked fine on an EC2-based deployment. It doesn't give much to go on, but it sounds like it could be a resource availability issue. I've been trying to tweak some configs but have not been able to resolve it yet.
  • Guillermo Torres

    10/21/2025, 7:58 PM
    Hi, I want to create OAuth override credentials for a workspace and source type. I'm using this guide: https://reference.airbyte.com/reference/workspaceoauthcredentials. The endpoint returns 200 OK, but I still can't use my own credentials. Any ideas where I'm going wrong? I already have the redirects and my credentials configured in the Google console.
  • Chris Farrington

    10/22/2025, 3:03 AM
    Has anyone ever had trouble creating sources via the API? Specifically, when I create a source, the delivery method is not set, which breaks everything else the source is used for unless I go into the UI and set it manually.
  • Paul

    10/22/2025, 11:58 AM
    Can I use a partition router in version 1.8.5 by using custom components? Show me how.
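    A minimal hedged sketch of a declarative partition router in a manifest; if the platform's bundled CDK supports it, no custom component should be needed (the values and field names are hypothetical):
    retriever:
      partition_router:
        type: ListPartitionRouter
        values: ["US", "CA", "GB"]     # hypothetical partition values
        cursor_field: region
        request_option:
          type: RequestOption
          inject_into: request_parameter
          field_name: region           # adds ?region=<partition> to each request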
  • Thiago

    10/22/2025, 1:48 PM
    Still having this issue after installing Airbyte with the latest abctl version; when I try to access it I get:
    HttpError {
      "i18nKey": "errors.http.notFound",
      "i18nParams": { "status": 404 },
      "name": "HttpError",
      "requestId": "4jqEJZXqfhVtwdBDxQv3p2",
      "request": {
        "url": "/api/v1/workspaces/get",
        "method": "POST",
        "headers": { "Content-Type": "application/json" },
        "data": { "workspaceId": "48e3540c-07e9-403d-a5b3-a4190b6362d7" }
      },
      "status": 404,
      "response": {
        "message": "Internal Server Error: Could not find configuration for STANDARD_WORKSPACE: 48e3540c-07e9-403d-a5b3-a4190b6362d7.",
        "exceptionClassName": "io.airbyte.commons.server.errors.IdNotFoundKnownException",
  • Bohdan Tertyshnyi

    10/22/2025, 1:50 PM
    Hello, my Airbyte still isn't running any syncs on version 2.0 (originally posted on release day: https://airbytehq.slack.com/archives/C021JANJ6TY/p1760524222333319). Everything was working well before the update, but after upgrading to 2.0, syncs won't start. The error I get is 'Airbyte could not start the process within time limit. The workload was never claimed.' Airbyte runs without low-resource mode on a GCP VM with 8 vCPUs and 16 GB memory. Any suggestions to fix it?
  • Meghana

    10/22/2025, 3:53 PM
    @kapa.ai airbyte losing DB connection due to authentication issue
  • Paul

    10/22/2025, 9:08 PM
    I'm running Airbyte 1.8.5 and am trying to use GroupingPartitionRouter but can't seem to get it to work. Is it possible?
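    A hedged sketch of the shape GroupingPartitionRouter takes in a manifest. Note it was added in a relatively recent CDK version, so whether the CDK bundled with platform 1.8.5 accepts it is unverified; all values below are hypothetical:
    partition_router:
      type: GroupingPartitionRouter
      group_size: 10                   # number of partitions per batch
      deduplicate: true
      underlying_partition_router:
        type: ListPartitionRouter
        values: ["a", "b", "c"]
        cursor_field: code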