# ask-community-for-troubleshooting
  • m

    Matheus Dantas

    08/25/2025, 7:30 AM
Hello people, I have one simple question here: what is the reason for using `abctl` instead of `helm`? Helm is a common tool used to deploy applications to Kubernetes. What is the added value of using `abctl`?
    k
    • 2
    • 1
  • i

    Iulian

    08/25/2025, 10:39 AM
Will there ever be exposed metrics for Airbyte OSS, so that the Prometheus operator can pick them up?
    k
    • 2
    • 1
  • h

    Harel Trabelsi

    08/25/2025, 11:52 AM
Hey there! I have recently updated my company's on-prem Airbyte from version 1.5 to 1.8. Ever since the update I've been facing a problem with scheduled syncs. I have ~25 syncs supposed to run at the same time. Before updating to version 1.8, all syncs would work perfectly every time; now 2/3 syncs consistently fail, different connections every time, but Airbyte reports them as successful rather than failed. Tracing the logs and events, I could see multiple replication pods delayed for 30 minutes until they start and then immediately close with a Pod Failure error. Has anyone experienced anything similar? Are there any major resource bumps necessary going from version 1.5 to 1.8?
    k
    • 2
    • 1
  • k

    keshav

    08/25/2025, 12:14 PM
I'm experiencing persistent installation failures with abctl on AWS EC2 and would appreciate any guidance or insights.
Environment Setup:
• Instance: AWS EC2 t3.large (2 vCPUs, 8GB RAM, 45GB storage)
• OS: Amazon Linux 2023
• Docker: 25.0.8
• abctl: 0.30.1
• Region: ap-south-1 (India)
The Problem: Every abctl local install attempt fails at the exact same point: during the nginx/ingress-nginx Helm chart installation. The process runs for 75+ minutes before timing out.
Command used:
abctl local install --host myairbytezin.duckdns.org --insecure-cookies --port 8000
Error Pattern:
✅ Cluster creation succeeds
✅ Initial setup completes
❌ Gets stuck at: Installing 'nginx/ingress-nginx' (version: 4.13.1) Helm Chart
❌ Repeated timeout errors:
W0823 04:39:38.002418 13320 reflector.go:561] failed to list *unstructured.Unstructured: Get "https://127.0.0.1:34281/apis/batch/v1/namespaces/ingress-nginx/jobs": dial tcp 127.0.0.1:34281: i/o timeout
Resources Confirmed Sufficient:
• Storage: 36GB free (21% usage)
• Memory: 7.0GB available (only 387MB used)
• Docker: Healthy with 6.7GB reclaimable space
    k
    • 2
    • 1
  • j

    Jeff Gr

    08/25/2025, 1:01 PM
Hello Everyone, I need to set up Airbyte in an air-gapped environment, using Rancher as the K8s manager. --> I can helm install it, and the app works alright. --> What I can't do is set up any of the connectors. I have the source-* and destination-* images I'm interested in already available in a local, reachable registry, but it looks like Airbyte is still looking for something on Docker Hub and therefore breaking. Is there any definitive guide for air-gapped setups such as this one? Thanks in advance!
    k
    • 2
    • 1
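For the air-gapped question above: Airbyte's Helm docs describe a global image-registry override in `values.yaml` that redirects platform and connector image pulls to a private registry. A minimal sketch (key names may differ across chart versions, and the registry host is a placeholder; verify against the "Custom image registries" page for your chart):

```yaml
# values.yaml (sketch): pull all Airbyte images from a local registry
# instead of Docker Hub. registry.internal.example:5000 is hypothetical.
global:
  image:
    registry: registry.internal.example:5000
```

Note that connector images are pulled by the job pods at sync time, so the registry must be reachable from those pods as well, and the source-*/destination-* images must be mirrored under the same names the platform expects.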
  • y

    Yuvaraj Prem Kumar

    08/25/2025, 1:28 PM
    Hi all, anyone facing this issue on the community version? According to https://docs.airbyte.com/platform/enterprise-setup/upgrade-service-account, this should only be relevant for Self-Managed Enterprise - so why does this happen for the community version?
    Copy code
    2025-08-25 13:20:01,087 [main]	ERROR	i.a.b.ApplicationKt(main):28 - Unable to bootstrap Airbyte environment.
    java.lang.IllegalStateException: Upgrade to version AirbyteVersion{version='1.8.1', major='1', minor='8', patch='1'} failed. As of version 1.6 of the Airbyte Platform, we require your Service Account permissions to include access to the "secrets" resource. To learn more, please visit our documentation page at <https://docs.airbyte.com/enterprise-setup/upgrade-service-account>.
    	at io.airbyte.bootloader.AuthKubernetesSecretInitializer.checkAccessToSecrets(AuthKubernetesSecretInitializer.kt:58)
    	at io.airbyte.bootloader.Bootloader.load(Bootloader.kt:75)
    	at io.airbyte.bootloader.ApplicationKt.main(Application.kt:25)
    k
    • 2
    • 1
  • s

    Soo Peng Kiat

    08/25/2025, 3:43 PM
    Hey, has anyone encountered this issue? I have a connection syncing data from MySQL to Redshift that was running fine for months. However, after I refreshed the stream for one specific table, the load didn’t complete and we had to cancel it. Since then, all tasks have been failing — though other tables are still updating, except for the one I attempted the full load on.
  • v

    Viru Janadri

    08/25/2025, 8:15 PM
Hi Team, I'm reaching out to request a new connector for YugabyteDB. YugabyteDB is a distributed SQL database based on PostgreSQL; it's essentially a fork/extension of PostgreSQL designed for distributed, cloud-native environments. While PostgreSQL uses the `ctid` keyword for certain internal operations, YugabyteDB uses `ybctid` instead. This difference is documented in the YugabyteDB GitHub repository: yugabyte/yugabyte-db#1284. Given this key difference in syntax, would it be possible for the Airbyte team to develop a dedicated YugabyteDB connector? While YugabyteDB maintains PostgreSQL compatibility in most areas, the `ybctid` vs `ctid` difference could cause issues when using the standard PostgreSQL connector. Thanks for considering this request!
    k
    • 2
    • 4
  • a

    Aya Zeplovitch

    08/25/2025, 11:56 PM
    Hi community -- I just tried to upgrade the version and now all my connections are gone. We are also unable to reconfigure any of them. Any idea on how to fix this? We are running it on a VM using abctl
    k
    a
    • 3
    • 3
  • k

    kapa.ai

    08/26/2025, 1:22 AM
Based on the information in the knowledge sources, the error message you’re seeing—“MySQL Connector Error: The sync encountered an unexpected error in the change event producer and has stopped”—is commonly associated with issues in the MySQL CDC (Change Data Capture) process, particularly with binlog deserialization or communication failures between Airbyte and the MySQL server. Several possible root causes and troubleshooting steps are highlighted in the sources:
1. Binlog Deserialization Errors
Many users have reported errors like `Failed to deserialize data of EventHeaderV4` or `Failed to read next byte from position...`, which are often related to issues with the MySQL binary log (binlog) configuration or corruption, or with how Airbyte/Debezium reads the binlog. For example, if the binlog is purged or truncated while Airbyte is still reading from it, or if there is a schema change that is not properly handled, these errors can occur.
- Check your MySQL binlog retention settings (`binlog_expire_logs_seconds` or, for RDS, `binlog retention hours`) and ensure they are set high enough that binlogs are not purged before Airbyte can read them.
- If you see errors about missing `TableMapEventData`, it may indicate Airbyte is trying to read from a position in the binlog that is no longer valid or is mid-event, which can happen if the binlog was rotated or purged unexpectedly.
- Schema changes in the source database can also cause CDC to fail if the connector is not reset or the schema is not refreshed after the change.
[MySQL CDC Issue - Debezium error](https://github.com/airbytehq/airbyte/issues/51607), [MySQL Troubleshooting](https://docs.airbyte.com/integrations/sources/mysql/mysql-troubleshooting#troubleshooting)
2. MySQL Configuration Variables
Some users have found that certain MySQL configuration variables, such as `log_bin_use_v1_row_events`, can affect CDC functionality. Setting this variable to `OFF` has resolved issues for some users. [MySQL CDC Issue - Debezium error](https://github.com/airbytehq/airbyte/issues/51607)
3. Connector and Airbyte Version Compatibility
There are reports that certain issues only occur with specific versions of the MySQL source connector (e.g., 3.11.x) and not with earlier versions (e.g., 3.9.4). If you recently upgraded Airbyte or the connector, consider testing with a different version. [MySQL CDC Issue - Debezium error](https://github.com/airbytehq/airbyte/issues/51607)
4. General Troubleshooting Steps
- Ensure your MySQL server version is supported (5.6, 5.7, 8.0, or 8.4).
- Make sure the tables being replicated have primary keys for incremental sync.
- If you see errors about missing binlogs, try syncing more frequently or increasing binlog retention.
- If schema changes occurred, refresh the schema or reset the connection in Airbyte.
- Review the Airbyte and connector logs for more specific error messages to pinpoint the issue.
[MySQL Troubleshooting](https://docs.airbyte.com/integrations/sources/mysql/mysql-troubleshooting#troubleshooting)
Summary of Next Steps:
- Check your MySQL binlog retention and make sure it is set high enough.
- Review for any recent schema changes and refresh/reset the Airbyte connection if needed.
- Check the value of `log_bin_use_v1_row_events` and set it to `OFF` if it is `ON`.
- Consider downgrading or upgrading your MySQL source connector if the issue started after a version change.
- Examine the Airbyte logs for more detailed error messages to further diagnose the root cause.
If you can provide the specific error details from the Airbyte logs, I can help you further narrow down the issue.
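The retention check above can be sketched as a small shell helper. The values and threshold are purely illustrative; on stock MySQL you would read the current setting via `SHOW VARIABLES`, and on RDS via the `mysql.rds_show_configuration` procedure:

```shell
# Sketch: sanity-check binlog retention against the longest expected gap
# between syncs. On the server you would first read the current setting:
#   mysql -e "SHOW VARIABLES LIKE 'binlog_expire_logs_seconds';"
# or on RDS:
#   mysql -e "CALL mysql.rds_show_configuration;"   # shows binlog retention hours
retention_ok() {
  # $1 = binlog_expire_logs_seconds, $2 = longest gap between syncs (seconds)
  if [ "$1" -ge "$2" ]; then echo "ok"; else echo "increase retention"; fi
}

retention_ok 259200 86400   # 3 days of binlogs vs. a 1-day sync gap -> ok
```

The rule of thumb is simply that binlogs must outlive the longest interval Airbyte might go without reading them, including retries and paused connections.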
  • r

    Raihan Fathul Bayan

    08/26/2025, 1:23 AM
    Hi everyone, I’m migrating data from MySQL → BigQuery using Airbyte with CDC enabled. A few syncs worked fine before, but since Aug 25, 2025, I keep hitting this error:
    Failure in source: MySQL Connector Error: The sync encountered an unexpected error in the change event producer and has stopped. Please check the logs for details and troubleshoot accordingly.
MySQL setup privileges:
    SELECT, INSERT, UPDATE, DELETE, RELOAD, INDEX, ALTER, SHOW DATABASES,
    CREATE, DROP, CREATE TEMPORARY TABLES, EXECUTE, CREATE VIEW, CREATE ROUTINE,
    EVENT, TRIGGER, REPLICATION SLAVE, BINLOG MONITOR
1. What typically causes the “unexpected error in the change event producer” during CDC sync? 2. Is the `REPLICATION CLIENT` privilege required in addition to `REPLICATION SLAVE` for Airbyte/Debezium to work reliably with MySQL binlogs? Thanks a lot for any advice! 🙏
    k
    • 2
    • 4
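On question 2: Debezium's MySQL connector documentation does list REPLICATION CLIENT alongside REPLICATION SLAVE (it is needed for statements like SHOW MASTER STATUS, which the connector uses to read the binlog position). A minimal-grant sketch, with 'airbyte'@'%' as a placeholder user/host:

```shell
# Sketch: minimal grants Debezium-based MySQL CDC typically needs
# (per Debezium's MySQL connector docs). Scope the host pattern to taste.
GRANT_SQL="GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'airbyte'@'%';"
echo "$GRANT_SQL"
# Apply with, e.g.:  mysql -u root -p -e "$GRANT_SQL"
```

One wrinkle: your grant list shows BINLOG MONITOR, which is MariaDB's rename of REPLICATION CLIENT, so if this server is MariaDB that requirement may already be covered.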
  • d

    Dimitris Samouil

    08/26/2025, 4:40 AM
Hi team, after upgrading Airbyte from 1.41.0 to 1.7.2, we have noticed that some fields from our Iterable source are no longer being ingested into BigQuery. Specifically, for push events, fields like `campaignId` and `workflowId` are missing, which results in incomplete data compared to what we had before the upgrade. Has anyone else run into this issue, or is there a known fix? Thanks!
    k
    • 2
    • 1
  • a

    Abhay Kevat

    08/26/2025, 7:30 AM
    Hello Community, I originally installed Airbyte on EC2 using abctl by following the official setup guide here: https://docs.airbyte.com/platform/deploying-airbyte/abctl-ec2. Now I’d like to update Airbyte to the latest version, but I want to make sure I do this safely without disrupting my existing connections, sources, and destinations. Additionally, my Airbyte instance is running on a custom domain (e.g., abc.airbyte.com), so I’d like to know if there are any extra steps I should take into account during the upgrade. What’s the recommended approach to upgrade in this setup?
    k
    • 2
    • 1
  • v

    Vincent Lange

    08/26/2025, 9:03 AM
My team wants to set up our ingestion pipelines as code, i.e. configure all sources, destinations, and connections in code files instead of relying on setting everything up via the Airbyte UI. We've narrowed down the options to either the Airbyte API (via the Python SDK) or the Airbyte Terraform provider (which uses the Airbyte API under the hood). PyAirbyte initially looked the most convenient syntax-wise, but it doesn't seem to support interfacing directly with the actual Airbyte platform (which we intend to run via https://docs.airbyte.com/platform/deploying-airbyte/abctl). Wondering which of these will be the most supported going forward (since neither has had updates for a while now), or whether there is another, better way to do it. Happy to hear your opinions!
    k
    • 2
    • 3
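For reference, a minimal sketch of the Terraform-provider route. The provider source is airbytehq/airbyte; the server URL, credentials, and attribute names below are placeholders/assumptions and should be checked against the current provider documentation for your version:

```hcl
terraform {
  required_providers {
    airbyte = {
      source = "airbytehq/airbyte"
    }
  }
}

# Points at a self-managed instance's public API.
# URL and credential attributes are illustrative; verify in the provider docs.
provider "airbyte" {
  server_url = "http://localhost:8000/api/public/v1"
  username   = "airbyte"
  password   = var.airbyte_password
}
```

Since the Terraform provider is generated from the same public API the Python SDK wraps, the practical difference is mostly state management (Terraform tracks drift) versus imperative scripting.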
  • y

    Yuvaraj Prem Kumar

    08/26/2025, 11:03 AM
Has anyone faced this error with AWS Secrets Manager:
    Copy code
    Error: couldn't find key AWS_SECRET_MANAGER_ACCESS_KEY_ID in Secret /airbyte-config-secrets
    When using instanceProfile, you should not need to provide aws-secret-manager-access-key-id or aws-secret-manager-secret-access-key in your Kubernetes secret. If Airbyte still requests them, it may indicate a misconfiguration or a bug.
    My IAM role has access, and I already set authenticationType: instanceProfile
    k
    • 2
    • 1
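For comparison, a sketch of the Helm values the Airbyte docs describe for instance-profile authentication, with no static keys referenced anywhere. Key names can vary by chart version and the region is illustrative; verify against the chart's secrets-manager documentation:

```yaml
# values.yaml (sketch): AWS Secrets Manager via the node's IAM instance
# profile. With authenticationType: instanceProfile, no access-key entries
# should be needed in the airbyte-config-secrets Kubernetes secret.
global:
  secretsManager:
    type: awsSecretManager
    awsSecretManager:
      region: ap-southeast-1
      authenticationType: instanceProfile
```

If the chart still complains about AWS_SECRET_MANAGER_ACCESS_KEY_ID, it is worth grepping your values for any leftover keys that still reference the access-key entries of the secret, since those make the chart try to read them regardless of the authentication type.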
  • j

    James Huang

    08/26/2025, 11:45 AM
👋 Hello, team! Although it says abctl will handle the ingress after updating abctl, I noticed that the ingress still points to airbyte-abctl-airbyte-webapp-svc in v1.8.1. This leads to a 503 error.
    Copy code
    brew upgrade abctl
    k
    • 2
    • 4
  • a

    Alexander_mozilla

    08/26/2025, 2:35 PM
    👋 Hello, team!
    k
    • 2
    • 1
  • a

    Alexander_mozilla

    08/26/2025, 2:36 PM
I'm getting an issue with airbyte-temporal while connecting to the PostgreSQL database. The airbyte-temporal logs show: nc: bad address '' ... waiting for PostgreSQL to start up
    k
    • 2
    • 1
  • a

    Alexander_mozilla

    08/26/2025, 2:38 PM
airbyte-temporal:
  image: airbyte/temporal:0.63.1
  environment:
    - DATABASE_HOST=active_stack_postgres
    - DATABASE_PORT=5432
    - DATABASE_DB=temporal
    - DATABASE_USER=airbyte
    - DATABASE_PASSWORD=airbyte
  command: ["/usr/local/bin/wait-for-it.sh", "active_stack_postgres:5432", "--", "./update-and-start-temporal.sh"]
  volumes:
    - /home/vm01/lab_stack/wait-for-it.sh:/usr/local/bin/wait-for-it.sh:ro
  deploy:
    replicas: 1
    restart_policy:
      condition: on-failure
  depends_on:
    - active_stack_postgres
  networks:
    - pipeline_net
This is my Airbyte service in the Docker stack YAML, and I'm using Docker Swarm for the network.
    k
    • 2
    • 1
  • u

    문주은

    08/26/2025, 2:47 PM
Hello, team. When I run 'abctl local credentials --email {email_address}', there's an error:
    Copy code
    unable to udpate the email address: failed to update organization: unexpected status code: 401
I also tried entering the email address in the web UI to enroll it, but it didn't get set either; checking with 'abctl local credentials' shows it is not set. And if I just go to the web UI login page at https://airbyte-domain.com, POST https://airbyte-domain.com/api/v1/users/get returns a 401 Unauthorized error. Also, when I create a source connection in Airbyte, there's an HTTP 504 error, with an Unauthorized error in the abctl-airbyte-worker pod. How can I solve this issue?
    k
    • 2
    • 1
  • b

    Brandon Munoz

    08/26/2025, 6:08 PM
Hey guys, we’re currently testing WAL-based CDC using Airbyte 1.7.2 (deployed via Helm on EC2) with a PostgreSQL → Snowflake pipeline, and we’ve noticed some unexpected behavior. Every time a sync runs:
• It takes about 1 minute to extract data
• Then it idles for ~20 minutes before actually writing the data to Snowflake
Interestingly, if a new row is inserted during that idle period, the sync resumes immediately and completes, skipping the remaining wait time. We found that the "Initial Waiting Time in Seconds" parameter was set to 1200 seconds (20 minutes), which explains the delay. However, it’s unclear why the sync needs to wait at all if there’s no new data; shouldn't the replication slot already indicate when no further changes are available? Ideally, we want the sync to immediately capture all available changes at execution time, without waiting for an arbitrary timeout, similar to how it behaves when a new row is inserted mid-execution.
    k
    • 2
    • 1
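The wait described above corresponds to the `initial_waiting_seconds` option in the Postgres source's CDC configuration, which bounds how long Debezium keeps listening for a first change event before giving up. A sketch of the relevant fragment of the source configuration (field names per the Postgres source spec; slot, publication, and the 120-second value are illustrative, and exact names may vary by connector version):

```json
{
  "replication_method": {
    "method": "CDC",
    "plugin": "pgoutput",
    "replication_slot": "airbyte_slot",
    "publication": "airbyte_publication",
    "initial_waiting_seconds": 120
  }
}
```

Lowering this value shortens the idle period at the cost of a higher risk of a sync closing before a slow first event arrives, which is why the default is conservative.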
  • i

    Ivan Barbosa Pinheiro

    08/26/2025, 7:26 PM
    I updated Airbyte, which is running on a VM on GCP, using abctl version 0.30.1. However, I'm now getting the following error when configuring a source: An unknown error occurred. (HTTP 504). I tried the following solution below, but it didn't work: https://github.com/szemek/troubleshooting/blob/main/airbyte.md#an-unknown-error-occurred-http-504
    k
    a
    +2
    • 5
    • 24
  • s

    Steve Ayers

    08/27/2025, 7:02 AM
Hi everyone, has anyone successfully migrated an existing Docker Compose install to the new abctl install? The --migrate flag was removed.
    k
    • 2
    • 1
  • b

    Bhaskar Saraogi

    08/27/2025, 8:48 AM
I updated Airbyte using the following command:
abctl local install --values values.yaml --insecure-cookies
and I'm getting the error below. Can anyone please help?
    Copy code
    2025-08-27 07:48:46,474 [Thread-2]	ERROR	i.a.w.l.StartupApplicationEventListener(onApplicationEvent$lambda$1):43 - Failed to retrieve and resume claimed workloads, exiting.
              io.airbyte.api.client.ApiException: Unauthorized
              	at io.airbyte.workload.api.client.WorkloadApiClient.workloadList(WorkloadApiClient.kt:283)
              	at io.airbyte.workload.launcher.ClaimedProcessor.getWorkloadList$lambda$9(ClaimedProcessor.kt:112)
              	at dev.failsafe.Functions.lambda$toCtxSupplier$11(Functions.java:243)
              	at dev.failsafe.Functions.lambda$get$0(Functions.java:46)
              	at dev.failsafe.internal.RetryPolicyExecutor.lambda$apply$0(RetryPolicyExecutor.java:74)
              	at dev.failsafe.SyncExecutionImpl.executeSync(SyncExecutionImpl.java:187)
              	at dev.failsafe.FailsafeExecutor.call(FailsafeExecutor.java:376)
              	at dev.failsafe.FailsafeExecutor.get(FailsafeExecutor.java:112)
              	at io.airbyte.workload.launcher.ClaimedProcessor.getWorkloadList(ClaimedProcessor.kt:110)
              	at io.airbyte.workload.launcher.ClaimedProcessor.retrieveAndProcess(ClaimedProcessor.kt:57)
              	at io.airbyte.workload.launcher.StartupApplicationEventListener.onApplicationEvent$lambda$1(StartupApplicationEventListener.kt:39)
              	at kotlin.concurrent.ThreadsKt$thread$thread$1.run(Thread.kt:30)
    k
    • 2
    • 5
  • n

    Narikorn (Beb) Phitagragsakul

    08/27/2025, 11:14 AM
Airbyte sync from MSSQL fails with:
ERROR Pipeline Exception: io.airbyte.workload.launcher.pipeline.stages.model.StageError: io.airbyte.workers.exception.ResourceConstraintException: Unable to start the REPLICATION pod. This may be due to insufficient system resources. Please check available resources and try again.
message: io.airbyte.workers.exception.ResourceConstraintException: Unable to start the REPLICATION pod. This may be due to insufficient system resources. Please check available resources and try again.
stackTrace: [Ljava.lang.StackTraceElement;@4d97f9e9
    k
    • 2
    • 1
  • b

    Bhaskar Saraogi

    08/27/2025, 11:45 AM
    Copy code
    Encountered an issue deploying Airbyte:
                Pod: airbyte-db-0.185f9b6036a0188d
                Reason: BackOff
                Message: Back-off restarting failed container airbyte-db-container in pod airbyte-db-0_airbyte-abctl(1b583244-e354-4cb4-8a4a-cef58ce8a620)
                Count: 141
                Logs: chown: /var/lib/postgresql/data/pgdata: Operation not permitted
              chmod: /var/lib/postgresql/data/pgdata: Operation not permitted
              The files belonging to this database system will be owned by user "postgres".
              This user must also own the server process.
    
              The database cluster will be initialized with locale "en_US.utf8".
              The default database encoding has accordingly been set to "UTF8".
              The default text search configuration will be set to "english".
    Has anyone encountered this before ?
    k
    • 2
    • 1
  • y

    Yuvaraj Prem Kumar

    08/27/2025, 3:04 PM
Airbyte deployment on EKS via Helm: under Applications, my Client ID is blank. What setting am I missing?
  • a

    Anil Thapa

    08/27/2025, 4:46 PM
Hello Team, when I try to test the Google Sheets builder with a particular Google Sheet, hitting "Ctrl+Enter" as instructed doesn't perform a test.
  • n

    Nandhika Prayoga

    08/27/2025, 4:54 PM
Hi guys, I was trying to sync my connection from MySQL to ClickHouse, but it ends up getting stuck. Here are the latest logs:
    Copy code
    2025-08-27 23:39:31 info APPLY Stage: BUILD — (workloadId=5ddf8bf2-29ee-4940-bd43-955f9582ccc6_4_0_sync)
    2025-08-27 23:39:31 info APPLY Stage: CLAIM — (workloadId=5ddf8bf2-29ee-4940-bd43-955f9582ccc6_4_0_sync)
    2025-08-27 23:39:31 info Claimed: true for 5ddf8bf2-29ee-4940-bd43-955f9582ccc6_4_0_sync via API for 0ee4e467-c41b-4d73-b67b-0af4710a806e
    2025-08-27 23:39:31 info APPLY Stage: LOAD_SHED — (workloadId=5ddf8bf2-29ee-4940-bd43-955f9582ccc6_4_0_sync)
    2025-08-27 23:39:31 info APPLY Stage: CHECK_STATUS — (workloadId=5ddf8bf2-29ee-4940-bd43-955f9582ccc6_4_0_sync)
    2025-08-27 23:39:31 info No pod found running for workload 5ddf8bf2-29ee-4940-bd43-955f9582ccc6_4_0_sync
    2025-08-27 23:39:31 info APPLY Stage: MUTEX — (workloadId=5ddf8bf2-29ee-4940-bd43-955f9582ccc6_4_0_sync)
    2025-08-27 23:39:31 info Mutex key: 5ddf8bf2-29ee-4940-bd43-955f9582ccc6 specified for workload: 5ddf8bf2-29ee-4940-bd43-955f9582ccc6_4_0_sync. Attempting to delete existing pods...
    2025-08-27 23:39:31 info Existing pods for mutex key: 5ddf8bf2-29ee-4940-bd43-955f9582ccc6 deleted.
    2025-08-27 23:39:31 info APPLY Stage: LAUNCH — (workloadId=5ddf8bf2-29ee-4940-bd43-955f9582ccc6_4_0_sync)
    2025-08-27 23:39:31 info flag context: Multi(contexts=[Connection(key=5ddf8bf2-29ee-4940-bd43-955f9582ccc6)])
    2025-08-27 23:39:31 info [initContainer] image: airbyte/workload-init-container:1.6.6 resources: ResourceRequirements(claims=[], limits={memory=4Gi, cpu=3}, requests={memory=2Gi, cpu=2}, additionalProperties={})
    2025-08-27 23:39:31 info Launching replication pod: replication-job-4-attempt-0 with containers:
    2025-08-27 23:39:31 info [source] image: airbyte/source-mysql:3.50.6 resources: ResourceRequirements(claims=[], limits={memory=4Gi, cpu=3}, requests={memory=1Gi, cpu=1}, additionalProperties={})
    2025-08-27 23:39:31 info [destination] image: airbyte/destination-clickhouse:2.0.13 resources: ResourceRequirements(claims=[], limits={memory=4Gi, cpu=3}, requests={memory=1Gi, cpu=1}, additionalProperties={})
    2025-08-27 23:39:31 info [orchestrator] image: airbyte/container-orchestrator:1.6.6 resources: ResourceRequirements(claims=[], limits={memory=4Gi, cpu=3}, requests={memory=2Gi, cpu=2}, additionalProperties={})
Context:
• It's my first time running Airbyte locally and setting up a connection
• Airbyte chart version: 1.6.6, installed with the command:
sudo abctl local install --chart-version 1.6.6 --insecure-cookies
• I was installing 1.8.x (latest) but got a 504 error when trying to set up a source or destination, so I downgraded to a version I believe is more stable, which is 1.6.6
• Running on a GCP VM instance (Debian 12)
• My resources have plenty of headroom to spawn other pods; take a look at this picture of the resources
Help me figure out the issue, guys 🙏
    ✅ 1
    p
    • 2
    • 3