# ask-community-for-troubleshooting
  • Gaurav Jain (11/22/2025, 9:26 AM)
    @kapa.ai What do you store in this folder root@airbyte-prod-new:~/.airbyte/abctl/data/airbyte-volume-db/pgdata/base/16384#
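    For context, airbyte-volume-db is the volume behind the Postgres database bundled with abctl (Airbyte's configuration and job history), and pgdata/base/<OID> is standard PostgreSQL internals: each subdirectory of base/ holds the on-disk files (tables, indexes) of one database, named by that database's OID. Below is a minimal sketch that maps the 16384 directory to a database name; the connection details are hypothetical and should be swapped for your own.
    ```python
    import psycopg2  # assumption: psycopg2 is installed and the Airbyte database is reachable from here

    # Hypothetical connection details: point these at the abctl-managed Postgres.
    conn = psycopg2.connect(host="localhost", port=5432, dbname="postgres",
                            user="airbyte", password="airbyte")
    with conn, conn.cursor() as cur:
        # pg_database maps each database's OID to its name; base/<OID> on disk
        # holds that database's tables and indexes.
        cur.execute("SELECT oid, datname FROM pg_database ORDER BY oid;")
        for oid, datname in cur.fetchall():
            print(oid, datname)
    conn.close()
    ```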
  • Bogdan (11/24/2025, 9:53 AM)
    Hi everyone, I'm using Airbyte for the first time via abctl, and I've encountered an issue with sync parallelism. It seems that each sync is waiting for the previous one to complete before starting, which is slowing down the process. Is there a way to configure Airbyte to allow multiple syncs to run in parallel?
  • Samy-Alexandre LICOUR (11/24/2025, 11:50 AM)
    Hello, I am trying to install Airbyte with abctl, with logs stored in a GCP bucket. I am running:
    sudo abctl local install --secret secrets.yaml --values values.yaml
    Here are my values and secrets files. values.yaml:
    global:
      storage:
        type: "GCS"
        secretName: airbyte-config-secrets
        bucket: # GCS bucket names that you've created. We recommend storing the following all in one bucket.
          log: dev-airbyte-abctl-logs-
          state: dev-airbyte-abctl-logs
          workloadOutput: dev-airbyte-abctl-logs
        gcs:
          projectId: source-plm-dev
          credentialsJsonPath: /secrets/gcs-log-creds/gcp.json
      jobs:
        resources:
          ## Example:
           requests:
              memory: 8Gi
              cpu: 2
          # -- Job resource requests
          #requests: {}
          ## Example:
           limits:
              cpu: 4
              memory: 16Gi
          # -- Job resource limits
          #limits: {}
      auth:
        enabled: false
    secrets.yaml:
    apiVersion: v1
    kind: Secret
    metadata:
      name: airbyte-config-secrets
    type: Opaque
    stringData:
      gcp.json: "..."
    I am getting this error:
    Caused by: java.io.IOException: Is a directory
    There seems to be a problem with credentialsJsonPath: /secrets/gcs-log-creds/gcp.json even though I followed the documentation. Thanks for your help!
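    For reference, java.io.IOException: Is a directory usually means the process opened a directory where it expected a file, for example when the mounted secret's key name doesn't line up with the final segment of credentialsJsonPath, so the path resolves to a directory inside the volume. Below is a diagnostic sketch with the Kubernetes Python client; the airbyte-abctl namespace, the secret name from the values.yaml above, and the app.kubernetes.io/name=server label are assumptions to adjust to your install.
    ```python
    from kubernetes import client, config

    # Assumptions: abctl installs into the "airbyte-abctl" namespace, the secret is
    # named "airbyte-config-secrets", and the server pod carries the
    # app.kubernetes.io/name=server label. Adjust these to match your install.
    config.load_kube_config()  # pass config_file=... if your abctl kubeconfig is not in the default location
    v1 = client.CoreV1Api()

    sec = v1.read_namespaced_secret("airbyte-config-secrets", "airbyte-abctl")
    print("secret keys:", list((sec.data or {}).keys()))  # expect "gcp.json" to appear here

    # See where the server containers actually mount the credentials volume.
    pods = v1.list_namespaced_pod("airbyte-abctl", label_selector="app.kubernetes.io/name=server")
    for pod in pods.items:
        for c in pod.spec.containers:
            for vm in c.volume_mounts or []:
                print(pod.metadata.name, c.name, vm.name, "->", vm.mount_path)
    ```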
  • kapa.ai (11/25/2025, 2:55 PM)
    You’re connected to kapa.ai, an assistant focused specifically on Airbyte. Your last message only says “Hi guys,” without a question or problem description. If you have an Airbyte-related question (connectors, deployments, errors, features, etc.), please share the details and I’ll help based on the available Airbyte docs, GitHub issues, and forum posts.
  • Mihály Dombi (11/25/2025, 2:55 PM)
    Hello Team! I'm trying to add a new stream to a connector where the retrieval type is Asynchronous Job. There is a creation endpoint that returns a Location header (a URL containing a report_task_id) that can be polled to check the job's status. The problem is that this Location is only in the response header and the response has no payload, so _get_creation_response_interpolation_context always fails. Also, in the Connector Builder the HTTP Response Format section has no option to indicate that the response has no body. Is there a workaround for this, or should the CDK be extended with this option? The report creation endpoint in question: https://developer.ebay.com/api-docs/sell/marketing/resources/ad_report_task/methods/createReportTask#request.dateFrom
  • Lucas Segers (11/25/2025, 2:56 PM)
    Hey guys, is anyone with a "Plain-text" HTTP Requester in a custom Builder connector facing issues after 2.0? I find it hard to believe that I'm the only one with broken PlainText streams after upgrading, so I must be missing something? 😛
  • Dyllan Pascoe (11/25/2025, 9:04 PM)
    Greetings. Is there a way to delete a builder connector from my organization once it's been published?
  • Rob Kwark (11/26/2025, 1:39 AM)
    I am trying to make a create-connection API call for MySQL -> S3 on OSS. The same flow works for my other connections (Snowflake, Postgres). However, when I make the basic POST request { "name": "badump1", "sourceId": "sourceUUID", "destinationId": "destUUID", "schedule": { "scheduleType": "manual" } }, I get { "status": 500, "type": "https://reference.airbyte.com/reference/errors", "title": "unexpected-problem", "detail": "An unexpected problem has occurred. If this is an error that needs to be addressed, please submit a pull request or github issue.", "documentationUrl": null, "data": { "message": "Something went wrong in the connector. logs:Workload failed, source: airbyte_platform" } }. This only happens with the MySQL connection.
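    For reference, a minimal sketch of the same call with Python requests that surfaces the full error body; creating a connection typically triggers schema discovery on the source, so the wrapped message likely points at the MySQL source's discover step. The base URL and bearer token are assumptions to adjust for your deployment.
    ```python
    import requests

    # Assumptions: self-hosted Airbyte serving the public API at /api/public/v1 and a
    # bearer token that works for your deployment; adjust base URL and auth as needed.
    BASE = "http://localhost:8000/api/public/v1"
    HEADERS = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

    payload = {
        "name": "badump1",
        "sourceId": "<mysql-source-uuid>",
        "destinationId": "<s3-destination-uuid>",
        "schedule": {"scheduleType": "manual"},
    }
    resp = requests.post(f"{BASE}/connections", json=payload, headers=HEADERS, timeout=120)
    # Print the full body: the wrapped message above reports a source-side workload
    # failure, so the response detail (and the MySQL source's job logs) are the next place to look.
    print(resp.status_code)
    print(resp.text)
    ```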
  • Kevin O'Keefe (11/26/2025, 4:00 AM)
    Hello Airbyte community. I am new here and have set up our first Airbyte Community edition deployment. Do you have any suggestions for optimizing the environment that you discovered while using the product? I am on version 1.9.1 of the Helm deployment to K8s using ArgoCD, and things are running well at the moment. If anyone is willing to share lessons from their own setup and its growing pains, please let me know. Thank you for your attention and participation.
  • JadperNL (11/26/2025, 9:03 AM)
    Hi, are there any methods to install Airbyte without Kubernetes? None of our systems can run Kubernetes; we currently use Docker Compose for all our other deployments.
  • Isaac Steele (11/26/2025, 3:45 PM)
    I am getting different responses from requests.get() and the python airbyte-api when reading my source configuration information. For example, when I run a GET on ...v1/sources, my sources have connection information like configuration":{"auth_type":"Client","client_id":"myclient123456789","is_sandbox":true,"client_secret":"**********","refresh_token":"**********","streams_criteria":[],"stream_slice_step":"P30D","force_use_bulk_api":false}. But when I use the python API get_source() or list_sources(), the configuration returned is just configuration=SourceAirtable(credentials=None, SOURCE_TYPE=<SourceAirtableAirtable.AIRTABLE: 'airtable'>). This example is from my Salesforce connector, but the same is true for my other connectors as well; they all return the Airtable model for configuration. Is this a python-api bug? How can I get/view my masked source configuration data from the python API?
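    For reference, one plausible explanation is that the typed SDK deserializes configuration into a per-source model and drops fields it can't map, while the raw endpoint returns the full (masked) configuration. Below is a minimal sketch of reading it with plain requests against the public API's GET /sources/{sourceId}; the base URL and token are assumptions to adjust for your deployment.
    ```python
    import requests

    # Assumptions: public API served at /api/public/v1 with a bearer token; swap in the
    # base URL and auth scheme that match your deployment.
    BASE = "http://localhost:8000/api/public/v1"
    HEADERS = {"Authorization": "Bearer <token>"}

    source_id = "<source-uuid>"
    resp = requests.get(f"{BASE}/sources/{source_id}", headers=HEADERS, timeout=30)
    resp.raise_for_status()
    cfg = resp.json().get("configuration", {})
    # Secrets come back masked ("**********"), but the non-secret fields are all
    # visible here, unlike the typed SDK models.
    for key, value in cfg.items():
        print(key, "=", value)
    ```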
  • Nikita (11/26/2025, 4:36 PM)
    Hello! I'm trying to set up an export from GA4 to ClickHouse. This is a screenshot from the traffic_sources table. Is there a way to also include the time in the endDate and startDate columns? Thanks!
  • Francis Carr (11/26/2025, 5:00 PM)
    Hey all. I am experiencing timeout issues when transferring data from Snowflake. I am on the official Snowflake source airbyte/source-snowflake, version 1.0.8. The source table in Snowflake is very large, and I seem to be hitting a credentials timeout of around 6 hours. What seems to be happening:
    1. Airbyte queries all data from the Snowflake table, and this is staged internally in an S3 bucket.
    2. Airbyte then transfers this data out of the bucket piece by piece, sending it from the source to the destination (BigQuery).
    3. After 6 hours or so, sometimes longer, we run into a 403 Forbidden while reading the data, and the job runs another attempt.
       a. After 5 attempts of around 6 hours each, it gives up and reports a failure.
       b. When the job starts again, it begins from the very beginning; even though a lot of records have been written to the BigQuery destination, it doesn't seem to save how far it got in the source.
    The sync mode is Incremental | Append with a date as the cursor. Has anyone had similar experiences with this type of Snowflake timeout? It sounds like something that could be configured on the Snowflake side, but we aren't sure which setting would control it. There is documentation around the 6-hour timeout in Snowflake's "Using Persisted Query Results", but it doesn't say much beyond "A new token can be retrieved to access results while they are still in cache." We are also not sure what we can do on our side, as this seems to be internal to the JDBC driver that reads from Snowflake. Maybe there is a JDBC query param we could add? We know we can break up the data, and there are lots of workarounds like that, but they avoid the core issue, which can arise again. Any recommendations would be greatly appreciated! 🙏
  • Alex Tasioulis (11/26/2025, 5:27 PM)
    hi, I am exporting from MongoDB into Snowflake, and everything works fine for smaller tables. Some tables in MongoDB are a bit too big, and I am worried they will take around 2 days to copy over, by which point the MongoDB oplog will have expired and incremental syncing would fail. Is that what happens if the initial sync takes longer than the oplog window, or does Airbyte handle it gracefully? If not, what can I do to deal with it?
  • Baudilio García Hernández (11/26/2025, 5:54 PM)
    185 - Connector entitlement not available. actorDefinitionId=c0b24000-d34d-b33f-fea7-6b96dc0e5f0d organizationId=OrganizationId(value=00000000-0000-0000-0000-000000000000)
  • Chanakya Pendem (11/27/2025, 8:46 AM)
    Hi, I am working on an API flow with which I can create a source, but I need to build the connector first. Is there a way to do the build via an API? The publish API gives a sourceDefinitionId as output, which we can then use to create a source.
  • Linas Leščinskas (11/27/2025, 12:51 PM)
    hello! I was running my Airbyte instance hosted on a GCP VM successfully for a year, until the certs expired and the syncs stopped running. To renew the certs, I reinstalled Airbyte with sudo abctl local uninstall and sudo abctl local install. In the process Airbyte was upgraded from 0.22x to 0.3x. However, I still cannot run any syncs. When I run a sync, it just hangs indefinitely, generating zero logs. When I try to replicate the connection, I encounter "Airbyte is temporarily unavailable. Please try again. (HTTP 502)" at the Select streams stage. I tried kicking the workload launchers and restarting the cluster, but this yields no tangible results. P.S. I have made a machine image, so I can start anew on a fresh VM (with expired certs). Any guidance appreciated!
  • Fernand Ramat (11/27/2025, 7:29 PM)
    Hello, I am trying to run a historical data load of one quite huge collection (more than 50 million rows) from my Mongo cluster, but at some point the sync fails. I am trying to refresh from there, but got the following message on attempts 3, 4, 5 ... 7:
    io.airbyte.cdk.TransientErrorException: Input was fully read, but some streams did not receive a terminal stream status message. If the destination did not encounter other errors, this likely indicates an error in the source or platform. Streams without a status message: [import_mongodb_hh_production_product2_v2 ......]
    I am not sure how I could fix this pipeline.
  • Alex Johnson (11/28/2025, 3:32 AM)
    Hi Team, I have configured the Xero connector using Airbyte Open Source on Ubuntu. The source configures correctly and appears to authenticate using OAuth. I have successfully set up the connection, and the streams sync but always return 0 records. Is there a fix for this?
  • Alex Johnson (11/28/2025, 5:28 AM)
    Zoho Inventory Connector - No Organisation ID? Hey Guys, Does anyone know why the Zoho Inventory connector does not use the organisation ID? I use multiple organisations so this is a required variable in all API calls. Is there a plan to include this in the future? Thanks! Alex
  • Prabhu Agarwal (11/28/2025, 8:53 AM)
    Hello, I am facing an issue while using a custom decoder component. I have set up the custom component in the Builder UI and use it for one of the streams. When I run that stream, it works fine in Connector Builder UI 2.0, but the rest of the streams, which do not use the custom component, fail with an error when running them in the Builder UI: Could not load module source_declarative_manifest.components. I have checked the other streams for source_declarative_manifest traces, but that's not the case. Not sure why this error is coming up. How can I fix this issue? Any leads would be highly appreciated. Thanks
  • Leon Kozlowski (11/29/2025, 4:09 PM)
    I'm using RDS as the external database for deploying self-hosted Airbyte. Are there any minimum requirements for the external database? I'm running a t3.large, my CPU is pinned at 100%, and Temporal is holding 150 idle connections. Does anyone know the minimum RDS size I should use?
  • Sam Riggleman (11/29/2025, 5:37 PM)
    I get this far in setting up a connection for the first time, but then it always hangs. All the tests pass in the configuration screen.
  • Piyush Shakya (12/01/2025, 9:34 AM)
    Has anyone faced this kind of issue before? Init container error encountered while processing workload for id: 85283132-f3b9-470d-847f-cea2c0a57301_29938_4_check. Encountered exception of type: class io.micronaut.data.connection.jdbc.exceptions.CannotGetJdbcConnectionException. Exception message: Failed to obtain JDBC Connection.
  • Corentin Marin (12/01/2025, 10:25 AM)
    Hi everyone! My team is trying to ingest Apple Search Ads data into Snowflake using the Apple Search Ads connector; for now we are mainly interested in getting spend at the campaign, ad group, and keyword levels. We managed to create the connection between Apple Ads and Snowflake, but we are not able to recover the full spend data history; it seems we can only recover it fully for day - 1 of running the refresh. Has anyone faced this kind of issue and managed to find a way to recover the full history? Thanks for your help!
  • Nadya Niukina (12/02/2025, 6:26 AM)
    Hello, I deployed Airbyte on Kubernetes using the Helm chart and the Terraform provider (to configure sources and destinations). From what I understand based on issues #39528 and #63772, it's not possible to create a new application in the Core version (contrary to the documentation). Is there any way to rename the existing application (Default User Application)? Thanks!
  • Aswin (12/02/2025, 1:38 PM)
    Hi, could someone clarify whether the free/open-source edition of Airbyte imposes any limits on database size, data-transfer volume, or the number of connections? Additionally, are there any architectural or operational constraints that commonly affect large-scale migrations?
  • Dana Williams (12/02/2025, 9:32 PM)
    Is anyone having issues with their self-hosted Airbyte? I have every connector failing with the error: Warning from source: Check took too long. Check exceeded the timeout. The AI tool says it's a platform issue, but nothing is posted on the Airbyte status page.
  • Yuki Kakegawa (12/03/2025, 12:02 AM)
    Does anyone know how to trigger "refresh stream" on a schedule, via the API or the UI?
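    For reference, a minimal sketch of triggering a job from an external scheduler (for example cron) via the public API's POST /jobs. The base URL and token are assumptions, and whether the "refresh" job type is accepted depends on your Airbyte version, so verify it against your /jobs documentation; "sync" is the long-standing value.
    ```python
    import requests

    # Assumptions: public API at /api/public/v1 and a bearer token; run this from cron
    # or any scheduler, since the built-in connection schedule only covers regular syncs.
    BASE = "http://localhost:8000/api/public/v1"
    HEADERS = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

    CONNECTION_ID = "<connection-uuid>"
    # "sync" is the long-standing job type for POST /jobs; whether "refresh" is accepted
    # depends on your Airbyte version, so check your API docs before relying on it.
    JOB_TYPE = "refresh"

    resp = requests.post(f"{BASE}/jobs",
                         json={"connectionId": CONNECTION_ID, "jobType": JOB_TYPE},
                         headers=HEADERS, timeout=60)
    print(resp.status_code, resp.text)
    ```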