# ask-community-for-troubleshooting
  • a

    Abe Azam

    05/05/2025, 8:34 AM
    Hey everyone, just wanted to share my working setup for Airbyte OSS 1.6.1 since I couldn’t find explicit guidance anywhere. I’m using the latest Shopify source connector and Redshift destination. After testing a few EC2 instance types, I found that m3.2xlarge is the smallest instance that worked reliably for a reasonably large Shopify account. I tried starting from m3.medium, but only got it running successfully on the m3.2xlarge. Hope this helps anyone running into similar resource issues!
  • a

    Antoine Roncoroni

    05/05/2025, 9:39 AM
    Hi everyone, I have Airbyte 1.3.0 deployed using abctl on an EC2 instance. I have an S3 (4.8.4) <> Snowflake (3.11.8) connection, which reads JSONL files. It:
    • runs in ~5 minutes in Full Refresh mode
    • fails with a "Terminating due to java.lang.OutOfMemoryError: Java heap space" error and runs endlessly in the UI in Incremental mode
    I see that the airbyte_internal table does have data, but not the final table. I tried tuning CPU/memory limits/requests at the connection level, which didn't work. Any idea how I can fix this?
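    For anyone landing on this with the same heap-space error: besides connection-level overrides, the job containers' default memory can also be raised globally through the Helm values that abctl consumes. A minimal sketch only, assuming the chart exposes global.jobs.resources (key names vary between chart versions, so verify against your values schema; the sizes are illustrative, not recommendations):
    # values.yaml passed to abctl (sketch, not a verified config)
    global:
      jobs:
        resources:
          requests:
            memory: 2Gi
          limits:
            memory: 4Gi   # more heap for source/destination job pods
    Re-applied with something like abctl local install --values values.yaml (check abctl --help for the exact flag on your version).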
  • y

    Yasser Osama

    05/05/2025, 10:05 AM
    Hello everyone, we are using a MySQL connector with CDC replication to our data warehouse, but we are getting this error:
    Copy code
    Failure in source: Incumbent CDC state is invalid, reason: Saved offset no longer present on the server, please reset the connection, and then increase binlog retention and/or increase sync frequency. Connector last known binlog file mysql-bin-changelog.001213 is not found in the server. Server has [mysql-bin-changelog.001406, mysql-bin-changelog.001407, mysql-bin-changelog.001408].
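    The mysql-bin-changelog.* file names usually indicate AWS RDS/Aurora MySQL. If that is the case here, a hedged sketch of how binlog retention is typically checked and raised there (72 hours is only an illustrative value); the connection reset the error asks for is still required, since the old offset is already gone from the server:
    -- show the current binlog retention setting on RDS/Aurora MySQL
    CALL mysql.rds_show_configuration;
    -- keep binlogs long enough that the saved CDC offset survives between syncs
    CALL mysql.rds_set_configuration('binlog retention hours', 72);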
  • e

    Emmanuel Emussu

    05/05/2025, 10:32 AM
    Hi all, I've installed Airbyte locally using abctl. My MySQL to MySQL connection is set up, however it takes too long (24+ hours) to sync tables about 5 GB in size. Is there anything that can be done to speed it up? Is it a memory issue? How do I check and increase memory if needed? I have over 128 GB available.
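    One hedged way to check whether memory is actually the bottleneck: abctl runs everything inside a kind node that is itself a Docker container, so host-level Docker tooling shows how much of the 128 GB is really being used. The container name below is the usual abctl default and is an assumption; check docker ps for the real name:
    # find the kind node container created by abctl
    docker ps
    # live CPU / memory usage of that container (name is an assumption)
    docker stats airbyte-abctl-control-plane
    If usage sits far below the available RAM, raising the job containers' memory through a values.yaml (as sketched under the OutOfMemoryError message above) is usually the more relevant lever than adding host memory.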
  • a

    Atharva Pandit (AP)

    05/05/2025, 5:07 PM
    Hi all, I am using Airbyte Cloud, and I just wanted to know if there are any limits to how many connections I can sync in parallel on the account. Any help would be appreciated!!!
  • s

    Sree Shanthan Kuthuru

    05/06/2025, 3:46 AM
    Hello, I am using Airbyte 1.6.1 deployed locally on an EC2 instance via abctl. The database connections are working fine, but when I try to set up Google Search Console as a source via Service Account Key authentication, it shows the error below.
    Copy code
    Sorry, something went wrong.
    Minified React error #185; visit <https://reactjs.org/docs/error-decoder.html?invariant=185> for the full message or use the non-minified dev environment for full errors and additional helpful warnings.
    Copy code
    // Error
    {}
    
    
    Error: Minified React error #185; visit 
    <https://reactjs.org/docs/error-decoder.html?invariant=185> for the full message or use the non-minified dev environment for full errors and additional helpful warnings.
        at vs (<http://10.192.20.8:8000/assets/core-lwvfr41579.js:41:33862>)
        at wse (<http://10.192.20.8:8000/assets/core-lwvfr41579.js:39:24436>)
        at Object.<anonymous> (<http://10.192.20.8:8000/assets/core-lwvfr41579.js:291:75880>)
        at Object.registerOption (<http://10.192.20.8:8000/assets/core-lwvfr41579.js:284:13902>)
        at <http://10.192.20.8:8000/assets/core-lwvfr41579.js:291:82640>
        at hx (<http://10.192.20.8:8000/assets/core-lwvfr41579.js:41:24296>)
        at FD (<http://10.192.20.8:8000/assets/core-lwvfr41579.js:41:31641>)
        at HH (<http://10.192.20.8:8000/assets/core-lwvfr41579.js:41:31497>)
        at HH (<http://10.192.20.8:8000/assets/core-lwvfr41579.js:41:31405>)
        at HH (<http://10.192.20.8:8000/assets/core-lwvfr41579.js:41:31405>)
  • d

    Darko Macoritto

    05/06/2025, 7:35 AM
    Hello dear community. I am trying to sync some data from Postgres to BigQuery. At the moment, I have tried the sync a few times, but it has never managed to finish. The amount of data I want to sync is pretty large (around 1m rows, 13 columns). The sync takes hours and never finishes. During my last sync I got this at the end of my logs:
    Copy code
    2025-05-05 20:22:35 destination INFO pool-3-thread-1 i.a.c.i.d.a.b.BufferManager(printQueueInfo):94 [ASYNC QUEUE INFO] Global: max: 1.5 GB, allocated: 10 MB (9.99921989440918 MB), %% used: 0.006509908785422643 | Queue `profiles_buyer_properties`, num records: 0, num bytes: 0 bytes, allocated bytes: 0 bytes | State Manager memory usage: Allocated: 9 MB, Used: -818 bytes, percentage Used -7.801664520414133E-5
    2025-05-05 20:22:35 destination INFO pool-6-thread-1 i.a.c.i.d.a.FlushWorkers(printWorkerInfo):129 [ASYNC WORKER INFO] Pool queue size: 0, Active threads: 0
    2025-05-05 20:38:38 replication-orchestrator INFO thread status... heartbeat thread: true , replication thread: false
    2025-05-05 20:38:38 replication-orchestrator INFO Do not terminate as feature flag is disable
    I came back at 8 am this morning and the sync was still running. Importantly, Airbyte is self-hosted and runs on a machine with the following infrastructure: 8 GB RAM / 4 CPU. This is a good machine according to the official Airbyte documentation. So here are my questions:
    1. Do you see any solution which could make this work without upgrading my machine?
    2. If 1. does not work, maybe I could load the existing data using another solution (such as a Python script) and just add the new rows through Airbyte (I need Incremental | Append + Deduped). This would dramatically reduce the amount of data to deal with per sync. Has anyone tried this? Is it possible?
    3. Any other idea is welcome.
    Please let me know if you want to look at the full logs. Thanks for your help.
  • a

    Azfer Pervaiz

    05/06/2025, 8:38 AM
    Hi everyone, I'm encountering an issue with the CurrencyType standard/setup object of Salesforce. The connection was built successfully, but the data is not syncing, even though it creates the Airbyte internal table and the normalised table in the destination correctly. Has anyone experienced this issue?
  • a

    Antoine Roncoroni

    05/06/2025, 10:02 AM
    Hey! Is it possible to refresh multiple streams at once? Currently, it looks like we can either:
    • refresh the entire connection
    • or refresh 1 stream at a time
    Since the "refresh 1 stream at a time" option triggers a full sync of the connection, I'd prefer not to trigger 10 separate syncs just to refresh 10 streams. Is there a way to batch-refresh specific streams? Thanks!
  • n

    Namratha D

    05/06/2025, 10:06 AM
    I need to get data from a public API (with an API key) using the pre-built source connectors in Airbyte OSS, without building a custom connector. How is this possible?
  • a

    Alberto

    05/06/2025, 11:34 AM
    Trying to use the API to create a connection with a Google Drive source and an S3 destination, and received this:
    Copy code
    {
      "message": "Internal Server Error: Unable to connect to ab-redis-master.ab.svc.cluster.local/<unresolved>:6379",
      "exceptionClassName": "io.lettuce.core.RedisConnectionException",
      "exceptionStack": [],
      "rootCauseExceptionStack": []
    }
  • m

    Michael l

    05/06/2025, 1:57 PM
    Hi team, we are using Airbyte to extract data from HubSpot to BigQuery. All works well, however occasionally we see huge spikes causing terabytes of data to be processed and ramping up costs in BigQuery. One of the causes appears to be this SQL:
    Copy code
    DROP TABLE IF EXISTS `...`.`hubspot_airbyte`.`engagements_emails_ab_soft_reset`;
    
    CREATE OR REPLACE TABLE `...`.`hubspot_airbyte`.`engagements_emails_ab_soft_reset` (
    _airbyte_raw_id STRING NOT NULL,
    ....<snip>...
    
    UPDATE `...`.`airbyte_internal`.`hubspot_airbyte_raw__stream_engagements_emails` SET _airbyte_loaded_at = NULL WHERE 1=1;
    The table size is:
    • Total physical bytes: 6.02 GB
    • Active physical bytes: 6.02 GB
    However, this statement results in 4.6 TB of analysis (bytes scanned) in BigQuery.
    1. After the DROP then CREATE, why does Airbyte perform this UPDATE?
    2. Is there anything we can do to avoid this?
  • t

    Théo

    05/06/2025, 3:52 PM
    Hi everyone, I have an issue and cannot find how to fix it. I have a source-s3 that seems to work well when tested with the test button, and a destination-s3 that passes the test button too. However, when I try to set up a connection between the source and the destination, it fails with this error:
    Copy code
    2025-05-06 15:23:10,396 [pool-15-thread-1] ERROR i.a.w.i.LocalContainerAirbyteDestination(start$lambda$2):77 - io.airbyte.cdk.ConfigErrorException: Failed to initialize connector operation
    2025-05-06 15:23:10,397 [pool-15-thread-1] ERROR i.a.w.i.LocalContainerAirbyteDestination(start$lambda$2):77 - at io.airbyte.cdk.AirbyteConnectorRunnable.run(AirbyteConnectorRunnable.kt:31)
    2025-05-06 15:23:10,397 [pool-15-thread-1] ERROR i.a.w.i.LocalContainerAirbyteDestination(start$lambda$2):77 - at picocli.CommandLine.executeUserObject(CommandLine.java:2030)
    2025-05-06 15:23:10,397 [pool-15-thread-1] ERROR i.a.w.i.LocalContainerAirbyteDestination(start$lambda$2):77 - at picocli.CommandLine.access$1500(CommandLine.java:148)
    2025-05-06 15:23:10,397 [pool-15-thread-1] ERROR i.a.w.i.LocalContainerAirbyteDestination(start$lambda$2):77 - at picocli.CommandLine$RunLast.executeUserObjectOfLastSubcommandWithSameParent(CommandLine.java:2465)
    2025-05-06 15:23:10,398 [pool-15-thread-1] ERROR i.a.w.i.LocalContainerAirbyteDestination(start$lambda$2):77 - at picocli.CommandLine$RunLast.handle(CommandLine.java:2457)
    2025-05-06 15:23:10,398 [pool-15-thread-1] ERROR i.a.w.i.LocalContainerAirbyteDestination(start$lambda$2):77 - at picocli.CommandLine$RunLast.handle(CommandLine.java:2419)
    2025-05-06 15:23:10,398 [pool-15-thread-1] ERROR i.a.w.i.LocalContainerAirbyteDestination(start$lambda$2):77 - at picocli.CommandLine$AbstractParseResultHandler.execute(CommandLine.java:2277)
    2025-05-06 15:23:10,398 [pool-15-thread-1] ERROR i.a.w.i.LocalContainerAirbyteDestination(start$lambda$2):77 - at picocli.CommandLine$RunLast.execute(CommandLine.java:2421)
    2025-05-06 15:23:10,398 [pool-15-thread-1] ERROR i.a.w.i.LocalContainerAirbyteDestination(start$lambda$2):77 - at picocli.CommandLine.execute(CommandLine.java:2174)
    2025-05-06 15:23:10,398 [pool-15-thread-1] ERROR i.a.w.i.LocalContainerAirbyteDestination(start$lambda$2):77 - at io.airbyte.cdk.AirbyteDestinationRunner$Companion.run(AirbyteConnectorRunner.kt:286)
    2025-05-06 15:23:10,398 [pool-15-thread-1] ERROR i.a.w.i.LocalContainerAirbyteDestination(start$lambda$2):77 - at io.airbyte.integrations.destination.s3_v2.S3V2Destination$Companion.main(S3V2Destination.kt:16)
    2025-05-06 15:23:10,398 [pool-15-thread-1] ERROR i.a.w.i.LocalContainerAirbyteDestination(start$lambda$2):77 - at io.airbyte.integrations.destination.s3_v2.S3V2Destination.main(S3V2Destination.kt)
    2025-05-06 15:23:10,398 [pool-15-thread-1] ERROR i.a.w.i.LocalContainerAirbyteDestination(start$lambda$2):77 - Caused by: io.micronaut.context.exceptions.BeanInstantiationException: Error instantiating bean of type [io.airbyte.cdk.load.task.DefaultDestinationTaskLauncher]
    2025-05-06 15:23:10,398 [pool-15-thread-1] ERROR i.a.w.i.LocalContainerAirbyteDestination(start$lambda$2):77 -
    2025-05-06 15:23:10,399 [pool-15-thread-1] ERROR i.a.w.i.LocalContainerAirbyteDestination(start$lambda$2):77 - Path Taken: new WriteOperation(TaskLauncher taskLauncher,SyncManager syncManager) --> new WriteOperation([TaskLauncher taskLauncher],SyncManager syncManager) --> new DefaultDestinationTaskLauncher(TaskScopeProvider taskScopeProvider,[DestinationCatalog catalog],DestinationConfiguration config,SyncManager syncManager,InputConsumerTaskFactory inputConsumerTaskFactory,SpillToDiskTaskFactory spillToDiskTaskFactory,FlushTickTask flushTickTask,SetupTaskFactory setupTaskFactory,OpenStreamTaskFactory openStreamTaskFactory,ProcessRecordsTaskFactory processRecordsTaskFactory,ProcessFileTaskFactory processFileTaskFactory,ProcessBatchTaskFactory processBatchTaskFactory,CloseStreamTaskFactory closeStreamTaskFactory,TeardownTaskFactory teardownTaskFactory,FlushCheckpointsTaskFactory flushCheckpointsTaskFactory,UpdateCheckpointsTask updateCheckpointsTask,FailStreamTaskFactory failStreamTaskFactory,FailSyncTaskFactory failSyncTaskFactory,boolean fileTransferEnabled,ReservingDeserializingInputFlow inputFlow,MessageQueueSupplier<Descriptor K, Reserved<DestinationStreamEvent T> T> recordQueueSupplier,QueueWriter<Reserved<CheckpointMessageWrapped T> T> checkpointQueue,MessageQueue<FileTransferQueueMessage T> fileTransferQueue,MessageQueue<DestinationStream T> openStreamQueue)
    It seems that my destination connector is failing for whatever reason.
  • a

    Arpit Nath

    05/07/2025, 5:22 AM
    Hello, We are trying Airbyte Cloud for the first time and while testing OAuth-based source setup using the
    /v1/sources/initiateOAuth
    API, we’re consistently hitting the following error:
    Copy code
    "message": "Internal Server Error: Unable to connect to ab-redis-master.ab.svc.cluster.local/<unresolved>:6379",
    "exceptionClassName": "io.lettuce.core.RedisConnectionException"
    We are following the documented "Use Airbyte credentials to authenticate" OAuth flow, and across multiple source types, including Google Ads, Facebook Ads and Airtable, every call to initiateOAuth returns a 500 Internal Server Error with the message above.
    • We've verified our payload, redirect URI, and network conditions.
    • This seems like an internal Redis issue on Airbyte Cloud's side, not a user configuration problem.
    This has been blocking us from progressing any further with the integration. It's been several days but we haven't received a resolution yet. Can someone please help investigate this ASAP or provide a workaround? Continuation thread: https://airbytehq.slack.com/archives/C01AHCD885S/p1746262156456959 Thank you.
  • p

    Pavan Kalyan Chitturi

    05/07/2025, 6:41 AM
    Title: Airbyte 0.63.13 - High CPU Usage & docker-proxy Overload Even Without Other Projects
    Body: Hi Airbyte team, I'm running Airbyte version 0.63.13 via Docker, and I'm experiencing severe performance issues even without running any other projects on the VM.
    Issue Summary:
    • CPU usage spikes to 100% shortly after startup
    • Many docker-proxy processes are spawned (over 20)
    • The load average remains high (>5.0) even when idle
    • No other workloads are running — only Airbyte
    • VS Code Server is occasionally used but is not the root cause
    VM Environment:
    • OS: Ubuntu 24.04
    • vCPUs: 4
    • RAM: 16 GB
    • Disk: 125 GB SSD
    • Docker Version: 27.2.0
    • Airbyte Version: 0.63.13
    • Deployment: Docker (not Kubernetes)
    Troubleshooting Done:
    • Checked logs from airbyte-server, airbyte-worker, airbyte-temporal, and airbyte-proxy — no clear error messages
    • Confirmed that even basic sync jobs or idle time cause high CPU
    • Confirmed that .vscode-server is not the culprit
    • Cleaned up old containers, images, and volumes
    • Static ports are defined in docker-compose to avoid random port binding
    Request:
    • Is this a known performance issue in Airbyte 0.63.13?
    • Is my VM underpowered, or is there a way to tune Airbyte for low-resource environments?
    • Any recommended flags/configs to reduce CPU load (e.g., Temporal tuning, worker limits)?
    Thanks in advance!
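    Not an official tuning recipe, but one generic Docker-level lever for this kind of runaway load is capping CPU and memory per service in a Compose override, so a single container cannot saturate the whole VM. A sketch only; the service names are the ones mentioned in the message above and the limits are illustrative, not recommendations:
    # docker-compose.override.yml (sketch)
    services:
      airbyte-worker:
        cpus: 2.0       # cap the worker at 2 of the 4 vCPUs
        mem_limit: 4g
      airbyte-temporal:
        cpus: 1.0
        mem_limit: 2g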
  • d

    Dheeraj Soni

    05/07/2025, 8:40 AM
    Hello team, I'm trying to create a custom connector for an HTTP endpoint and encountering a
    400 Bad Request
    error. I've verified the base URL, URL path, and authentication credentials — all are correct. I'm able to successfully fetch the response using Postman, but it fails during testing in Airbyte. Interestingly, with the same base URL, when I use the URL path
    /api/v2/incremental/tickets/cursor
    , it works. However, it fails with
    /api/v2/ticket_audits.json
    . The stack trace of the error is included in the snippet below.
  • p

    Piotr Strugacz

    05/07/2025, 12:19 PM
    Hello team, we have a critical production issue with a cloud deployment of Airbyte while trying to sync data using the airbyte/destination-redshift 3.5.3 connector. The sync process seems to be failing due to the heartbeat functionality timing out for the Redshift destination. Redshift is configured to run on ra3.large nodes, if that matters. I would greatly appreciate it if anyone has an idea how to either disable the heartbeat for this connector or change its timeout settings in the cloud environment. The error:
    Copy code
    INFO Failures: [ {
      "failureOrigin" : "airbyte_platform",
      "internalMessage" : "Workload Heartbeat Error",
      "externalMessage" : "Workload Heartbeat Error",
      "metadata" : {
        "attemptNumber" : 0,
        "jobId" : 35754981
      },
      "stacktrace" : "io.airbyte.workers.exception.WorkloadHeartbeatException: Workload Heartbeat Error\n\tat io.airbyte.container.orchestrator.worker.WorkloadHeartbeatSender.checkIfExpiredAndMarkSyncStateAsFailed(WorkloadHeartbeatSender.kt:117)\n\tat io.airbyte.container.orchestrator.worker.WorkloadHeartbeatSender.sendHeartbeat(WorkloadHeartbeatSender.kt:55)\n\tat io.airbyte.container.orchestrator.worker.WorkloadHeartbeatSender$sendHeartbeat$1.invokeSuspend(WorkloadHeartbeatSender.kt)\n\tat kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)\n\tat kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:100)\n\tat kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:586)\n\tat kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:829)\n\tat kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:717)\n\tat kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:704)\nCaused by: io.airbyte.container.orchestrator.worker.io.DestinationTimeoutMonitor$TimeoutException: Last action 5 minutes 35 seconds ago, exceeding the threshold of 3 minutes.\n\tat io.airbyte.container.orchestrator.worker.WorkloadHeartbeatSender.sendHeartbeat(WorkloadHeartbeatSender.kt:58)\n\t... 7 more\n",
      "timestamp" : 1746612535019
    }, {
      "failureOrigin" : "replication",
      "internalMessage" : "No exit code found.",
      "externalMessage" : "Something went wrong during replication",
      "metadata" : {
        "attemptNumber" : 0,
        "jobId" : 35754981
      },
      "stacktrace" : "java.lang.IllegalStateException: No exit code found.\n\tat io.airbyte.container.orchestrator.worker.io.ContainerIOHandle.getExitCode(ContainerIOHandle.kt:101)\n\tat io.airbyte.container.orchestrator.worker.io.LocalContainerAirbyteSource.getExitValue(LocalContainerAirbyteSource.kt:89)\n\tat io.airbyte.container.orchestrator.worker.DestinationWriter.run(DestinationWriter.kt:35)\n\tat io.airbyte.container.orchestrator.worker.DestinationWriter$run$1.invokeSuspend(DestinationWriter.kt)\n\tat kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)\n\tat kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:100)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)\n\tat java.base/java.lang.Thread.run(Thread.java:1583)\n",
      "timestamp" : 1746612699270
    } ]
  • s

    Sabrina Lanzotti

    05/07/2025, 12:49 PM
    Hello team, I'm having issues connecting a public S3 bucket with the Airbyte S3 connector:
    Copy code
    2025-05-07 09:43:46 info 
    2025-05-07 09:43:46 info ----- START CHECK -----
    2025-05-07 09:43:46 info 
    2025-05-07 09:43:48 info Connector exited, processing output
    2025-05-07 09:43:48 info Output file jobOutput.json found
    2025-05-07 09:43:48 info Connector exited with exit code 0
    2025-05-07 09:43:48 info Reading messages from protocol version 0.2.0
    2025-05-07 09:43:48 error Check failed
    2025-05-07 09:43:48 info Checking for optional control message...
    2025-05-07 09:43:48 info Optional control message not found. Skipping...
    2025-05-07 09:43:48 info Writing output of 69589781-7828-43c5-9f63-8925b1c1ccc2_f681199d-63e9-41c2-95d7-1bd15173b340_0_check to the doc store
    2025-05-07 09:43:48 info Marking workload 69589781-7828-43c5-9f63-8925b1c1ccc2_f681199d-63e9-41c2-95d7-1bd15173b340_0_check as successful
    2025-05-07 09:43:49 info 
    2025-05-07 09:43:49 info Deliberately exiting process with code 0.
    2025-05-07 09:43:49 info ----- END CHECK -----
    2025-05-07 09:43:49 info
    I've already checked and the files are accessible without any credentials... is there a particular way to set this up?
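    A quick, hedged sanity check from the machine (or at least the network) where Airbyte runs, to confirm the bucket really allows anonymous access; the bucket name and prefix are placeholders:
    # list the bucket without signing the request with any credentials
    aws s3 ls s3://your-public-bucket/your/prefix/ --no-sign-request
    If this fails from that machine, the problem is bucket policy or networking rather than the connector configuration.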
  • a

    Abhishek Kumar

    05/07/2025, 1:33 PM
    Hi everyone, I'm looking for the GitHub commit ID/tag for the 1.5.1 version, as I can't find it on the Airbyte GitHub. This is needed for our internal org security vulnerability evaluation. Can someone help point me to it?
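    In case it helps while waiting for an answer: the platform and the connector monorepo are separate repositories, so the tag may simply live in a different repo than the one being searched (that split is an assumption worth verifying). A quick way to list matching tags without cloning:
    # platform repository
    git ls-remote --tags https://github.com/airbytehq/airbyte-platform.git | grep 1.5.1
    # connector monorepo, for comparison
    git ls-remote --tags https://github.com/airbytehq/airbyte.git | grep 1.5.1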
  • f

    Fabian Boerner

    05/07/2025, 2:13 PM
    Hi, what can I do when I'm getting this error: "Warning from source: Refresh Schema took too long."? It runs for 2 h 39 min and then the sync crashes. Can I disable refreshing the schema on every sync?
  • d

    Durim Gashi

    05/07/2025, 2:17 PM
    Hey everyone, I am getting the following error on my Postgres -> Redshift connections. I have two connections, one syncs less than 1 MB of data and the other syncs ~15 MB. They have been working great so far, however starting today I am getting this error. I would appreciate any insights you might have on this. Thanks.
    message='Airbyte could not track the sync progress. Sync process exited without reporting status.', type='io.airbyte.workers.exception.WorkloadMonitorException', nonRetryable=false
  • m

    Moe Hein Aung

    05/07/2025, 2:44 PM
    Hi everyone, I started seeing this error on my Airbyte server hosted on K8s (EKS)
    Workload failed, source: workload-monitor-heartbeat Airbyte could not track the sync progress. Sync process exited without reporting status
    From researching online, this seems to be because the storage for MinIO logs ran out of space. As suggested in this GitHub issue, I set about deleting and re-creating the MinIO pod and the PVs attached to it as a fix. I started by running the following:
    Copy code
    kubectl get statefulset -n airbyte
    kubectl get pvc -n airbyte
    kubectl get pv -n airbyte
    Then deleted them all successfully:
    Copy code
    kubectl delete statefulset airbyte-minio -n airbyte
    kubectl delete pvc airbyte-minio-pv-claim-airbyte-minio-0 -n airbyte
    kubectl delete pv pvc-26a02143-f688-4674-a49f-1335e8c74cca
    In the process I also did helm repo update to update from 1.5.1 to 1.6.1 then tried helm upgrade:
    Copy code
    helm upgrade --install airbyte airbyte/airbyte --version 1.6.1 -n airbyte -f values.yaml --debug
    However, it gets stuck with "Pod airbyte-minio-create-bucket running" and the MinIO pod does not get created. This is part of my Helm values:
    Copy code
    # for v1.6.1
    logs:
      storage:
        type: minio
    
      minio:                 
        enabled: true
        persistence:
          enabled: true
          size: 20Gi
          storageClass: gp2
    
    # old for v1.5.1 commented out
    # minio:
    #   persistence:
    #     enabled: true
    #     size: 20Gi
    #     storageClass: "gp2"
    Can anyone please help with how I can re-create my MinIO pod? The web server is still up and running, but I cannot run any sync jobs until this is fixed.
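    A hedged set of standard kubectl checks that usually explains a stuck pod like this, for example a PVC left in Terminating state or the new chart waiting on storage that no longer exists; the pod name is the one quoted above and the namespace is assumed to be airbyte:
    # why is the create-bucket pod not progressing?
    kubectl -n airbyte describe pod airbyte-minio-create-bucket
    kubectl -n airbyte logs airbyte-minio-create-bucket
    # recent events, plus anything left over from the deleted statefulset / PVC / PV
    kubectl -n airbyte get events --sort-by=.lastTimestamp | tail -20
    kubectl -n airbyte get statefulset,pvc | grep -i minio
    kubectl get pv | grep -i minio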
  • o

    Owen

    05/07/2025, 3:07 PM
    Hi, the Airbyte Cloud API appears to have an issue:
    Copy code
    {
      success: false,
      error: 'HTTP request failed with status code 500: {"message":"Internal Server Error: Unable to connect to ab-redis-master.ab.svc.cluster.local/<unresolved>:6379","exceptionClassName":"io.lettuce.core.RedisConnectionException","exceptionStack":[],"rootCauseExceptionStack":[]}'
    }
    This occurs when hitting the endpoint to initiate OAuth for any source.
  • l

    Leandro Ricardo Bologna Ferreira

    05/07/2025, 6:10 PM
    Good afternoon everyone, how are you? I have the pod airbyte-abctl-workload-launcher-855fc5b59f-29mbj failing with CreateContainerConfigError, as shown in the image below from k9s. Attached is the error log.
    airbyte-abctl-airbyte-abctl-workload-launcher-855fc5b59f-29mbj-1746641226011118033.log
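    For context, CreateContainerConfigError almost always means the pod references a Secret or ConfigMap (or a key inside one) that does not exist yet. A hedged way to see which reference is failing; the namespace is an assumption based on abctl defaults, adjust to your setup:
    # the Events section at the bottom names the missing Secret/ConfigMap key
    kubectl -n airbyte-abctl describe pod airbyte-abctl-workload-launcher-855fc5b59f-29mbj
    # compare against what actually exists
    kubectl -n airbyte-abctl get secrets,configmaps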
  • b

    badri

    05/07/2025, 9:19 PM
    Team, can I override the dataplane secrets using airbyte-auth-secrets? The Airbyte bootloader is replacing the dataplane credentials. How do I get around this?
  • o

    Omar García Ortíz

    05/07/2025, 9:51 PM
    Hi, is there a way to always install the same version of Airbyte? I tried using values.yaml, but it didn't seem to work. I need to set the version for a production environment on EC2. Thanks.
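    If the install is done with Helm (as in other messages above), the chart version can be pinned explicitly so every install resolves to the same release instead of whatever is latest. A sketch, with 1.6.1 only as an example version and the repo URL being the commonly documented one:
    helm repo add airbyte https://airbytehq.github.io/helm-charts
    helm repo update
    # pin the chart version so repeated installs stay identical
    helm upgrade --install airbyte airbyte/airbyte --version 1.6.1 -n airbyte -f values.yaml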
  • m

    Michael Johnsey

    05/07/2025, 10:17 PM
    Anyone else using the HubSpot connector in Airbyte Cloud? The deploy of the new HubSpot connector this morning has caused it to fail for us on every run with an error of "ValueError: No format in ['%ms', '%ms'] matching" on a timestamp field, but I can't figure out whether it is stream-specific (and a stream we can deactivate to get it flowing again) or global to all the HubSpot streams (I wouldn't expect that). I'll put the full error message in the thread.
  • h

    Hakim Saifee

    05/08/2025, 9:32 AM
    Hi, where can I find the Airbyte release GitHub tag for 1.5.1? I need the source code for a Black Duck scan of 1.5.1.
  • k

    kanzari soumaya

    05/08/2025, 10:49 AM
    Hi, I'm trying to connect a self-managed MongoDB replica set as a source. I created two nodes with these addresses: mongodb://localhost:27080,localhost:27081/?replicaSet=rs0&readPreference=primary. I have my database imported in MongoDB Compass with collections. But when I try to add the source in Airbyte, I get this error message:
    Configuration check failed
    Timed out after 30000 ms while waiting for a server that matches ReadPreferenceServerSelector{readPreference=primary}. Client view of cluster state is {type=REPLICA_SET, servers=[{address=localhost:27081, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused}}, {address=localhost:27080, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused}}]
        at com.mongodb.client.internal.MongoDatabaseImpl.executeCommand(MongoDatabaseImpl.java:196)
        at com.mongodb.client.internal.MongoDatabaseImpl.runCommand(MongoDatabaseImpl.java:165)
        at com.mongodb.client.internal.MongoDatabaseImpl.runCommand(MongoDatabaseImpl.java:160)
        at com.mongodb.client.internal.MongoDatabaseImpl.runCommand(MongoDatabaseImpl.java:150)
        at io.airbyte.integrations.source.mongodb.MongoUtil.getAuthorizedCollections(MongoUtil.java:94)
        at io.airbyte.integrations.source.mongodb.MongoDbSource.check(MongoDbSource.java:68)
        at io.airbyte.cdk.integrations.base.IntegrationRunner.runInternal(IntegrationRunner.kt:166)
        at io.airbyte.cdk.integrations.base.IntegrationRunner.run(IntegrationRunner.kt:119)
        at io.airbyte.cdk.integrations.base.IntegrationRunner.run$default(IntegrationRunner.kt:113)
        at io.airbyte.cdk.integrations.base.IntegrationRunner.run(IntegrationRunner.kt)
        at io.airbyte.integrations.source.mongodb.MongoDbSource.main(MongoDbSource.java:53)
    2025-05-08 12:43:51 info INFO main i.a.c.i.b.IntegrationRunner(runInternal):224 Completed integration: io.airbyte.integrations.source.mongodb.MongoDbSource
    2025-05-08 12:43:51 info INFO main i.a.i.s.m.MongoDbSource(main):54 completed source: class io.airbyte.integrations.source.mongodb.MongoDbSource
    2025-05-08 12:43:51 info Checking for optional control message...
    2025-05-08 12:43:51 info Optional control message not found. Skipping...
    2025-05-08 12:43:51 info Writing output of b2e713cd-cc36-4c0a-b5bd-b47cb8a0561e_caf84f8f-3c0c-4d15-9d01-7a4c3d13b470_0_check to the doc store
    2025-05-08 12:43:51 info Marking workload b2e713cd-cc36-4c0a-b5bd-b47cb8a0561e_caf84f8f-3c0c-4d15-9d01-7a4c3d13b470_0_check as successful
    2025-05-08 12:43:51 info
    2025-05-08 12:43:51 info Deliberately exiting process with code 0.
    2025-05-08 12:43:51 info ----- END CHECK -----
    2025-05-08 12:43:51 info
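    One likely angle, offered as a guess rather than a diagnosis: Airbyte runs the connector in its own container, so localhost:27080/27081 points at the connector container itself, which would explain the "Connection refused" on both members. A hedged sketch of a connection string reachable from inside a container; the host address is a placeholder, and note that the driver also retries whatever hostnames the replica set config advertises, so the rs.conf() members must resolve from the container as well:
    # connection string using an address the container can actually reach (placeholder)
    mongodb://<host-reachable-ip-or-dns>:27080,<host-reachable-ip-or-dns>:27081/?replicaSet=rs0&readPreference=primary
    # verify which hostnames the replica set itself advertises
    mongosh --port 27080 --eval 'rs.conf().members.map(m => m.host)'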
  • a

    Abhishek Kumar

    05/08/2025, 11:02 AM
    Hi everyone, can I get some working template code for creating a Java CDK source connector (where the source fetches data from HTTP endpoints)?