Abe Azam
05/05/2025, 8:34 AM
Antoine Roncoroni
05/05/2025, 9:39 AM
The sync fails with "Terminating due to java.lang.OutOfMemoryError: Java heap space" and runs endlessly in the UI in Incremental mode.
I see that the airbyte_internal table does have data, but not the final table.
I tried tuning CPU/memory limits/requests at the connection-level, which didn’t work
Any idea how I can fix this?
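For reference, connection-level limits are not the only knob: on a Kubernetes-based deployment the per-job container resources can also be raised globally through the worker environment. This is a rough sketch only, not a verified fix; JOB_MAIN_CONTAINER_* are documented Airbyte variables, but the namespace and deployment name below are assumptions for this example.
# assumed namespace and deployment name; adjust to your install
kubectl set env deployment/airbyte-worker -n airbyte \
  JOB_MAIN_CONTAINER_MEMORY_REQUEST=2Gi \
  JOB_MAIN_CONTAINER_MEMORY_LIMIT=4Gi \
  JOB_MAIN_CONTAINER_CPU_REQUEST=1 \
  JOB_MAIN_CONTAINER_CPU_LIMIT=2
# replication jobs started after the worker pods restart pick up the new limits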
Yasser Osama
05/05/2025, 10:05 AM
Failure in source: Incumbent CDC state is invalid, reason: Saved offset no longer present on the server, please reset the connection, and then increase binlog retention and/or increase sync frequency. Connector last known binlog file mysql-bin-changelog.001213 is not found in the server. Server has [mysql-bin-changelog.001406, mysql-bin-changelog.001407, mysql-bin-changelog.001408].
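The error text itself points at the fix: re-sync the connection, then keep binlogs around longer than the gap between syncs. A hedged sketch, assuming the source is AWS RDS or Aurora MySQL (the mysql-bin-changelog.* file naming suggests it); host and credentials are placeholders.
# extend binlog retention so the saved CDC offset stays available between syncs
mysql -h <rds-endpoint> -u <admin-user> -p -e "
  CALL mysql.rds_set_configuration('binlog retention hours', 72);
  CALL mysql.rds_show_configuration;
"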
Emmanuel Emussu
05/05/2025, 10:32 AM
Atharva Pandit (AP)
05/05/2025, 5:07 PM
Sree Shanthan Kuthuru
05/06/2025, 3:46 AM
Sorry, something went wrong.
Minified React error #185; visit https://reactjs.org/docs/error-decoder.html?invariant=185 for the full message or use the non-minified dev environment for full errors and additional helpful warnings.
// Error
{}
Error: Minified React error #185; visit https://reactjs.org/docs/error-decoder.html?invariant=185 for the full message or use the non-minified dev environment for full errors and additional helpful warnings.
at vs (http://10.192.20.8:8000/assets/core-lwvfr41579.js:41:33862)
at wse (http://10.192.20.8:8000/assets/core-lwvfr41579.js:39:24436)
at Object.<anonymous> (http://10.192.20.8:8000/assets/core-lwvfr41579.js:291:75880)
at Object.registerOption (http://10.192.20.8:8000/assets/core-lwvfr41579.js:284:13902)
at http://10.192.20.8:8000/assets/core-lwvfr41579.js:291:82640
at hx (http://10.192.20.8:8000/assets/core-lwvfr41579.js:41:24296)
at FD (http://10.192.20.8:8000/assets/core-lwvfr41579.js:41:31641)
at HH (http://10.192.20.8:8000/assets/core-lwvfr41579.js:41:31497)
at HH (http://10.192.20.8:8000/assets/core-lwvfr41579.js:41:31405)
at HH (http://10.192.20.8:8000/assets/core-lwvfr41579.js:41:31405)
Darko Macoritto
05/06/2025, 7:35 AM
2025-05-05 20:22:35 destination INFO pool-3-thread-1 i.a.c.i.d.a.b.BufferManager(printQueueInfo):94 [ASYNC QUEUE INFO] Global: max: 1.5 GB, allocated: 10 MB (9.99921989440918 MB), %% used: 0.006509908785422643 | Queue `profiles_buyer_properties`, num records: 0, num bytes: 0 bytes, allocated bytes: 0 bytes | State Manager memory usage: Allocated: 9 MB, Used: -818 bytes, percentage Used -7.801664520414133E-5
2025-05-05 20:22:35 destination INFO pool-6-thread-1 i.a.c.i.d.a.FlushWorkers(printWorkerInfo):129 [ASYNC WORKER INFO] Pool queue size: 0, Active threads: 0
2025-05-05 20:38:38 replication-orchestrator INFO thread status... heartbeat thread: true , replication thread: false
2025-05-05 20:38:38 replication-orchestrator INFO Do not terminate as feature flag is disable
I came back at 8 am this morning and the sync was still running. Importantly, Airbyte is self-hosted and runs on a machine with the following infrastructure: 8 GB RAM / 4 CPU. According to the official Airbyte documentation, this machine should be sufficient.
So here are my questions:
1. Do you see any solution that could make this work without upgrading my machine?
2. If 1. does not work, maybe I could sync the data using another solution (such as a Python script) and just add the new rows through Airbyte (I need Incremental | Append + Deduped). This would dramatically reduce the amount of data to deal with per sync. Has anyone tried this? Is this possible?
3. Any other ideas are welcome.
Please let me know if you want to look at the full logs.
Thanks for your help.
Azfer Pervaiz
05/06/2025, 8:38 AM
I am syncing the CurrencyType Standard/Setup object of Salesforce. The connection was built successfully, but the data is not syncing, even though it creates the airbyte_internal table and the normalised table in the destination correctly.
Has anyone experienced this issue?
Antoine Roncoroni
05/06/2025, 10:02 AM
Namratha D
05/06/2025, 10:06 AM
Alberto
05/06/2025, 11:34 AM
{
"message": "Internal Server Error: Unable to connect to ab-redis-master.ab.svc.cluster.local/<unresolved>:6379",
"exceptionClassName": "io.lettuce.core.RedisConnectionException",
"exceptionStack": [],
"rootCauseExceptionStack": []
}
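If this is a self-managed Kubernetes install rather than Airbyte Cloud, a first check is whether the Redis pod behind that hostname is actually up. A minimal diagnostic sketch; the "ab" namespace and the ab-redis-master name are taken from the hostname in the error, so adjust to your release.
kubectl get pods,svc -n ab | grep -i redis                    # is ab-redis-master running and exposed?
kubectl logs ab-redis-master-0 -n ab --tail=50                # look for crash loops or auth errors
kubectl get events -n ab --sort-by=.lastTimestamp | tail -20  # recent scheduling or volume problems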
Michael l
05/06/2025, 1:57 PM
DROP TABLE IF EXISTS `...`.`hubspot_airbyte`.`engagements_emails_ab_soft_reset`;
CREATE OR REPLACE TABLE `...`.`hubspot_airbyte`.`engagements_emails_ab_soft_reset` (
_airbyte_raw_id STRING NOT NULL,
....<snip>...
UPDATE `...`.`airbyte_internal`.`hubspot_airbyte_raw__stream_engagements_emails` SET _airbyte_loaded_at = NULL WHERE 1=1;
The table size is:
• Total physical bytes: 6.02 GB
• Active physical bytes: 6.02 GB
However, this causes a 4.6 TB analysis (bytes processed) in BigQuery.
1. After the DROP and CREATE, why does Airbyte perform this UPDATE?
2. Is there anything we can do to avoid this?
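On question 2, it may help to first confirm where the 4.6 TB comes from. A hedged sketch using the standard bq CLI (the project id is abbreviated as "..." above, so substitute the real one): dry-running the soft-reset UPDATE reports the logical, uncompressed bytes BigQuery would bill, which for raw JSON data can be far larger than the ~6 GB of compressed physical storage.
# dry-run the UPDATE Airbyte issues during the soft reset to see the bytes it would process
bq query --dry_run --use_legacy_sql=false \
  'UPDATE `...`.`airbyte_internal`.`hubspot_airbyte_raw__stream_engagements_emails`
   SET _airbyte_loaded_at = NULL WHERE 1=1'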
Théo
05/06/2025, 3:52 PM
2025-05-06 15:23:10,396 [pool-15-thread-1] ERROR i.a.w.i.LocalContainerAirbyteDestination(start$lambda$2):77 - io.airbyte.cdk.ConfigErrorException: Failed to initialize connector operation
2025-05-06 15:23:10,397 [pool-15-thread-1] ERROR i.a.w.i.LocalContainerAirbyteDestination(start$lambda$2):77 - at io.airbyte.cdk.AirbyteConnectorRunnable.run(AirbyteConnectorRunnable.kt:31)
2025-05-06 15:23:10,397 [pool-15-thread-1] ERROR i.a.w.i.LocalContainerAirbyteDestination(start$lambda$2):77 - at picocli.CommandLine.executeUserObject(CommandLine.java:2030)
2025-05-06 15:23:10,397 [pool-15-thread-1] ERROR i.a.w.i.LocalContainerAirbyteDestination(start$lambda$2):77 - at picocli.CommandLine.access$1500(CommandLine.java:148)
2025-05-06 15:23:10,397 [pool-15-thread-1] ERROR i.a.w.i.LocalContainerAirbyteDestination(start$lambda$2):77 - at picocli.CommandLine$RunLast.executeUserObjectOfLastSubcommandWithSameParent(CommandLine.java:2465)
2025-05-06 15:23:10,398 [pool-15-thread-1] ERROR i.a.w.i.LocalContainerAirbyteDestination(start$lambda$2):77 - at picocli.CommandLine$RunLast.handle(CommandLine.java:2457)
2025-05-06 15:23:10,398 [pool-15-thread-1] ERROR i.a.w.i.LocalContainerAirbyteDestination(start$lambda$2):77 - at picocli.CommandLine$RunLast.handle(CommandLine.java:2419)
2025-05-06 15:23:10,398 [pool-15-thread-1] ERROR i.a.w.i.LocalContainerAirbyteDestination(start$lambda$2):77 - at picocli.CommandLine$AbstractParseResultHandler.execute(CommandLine.java:2277)
2025-05-06 15:23:10,398 [pool-15-thread-1] ERROR i.a.w.i.LocalContainerAirbyteDestination(start$lambda$2):77 - at picocli.CommandLine$RunLast.execute(CommandLine.java:2421)
2025-05-06 15:23:10,398 [pool-15-thread-1] ERROR i.a.w.i.LocalContainerAirbyteDestination(start$lambda$2):77 - at picocli.CommandLine.execute(CommandLine.java:2174)
2025-05-06 15:23:10,398 [pool-15-thread-1] ERROR i.a.w.i.LocalContainerAirbyteDestination(start$lambda$2):77 - at io.airbyte.cdk.AirbyteDestinationRunner$Companion.run(AirbyteConnectorRunner.kt:286)
2025-05-06 15:23:10,398 [pool-15-thread-1] ERROR i.a.w.i.LocalContainerAirbyteDestination(start$lambda$2):77 - at io.airbyte.integrations.destination.s3_v2.S3V2Destination$Companion.main(S3V2Destination.kt:16)
2025-05-06 15:23:10,398 [pool-15-thread-1] ERROR i.a.w.i.LocalContainerAirbyteDestination(start$lambda$2):77 - at io.airbyte.integrations.destination.s3_v2.S3V2Destination.main(S3V2Destination.kt)
2025-05-06 15:23:10,398 [pool-15-thread-1] ERROR i.a.w.i.LocalContainerAirbyteDestination(start$lambda$2):77 - Caused by: io.micronaut.context.exceptions.BeanInstantiationException: Error instantiating bean of type [io.airbyte.cdk.load.task.DefaultDestinationTaskLauncher]
2025-05-06 15:23:10,398 [pool-15-thread-1] ERROR i.a.w.i.LocalContainerAirbyteDestination(start$lambda$2):77 -
2025-05-06 15:23:10,399 [pool-15-thread-1] ERROR i.a.w.i.LocalContainerAirbyteDestination(start$lambda$2):77 - Path Taken: new WriteOperation(TaskLauncher taskLauncher,SyncManager syncManager) --> new WriteOperation([TaskLauncher taskLauncher],SyncManager syncManager) --> new DefaultDestinationTaskLauncher(TaskScopeProvider taskScopeProvider,[DestinationCatalog catalog],DestinationConfiguration config,SyncManager syncManager,InputConsumerTaskFactory inputConsumerTaskFactory,SpillToDiskTaskFactory spillToDiskTaskFactory,FlushTickTask flushTickTask,SetupTaskFactory setupTaskFactory,OpenStreamTaskFactory openStreamTaskFactory,ProcessRecordsTaskFactory processRecordsTaskFactory,ProcessFileTaskFactory processFileTaskFactory,ProcessBatchTaskFactory processBatchTaskFactory,CloseStreamTaskFactory closeStreamTaskFactory,TeardownTaskFactory teardownTaskFactory,FlushCheckpointsTaskFactory flushCheckpointsTaskFactory,UpdateCheckpointsTask updateCheckpointsTask,FailStreamTaskFactory failStreamTaskFactory,FailSyncTaskFactory failSyncTaskFactory,boolean fileTransferEnabled,ReservingDeserializingInputFlow inputFlow,MessageQueueSupplier<Descriptor K, Reserved<DestinationStreamEvent T> T> recordQueueSupplier,QueueWriter<Reserved<CheckpointMessageWrapped T> T> checkpointQueue,MessageQueue<FileTransferQueueMessage T> fileTransferQueue,MessageQueue<DestinationStream T> openStreamQueue)
It seems that my destination connector is failing for whatever reason.
Arpit Nath
05/07/2025, 5:22 AM
We are following the documented “Use Airbyte credentials to authenticate” OAuth flow. However, across multiple source types, including Google Ads, Facebook Ads and Airtable, every call to the /v1/sources/initiateOAuth API consistently returns a 500 Internal Server Error with this message:
"message": "Internal Server Error: Unable to connect to ab-redis-master.ab.svc.cluster.local/<unresolved>:6379",
"exceptionClassName": "io.lettuce.core.RedisConnectionException"
• We’ve verified our payload, redirect URI, and network conditions.
• This seems like an internal Redis issue on Airbyte Cloud’s side, not a user configuration problem.
This has been blocking us from progressing any further with the integration. It has been several days, but we haven’t received a resolution yet.
Can someone please help investigate this ASAP or provide a workaround?
Continuation thread: https://airbytehq.slack.com/archives/C01AHCD885S/p1746262156456959
Thank you.
Pavan Kalyan Chitturi
05/07/2025, 6:41 AM
Dheeraj Soni
05/07/2025, 8:40 AM
I'm getting a 400 Bad Request error.
I've verified the base URL, URL path, and authentication credentials — all are correct. I'm able to successfully fetch the response using Postman, but it fails during testing in Airbyte.
Interestingly, with the same base URL, when I use the URL path /api/v2/incremental/tickets/cursor, it works. However, it fails with /api/v2/ticket_audits.json.
The stack trace of the error is included in the snippet below.
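Since the same request works in Postman, one way to narrow this down is to replay the failing call with curl and compare it against the request Airbyte builds (URL, query parameters, headers). A hedged sketch; the subdomain, e-mail and token are placeholders, and the email/token basic-auth scheme shown is the usual Zendesk API-token style, which may not match this setup.
# reproduce the failing endpoint outside Airbyte and print only the status code
curl -s -o /dev/null -w "%{http_code}\n" \
  -u "user@example.com/token:<api-token>" \
  "https://<subdomain>.zendesk.com/api/v2/ticket_audits.json"
# if this returns 200, diff the URL, query params and headers against what the connector sends;
# ticket_audits is cursor-paginated, so an unexpected pagination parameter can cause a 400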
Piotr Strugacz
05/07/2025, 12:19 PM
We're having an issue with the airbyte/destination-redshift 3.5.3 connector.
The sync process seems to be failing due to the heartbeat functionality timing out for the Redshift destination. Redshift is configured to run on ra3.large nodes, if that matters. I'd greatly appreciate it if anyone has an idea of how to either disable the heartbeat for this connector or change its timeout settings in a cloud environment.
The error:
INFO Failures: [ {
"failureOrigin" : "airbyte_platform",
"internalMessage" : "Workload Heartbeat Error",
"externalMessage" : "Workload Heartbeat Error",
"metadata" : {
"attemptNumber" : 0,
"jobId" : 35754981
},
"stacktrace" : "io.airbyte.workers.exception.WorkloadHeartbeatException: Workload Heartbeat Error\n\tat io.airbyte.container.orchestrator.worker.WorkloadHeartbeatSender.checkIfExpiredAndMarkSyncStateAsFailed(WorkloadHeartbeatSender.kt:117)\n\tat io.airbyte.container.orchestrator.worker.WorkloadHeartbeatSender.sendHeartbeat(WorkloadHeartbeatSender.kt:55)\n\tat io.airbyte.container.orchestrator.worker.WorkloadHeartbeatSender$sendHeartbeat$1.invokeSuspend(WorkloadHeartbeatSender.kt)\n\tat kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)\n\tat kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:100)\n\tat kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:586)\n\tat kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:829)\n\tat kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:717)\n\tat kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:704)\nCaused by: io.airbyte.container.orchestrator.worker.io.DestinationTimeoutMonitor$TimeoutException: Last action 5 minutes 35 seconds ago, exceeding the threshold of 3 minutes.\n\tat io.airbyte.container.orchestrator.worker.WorkloadHeartbeatSender.sendHeartbeat(WorkloadHeartbeatSender.kt:58)\n\t... 7 more\n",
"timestamp" : 1746612535019
}, {
"failureOrigin" : "replication",
"internalMessage" : "No exit code found.",
"externalMessage" : "Something went wrong during replication",
"metadata" : {
"attemptNumber" : 0,
"jobId" : 35754981
},
"stacktrace" : "java.lang.IllegalStateException: No exit code found.\n\tat io.airbyte.container.orchestrator.worker.io.ContainerIOHandle.getExitCode(ContainerIOHandle.kt:101)\n\tat io.airbyte.container.orchestrator.worker.io.LocalContainerAirbyteSource.getExitValue(LocalContainerAirbyteSource.kt:89)\n\tat io.airbyte.container.orchestrator.worker.DestinationWriter.run(DestinationWriter.kt:35)\n\tat io.airbyte.container.orchestrator.worker.DestinationWriter$run$1.invokeSuspend(DestinationWriter.kt)\n\tat kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)\n\tat kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:100)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)\n\tat java.base/java.lang.Thread.run(Thread.java:1583)\n",
"timestamp" : 1746612699270
} ]
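The root-cause line says the destination went more than 3 minutes without progress, so before trying to change heartbeat settings it may be worth checking whether the load statements are simply queued or slow on the ra3.large cluster while a sync is in its stalled phase. A hedged diagnostic sketch; stv_recents is a standard Redshift system view, and the connection details are placeholders.
# list currently running statements and how long they have been running
psql "host=<redshift-endpoint> port=5439 dbname=<db> user=<user>" -c "
  SELECT pid, starttime, duration, trim(user_name) AS user_name, substring(query, 1, 80) AS query
  FROM stv_recents
  WHERE status = 'Running'
  ORDER BY duration DESC;"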
Sabrina Lanzotti
05/07/2025, 12:49 PM
2025-05-07 09:43:46 info
2025-05-07 09:43:46 info ----- START CHECK -----
2025-05-07 09:43:46 info
2025-05-07 09:43:48 info Connector exited, processing output
2025-05-07 09:43:48 info Output file jobOutput.json found
2025-05-07 09:43:48 info Connector exited with exit code 0
2025-05-07 09:43:48 info Reading messages from protocol version 0.2.0
2025-05-07 09:43:48 error Check failed
2025-05-07 09:43:48 info Checking for optional control message...
2025-05-07 09:43:48 info Optional control message not found. Skipping...
2025-05-07 09:43:48 info Writing output of 69589781-7828-43c5-9f63-8925b1c1ccc2_f681199d-63e9-41c2-95d7-1bd15173b340_0_check to the doc store
2025-05-07 09:43:48 info Marking workload 69589781-7828-43c5-9f63-8925b1c1ccc2_f681199d-63e9-41c2-95d7-1bd15173b340_0_check as successful
2025-05-07 09:43:49 info
2025-05-07 09:43:49 info Deliberately exiting process with code 0.
2025-05-07 09:43:49 info ----- END CHECK -----
2025-05-07 09:43:49 info
I've already checked and the files are accessible without any credentials... is there a particular way to set this up?
Abhishek Kumar
05/07/2025, 1:33 PM
Fabian Boerner
05/07/2025, 2:13 PM
Durim Gashi
05/07/2025, 2:17 PM
Moe Hein Aung
05/07/2025, 2:44 PM
Workload failed, source: workload-monitor-heartbeat Airbyte could not track the sync progress. Sync process exited without reporting status
I did some research online; this seems to be because the storage for MinIO logs ran out of space. As suggested in a GitHub issue, I set about deleting and re-creating the MinIO pod and the PVs attached to it as a fix.
I started by running the following:
kubectl get statefulset -n airbyte
kubectl get pvc -n airbyte
kubectl get pv -n airbyte
Then deleted them all successfully:
kubectl delete statefulset airbyte-minio -n airbyte
kubectl delete pvc airbyte-minio-pv-claim-airbyte-minio-0 -n airbyte
kubectl delete pv pvc-26a02143-f688-4674-a49f-1335e8c74cca
In the process I also ran helm repo update to move from 1.5.1 to 1.6.1, then tried helm upgrade:
helm upgrade --install airbyte airbyte/airbyte --version 1.6.1 -n airbyte -f values.yaml --debug
However, it gets stuck with Pod airbyte-minio-create-bucket running and the minio pod does not get created. This is part of my helm chart:
# for v1.6.1
logs:
storage:
type: minio
minio:
enabled: true
persistence:
enabled: true
size: 20Gi
storageClass: gp2
# old for v1.5.1 commented out
# minio:
# persistence:
# enabled: true
# size: 20Gi
# storageClass: "gp2"
Can anyone please help with how I can re-create my minio pod? The web server is still up and running, but I cannot run any sync jobs until this is fixed.
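A hedged troubleshooting sketch before re-installing anything, using the names already mentioned above (adjust if your Helm release names differ): the create-bucket step usually just waits for MinIO, so the interesting part is why the minio StatefulSet or its PVC never comes back after the deletions.
kubectl get pods -n airbyte | grep -i minio                              # is a new airbyte-minio-0 pod appearing at all?
kubectl logs airbyte-minio-create-bucket -n airbyte                      # what the bucket step is waiting on (use job/<name> if it is a Job)
kubectl get statefulset -n airbyte | grep -i minio                       # did the chart re-create the statefulset?
kubectl get pvc -n airbyte                                               # a Pending claim means no PV was re-provisioned
kubectl describe pvc airbyte-minio-pv-claim-airbyte-minio-0 -n airbyte   # the events here explain why it stays Pending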
Owen
05/07/2025, 3:07 PM
{
success: false,
error: 'HTTP request failed with status code 500: {"message":"Internal Server Error: Unable to connect to ab-redis-master.ab.svc.cluster.local/<unresolved>:6379","exceptionClassName":"io.lettuce.core.RedisConnectionException","exceptionStack":[],"rootCauseExceptionStack":[]}'
}
This occurs when hitting the endpoint to initiate OAuth for any source.
Leandro Ricardo Bologna Ferreira
05/07/2025, 6:10 PM
badri
05/07/2025, 9:19 PM
How can I manage the dataplane secrets using airbyte-auth-secrets? The airbyte bootloader is replacing the dataplane credentials... how do I get around this?
Omar García Ortíz
05/07/2025, 9:51 PM
Michael Johnsey
05/07/2025, 10:17 PM
We're hitting ValueError: No format in ['%ms', '%ms'] matching on a timestamp field, but I can't figure out if it is stream-specific (a stream we could deactivate to get it flowing again) or if it's global to all the Hubspot streams (I wouldn't expect that). I'll put the full error message in the thread.
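One way to tell whether it is stream-specific is to pull the failing attempt's log and see which stream context surrounds the parse failures. A generic sketch; the log file name is a placeholder, and "Syncing stream" is the usual wording in source logs, which may differ by connector version.
grep -n "No format in" job_attempt.log                                               # every occurrence of the parse failure
grep -n -B 40 "No format in" job_attempt.log | grep -i "syncing stream" | sort -u    # which stream(s) it happens in
# matches clustered around one stream suggest it can be deactivated to unblock the rest;
# matches spread across many streams point to something global, e.g. a shared timestamp format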
Hakim Saifee
05/08/2025, 9:32 AM
kanzari soumaya
05/08/2025, 10:49 AM
Abhishek Kumar
05/08/2025, 11:02 AM