Mike Braden
11/10/2025, 6:22 PM
Simon Schmitke
11/10/2025, 6:57 PM
resources:
  limits:
    cpu: 500m
    memory: 900Mi
  requests:
    cpu: 300m
    memory: 600Mi
I'm trying to sync multiple tables with multi-billion rows. Are these resource sizes too small?
Mike Braden
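For scale reference: the block above sizes the pod, but sync jobs read their container sizes from Airbyte's job-resource variables. A hedged sketch of raising them via Helm values; the `global.env_vars` placement is an assumption about the chart version in use, and the numbers are starting points to tune, not recommendations:

```
# Assumed Helm values fragment; JOB_MAIN_CONTAINER_* are Airbyte's
# documented job-resource variables, values here are illustrative.
global:
  env_vars:
    JOB_MAIN_CONTAINER_CPU_REQUEST: "1"
    JOB_MAIN_CONTAINER_CPU_LIMIT: "2"
    JOB_MAIN_CONTAINER_MEMORY_REQUEST: "2Gi"
    JOB_MAIN_CONTAINER_MEMORY_LIMIT: "4Gi"
```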
11/10/2025, 7:18 PM
Harsh Kumar
11/10/2025, 7:49 PM
Zack Roberts
11/10/2025, 8:45 PM
Slackbot
11/10/2025, 8:45 PM
Lucas Segers
11/10/2025, 8:59 PM
io.grpc.StatusRuntimeException: RESOURCE_EXHAUSTED: namespace rate limit exceeded
    at io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:351)
    at io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:332)
    at io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:174)
    at io.temporal.api.workflowservice.v1.WorkflowServiceGrpc$WorkflowServiceBlockingStub.listClosedWorkflowExecutions(WorkflowServiceGrpc.java:5903)
    at io.airbyte.commons.temporal.WorkflowServiceStubsWrapped.blockingStubListClosedWorkflowExecutions$lambda$0(WorkflowServiceStubsWrapped.kt:38)
    at dev.failsafe.Functions.lambda$toCtxSupplier$11(Functions.java:243)
    at dev.failsafe.Functions.lambda$get$0(Functions.java:46)
    at dev.failsafe.internal.RetryPolicyExecutor.lambda$apply$0(RetryPolicyExecutor.java:74)
    at dev.failsafe.SyncExecutionImpl.executeSync(SyncExecutionImpl.java:187)
    at dev.failsafe.FailsafeExecutor.call(FailsafeExecutor.java:376)
    at dev.failsafe.FailsafeExecutor.get(FailsafeExecutor.java:112)
    at io.airbyte.commons.temporal.RetryHelper.withRetries(RetryHelper.kt:57)
    at io.airbyte.commons.temporal.WorkflowServiceStubsWrapped.withRetries(WorkflowServiceStubsWrapped.kt:63)
    at io.airbyte.commons.temporal.WorkflowServiceStubsWrapped.blockingStubListClosedWorkflowExecutions(WorkflowServiceStubsWrapped.kt:37)
    at io.airbyte.commons.temporal.TemporalClient.fetchClosedWorkflowsByStatus(TemporalClient.kt:137)
    at io.airbyte.commons.temporal.TemporalClient.restartClosedWorkflowByStatus(TemporalClient.kt:113)
    at io.airbyte.cron.jobs.SelfHealTemporalWorkflows.cleanTemporal(SelfHealTemporalWorkflows.kt:39)
    at io.airbyte.cron.jobs.$SelfHealTemporalWorkflows$Definition$Exec.dispatch(Unknown Source)
    at io.micronaut.context.AbstractExecutableMethodsDefinition$DispatchedExecutableMethod.invoke(AbstractExecutableMethodsDefinition.java:456)
    at io.micronaut.inject.DelegatingExecutableMethod.invoke(DelegatingExecutableMethod.java:86)
    at io.micronaut.context.bind.DefaultExecutableBeanContextBinder$ContextBoundExecutable.invoke(DefaultExecutableBeanContextBinder.java:152)
    at io.micronaut.scheduling.processor.ScheduledMethodProcessor.lambda$scheduleTask$2(ScheduledMethodProcessor.java:160)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572)
    at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:358)
    at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305…
Everything seems mostly fine, but is there any way to tune the Temporal parameters?
Has anyone noticed the same after upgrading to v2?
We have some frequent syncs in here (maybe 10 simultaneous max).
Fabrizio Spini
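The RESOURCE_EXHAUSTED in the trace above is Temporal's per-namespace request limiter. A minimal sketch of a dynamic-config override, assuming your deployment mounts a Temporal dynamic-config file (the key name is Temporal's; the file path and a safe value depend on your setup):

```
# Temporal dynamic config (assumed to be the file the temporal pod
# loads, e.g. dynamicconfig/development.yaml in many deployments).
# frontend.namespaceRPS caps requests per second per namespace.
frontend.namespaceRPS:
  - value: 4800   # assumption: a higher cap than default; tune to your load
    constraints: {}
```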
11/11/2025, 9:12 AM
airbyte-values.yaml)
2. `postgresql.primary.persistence.size` (direct access key used by the Bitnami PostgreSQL sub-chart)
3. `postgresql.volumeClaimTemplates[0].spec.resources.requests.storage` (direct Kubernetes template path, discovered in the live GKE YAML)
setting those keys in the following section:
resource "helm_release" "airbyte" {
  name             = "airbyte-v2"
  repository       = "https://airbytehq.github.io/charts"
  chart            = "airbyte"
  namespace        = "airbyte"
  create_namespace = true

  set {
    name  = "<<Keys reported above>>"
    value = "50Gi"
  }
}
But all failed to set 50Gi of disk space for the internal Postgres, and this will soon lead to a "no disk space left" error on Postgres.
I have tried reading the documentation, but I haven't found any indication of which key I have to use.
Has anyone had success setting the storage for the internal DB?
Victor K.
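If the chart key keeps being ignored, one possible stopgap is to grow the existing PersistentVolumeClaim directly. This assumes your StorageClass has `allowVolumeExpansion: true`, and the PVC name below is a placeholder to replace with whatever `kubectl get pvc` actually shows:

```shell
# List the claims first; the internal Postgres PVC name varies by chart.
kubectl -n airbyte get pvc

# Expand the volume in place (placeholder name; requires a StorageClass
# with allowVolumeExpansion: true).
kubectl -n airbyte patch pvc data-airbyte-db-0 --type merge \
  -p '{"spec":{"resources":{"requests":{"storage":"50Gi"}}}}'
```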
11/11/2025, 1:43 PM
Rob Kwark
11/11/2025, 4:53 PM
Matt Monahan
11/11/2025, 5:26 PM
Could not connect with provided configuration. Error: Failed to insert expected rows into check table. Actual written: 0
Our server is behind a load balancer (Traefik), so it's exposed on port 443, but I specified that to no avail 🤔
Rob Kwark
11/11/2025, 7:00 PM
2025-11-11 09:36:00 platform INFO [source] image: airbyte/source-snowflake:1.0.8 resources: ResourceRequirements(claims=[], limits={memory=2Gi, cpu=2}, requests={memory=1Gi, cpu=2}, additionalProperties={})
and basically the job fails with a BROKEN PIPE error (Java OOMKilled) because it can't load the entire table into RAM.
When I set the source connection to have higher limits, I notice that it has to pull the ENTIRE table into memory, instead of reading it in chunks.
This seems like a big issue: does this mean that the worker resources need to scale with table size?
Mike Braden
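For what it's worth, a cursor-based source should keep memory roughly constant by fetching one page at a time rather than materializing the table. The general pattern, sketched in Python; this is illustrative only, not the Snowflake connector's actual code, and `fetch_page` is a hypothetical stand-in for a server-side cursor with a fetch size:

```python
def read_in_chunks(fetch_page, chunk_size=10_000):
    """Yield rows page by page so peak memory is bounded by chunk_size.

    fetch_page(offset, limit) is a hypothetical stand-in for a JDBC-style
    cursor fetch; it returns a list of rows, empty when exhausted.
    """
    offset = 0
    while True:
        rows = fetch_page(offset, chunk_size)
        if not rows:
            return
        yield from rows
        offset += len(rows)
```

With this shape, only one chunk is resident at a time, so worker memory does not need to scale with table size.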
11/11/2025, 7:08 PM{{ stream_interval['start_time'] }} and {{ stream_interval['end_time'] }} variables are getting set correctly (seen in logs), it doesn't appear to actually be inserting these into the query parameters and every sync always pulls everything. If I have both incremental sync enabled and configured with query parameter injection AND configure additional query parameters under Request Options then it does appear to work:Pranay S
11/12/2025, 7:19 AM
<https://api.airbyte.com/v1/jobs>
Now the sync has started, but I also want some loading indicator, or at least an estimated timer, or anything of that sort to keep the UI dynamic.
Is there a way I can do that?
I'm aware of the endpoint <https://api.airbyte.com/v1/jobs/jobId>, which gives me the current status of the job, but calling it again and again until it shows completed is a bit hectic. Is there a more elegant way to deal with it?
Komal Kumari
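One pattern that keeps the repeated calls manageable is a single polling helper with capped exponential backoff instead of a tight loop. A minimal Python sketch; `fetch_status` is a hypothetical zero-argument wrapper around GET /v1/jobs/jobId, not an Airbyte SDK function, and the terminal state names are assumptions to match against the API's actual vocabulary:

```python
import time

def wait_for_job(fetch_status, poll_seconds=5.0, max_seconds=3600.0, backoff=1.5):
    """Poll fetch_status() until the job reaches a terminal state.

    fetch_status: callable returning the job's status string
    (hypothetical wrapper around GET /v1/jobs/{jobId}).
    """
    delay = poll_seconds
    deadline = time.monotonic() + max_seconds
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in ("succeeded", "failed", "cancelled"):
            return status
        time.sleep(delay)
        # Back off gradually so long-running jobs generate fewer requests.
        delay = min(delay * backoff, 60.0)
    raise TimeoutError("job did not reach a terminal state in time")
```

On the UI side the same helper can drive a spinner; if you can receive callbacks, Airbyte's webhook notifications on sync completion may avoid polling entirely.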
11/12/2025, 12:42 PM
bollo
11/12/2025, 3:36 PM
ingested_at=${YEAR}-${MONTH}-${DAY}-${HOUR}/stream=${STREAM_NAME}/client=${NAMESPACE}/ and then using ingested_at as the bookmark to process the data in our pipeline.
The problem is that Airbyte puts all the data from an ingestion in the same partition, no matter whether it takes one hour or several, so our pipeline drops data.
Is there a workaround for this? Is it a known bug?
Mahmoud Khaled
11/12/2025, 4:26 PM
aidatum
11/12/2025, 4:51 PM
Steve Ma
11/12/2025, 9:47 PM
We detected XMIN transaction wraparound in the database... Looks like it was introduced in this PR: https://github.com/airbytehq/airbyte/pull/38836/files. I understand the concern here about XMIN transaction wraparound, but could we consider raising a warning message instead of throwing an error? In my case, I am just planning to sync some regular tables, not those really large tables.
Akshata Shanbhag
11/13/2025, 7:59 AM
Santoshi Kalaskar
11/13/2025, 1:20 PM
aidatum
11/13/2025, 2:41 PM
Valeria Tapia
11/13/2025, 4:25 PM
yanndata
11/13/2025, 6:15 PM
Pranay S
11/14/2025, 6:35 AM
Alejandro De La Cruz López
11/14/2025, 9:24 AM
Alessio Darmanin
11/14/2025, 11:02 AM
I'm getting Airbyte is temporarily unavailable. Please try again. (HTTP 502) when trying to retest a previously working source. From the pods view, the control plane looks healthy; everything is shown as Running. Going through the server.log I can see the contents shown below, but SSH Tunnel Method is set to "No Tunnel" in the source definition, and the JSON also reflects this.
JSON schema validation failed.
errors: $.tunnel_method: must be the constant value 'SSH_KEY_AUTH',
required property 'tunnel_host' not found,
required property 'tunnel_port' not found,
required property 'tunnel_user' not found,
required property 'ssh_key' not found
Source JSON:
"tunnel_method": { "tunnel_method": "NO_TUNNEL" }
What could the issue be, please?
Diego Quintana
11/14/2025, 11:34 AM
source connector: ClickHouse v0.2.6
destination: Postgres v2.2.1
airbyte version: 0.50.31 (I know, I know)
The error appears after a bit of incremental syncing, and it seems to be
java.sql.SQLException: java.io.IOException: Premature EOF
A full refresh takes around 1.22h and it does not disconnect, though. I've set socket_timeout=300000 in my connection with no success.
What can it be?
Ashok Pothireddy
11/14/2025, 11:35 AM
Rafael Santos
11/14/2025, 12:34 PM