# ask-ai
  • Hari Haran R

    09/11/2025, 8:10 AM
    @kapa.ai We are using Airbyte version 0.44.5. I need to sync schemas and fields based on field selection; how do I achieve the field selection using the payload?
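    For context, field selection in that era of Airbyte is set per stream on the connection's syncCatalog. A minimal sketch of the payload, assuming the internal Config API (POST /api/v1/web_backend/connections/update); the stream and field names are placeholders:
    Copy code
    # A minimal sketch, assuming Airbyte ~0.44's internal Config API.
    # Field selection is set per stream on the connection's syncCatalog;
    # all names below are placeholders.
    import requests

    existing_stream = {}  # copy the "stream" object verbatim from web_backend/connections/get

    stream_config = {
        "syncMode": "incremental",
        "destinationSyncMode": "append",
        "selected": True,
        "fieldSelectionEnabled": True,           # turn field selection on for this stream
        "selectedFields": [                      # only these fields will be synced
            {"fieldPath": ["id"]},
            {"fieldPath": ["updated_at"]},
            {"fieldPath": ["address", "city"]},  # nested fields are expressed as a path
        ],
    }

    payload = {
        "connectionId": "<connection-uuid>",
        "syncCatalog": {"streams": [{"stream": existing_stream, "config": stream_config}]},
    }
    requests.post(
        "http://localhost:8000/api/v1/web_backend/connections/update",
        json=payload,
        auth=("airbyte", "password"),  # auth setup varies per deployment
    ).raise_for_status()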
  • DR

    09/11/2025, 9:11 AM
    @kapa.ai Airbyte Sync Failure – AppStoreConnect (Custom → Snowflake). Since yesterday, the sync has failed with:
    Some streams either received an INCOMPLETE stream status, or did not receive a stream status at all: null.subscription_event_report, null.subscription_report, null.sales_report, null.subscriber_report
  • Syed Hamza Raza Kazmi

    09/11/2025, 9:39 AM
    @kapa.ai Has the Airbyte-to-Airbyte replication issue of nulls being written during incremental loads been resolved?
  • Youssef HAMROUNI

    09/11/2025, 9:40 AM
    Plan: 3 to add, 0 to change, 0 to destroy.
    kubernetes_namespace.airbyte: Creating...
    kubernetes_namespace.airbyte: Creation complete after 0s [id=airbyte]
    helm_release.ebs_csi_driver: Creating...
    helm_release.airbyte: Creating...
    Error: could not download chart: no cached repo found. (try 'helm repo update'): open C:\Users\YOUSSE~1.HAM\AppData\Local\Temp\helm\repository\airbyte index.yaml: The system cannot find the file specified.
      with helm_release.airbyte, on airbyte.tf line 7, in resource "helm_release" "airbyte"
    Error: could not download chart: chart "aws-ebs-csi-driver" version "2.36.1" not found in https://kubernetes-sigs.github.io/aws-ebs-csi-driver repository
      with helm_release.ebs_csi_driver, on ebs-csi-helm.tf line 1, in resource "helm_release" "ebs_csi_driver"
    I am deploying Airbyte on EKS with Terraform. How do I resolve this problem? Here is the airbyte.tf file:
    resource "kubernetes_namespace" "airbyte" {
      metadata {
        name = "airbyte"
      }
    }
    resource "helm_release" "airbyte" {
      name             = "airbyte"
      namespace        = kubernetes_namespace.airbyte.metadata[0].name
      repository       = "https://airbytehq.github.io/helm-charts"
      chart            = "airbyte"
      version          = "1.7.2"
      create_namespace = true
      values           = [file("${path.module}/airbyte-values.yaml")]
    }
  • Anthony Gerke

    09/11/2025, 11:40 AM
    @kapa.ai I am trying to re-install 1.7.3 using abctl and hitting the following error.
    $ abctl local install --chart-version=1.7.3
    INFO Using Kubernetes provider: Provider: kind Kubeconfig: /home/user/.airbyte/abctl/abctl.kubeconfig Context: kind-airbyte-abctl
    SUCCESS Found Docker installation: version 26.0.2
    SUCCESS Existing cluster ‘airbyte-abctl’ found
    SUCCESS Cluster ‘airbyte-abctl’ validation complete
    WARNING Found MinIO physical volume. Consider migrating it to local storage (see project docs)
    ERROR failed to determine if any previous psql version exists: error reading pgdata version file: open /home/user/.airbyte/abctl/data/airbyte-volume-db/pgdata/PG_VERSION: permission denied
  • Branko Djukic

    09/11/2025, 11:57 AM
    @kapa.ai For automating log rotation and management for MinIO, is S3 the only option?
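    For what it's worth, MinIO itself supports S3-style bucket lifecycle rules, so log expiry does not have to go through AWS S3. A sketch with the minio Python client; the endpoint, credentials, bucket, and prefix are assumptions about the deployment:
    Copy code
    # Sketch: expire old log objects with a MinIO lifecycle rule.
    # Endpoint, credentials, bucket name, and prefix are placeholders.
    from minio import Minio
    from minio.commonconfig import ENABLED, Filter
    from minio.lifecycleconfig import Expiration, LifecycleConfig, Rule

    client = Minio(
        "airbyte-minio-svc:9000",
        access_key="minio",
        secret_key="minio123",
        secure=False,
    )
    config = LifecycleConfig([
        Rule(
            ENABLED,
            rule_filter=Filter(prefix="job-logging/"),  # hypothetical log prefix
            rule_id="expire-old-sync-logs",
            expiration=Expiration(days=30),  # drop log objects after 30 days
        ),
    ])
    client.set_bucket_lifecycle("airbyte-storage", config)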
  • Júlia Lemes

    09/11/2025, 12:38 PM
    @kapa.ai Where does Airbyte store the logs for connection syncs? Are they in the internal database, and if so, in which table?
  • Rahul

    09/11/2025, 12:41 PM
    @kapa.ai How can I add S3 as a source in Airbyte OSS?
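    Besides the UI flow (Sources → New source → S3), a source can also be created through the API. A hedged sketch; the endpoint, token, workspace ID, and the exact configuration keys depend on the instance and connector version, so treat them as assumptions:
    Copy code
    # Sketch: create an S3 source via the Airbyte API. All identifiers,
    # credentials, and configuration keys here are placeholders; check the
    # S3 connector spec for your version before relying on the exact schema.
    import requests

    payload = {
        "name": "my-s3-source",
        "workspaceId": "<workspace-uuid>",
        "configuration": {
            "sourceType": "s3",
            "bucket": "my-bucket",
            "aws_access_key_id": "<key-id>",
            "aws_secret_access_key": "<secret>",
            "streams": [
                {"name": "events", "globs": ["data/*.csv"], "format": {"filetype": "csv"}},
            ],
        },
    }
    resp = requests.post(
        "http://localhost:8000/api/public/v1/sources",
        json=payload,
        headers={"Authorization": "Bearer <token>"},
    )
    resp.raise_for_status()
    print(resp.json()["sourceId"])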
  • Júlia Lemes

    09/11/2025, 12:54 PM
    @kapa.ai I'm using Airbyte version 1.7.1 and Postgres connector version 3.7.0. All connections from other sources work correctly, but when I try to run the connection with the Postgres source, it keeps failing with an insufficient resources (CPU) error.
  • Kamal Tuteja

    09/11/2025, 1:40 PM
    @kapa.ai Internal message: Init container error encountered while processing workload for id: cf9c4355-b171-4477-8f2d-6c5cc5fc8b7e_38343bc3-96a4-401b-ba7e-55de79290e75_0_check. Encountered exception of type: class java.lang.NullPointerException. Exception message: Cannot invoke "java.util.UUID.toString()" because the return value of "io.airbyte.persistence.job.models.IntegrationLauncherConfig.getConnectionId()" is null. Failure origin: airbyte_platform. This is on a QuickBooks connection.
  • Lisha Zhang

    09/11/2025, 1:56 PM
    @kapa.ai I want to ingest Twitter Ads data. There is no existing implementation, so I am looking to build my own connector. Would it be possible for me to use the Airbyte UI Connector Builder? Will the auth method and the ads streams work out? If not, what options do I have? Can I use the low-code (YAML file only) option?
  • Slackbot

    09/11/2025, 1:57 PM
    This message was deleted.
  • Affan Zafar

    09/11/2025, 2:13 PM
    Is there any option in Airbyte OSS to pause a load if it exceeds 2 hours?
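    If your version has no built-in time limit, one workaround is an external watchdog that cancels long-running jobs through the API (cancelling rather than pausing; a cancelled attempt does not resume). A sketch, assuming the public API with a bearer token and a 2-hour threshold:
    Copy code
    # Watchdog sketch: cancel any running job older than 2 hours via the
    # Airbyte public API. URL and token are placeholders.
    from datetime import datetime, timedelta, timezone

    import requests

    API = "http://localhost:8000/api/public/v1"
    HEADERS = {"Authorization": "Bearer <token>"}

    resp = requests.get(f"{API}/jobs", params={"status": "running"}, headers=HEADERS)
    resp.raise_for_status()
    for job in resp.json().get("data", []):
        started = datetime.fromisoformat(job["startTime"].replace("Z", "+00:00"))
        if datetime.now(timezone.utc) - started > timedelta(hours=2):
            # DELETE /jobs/{jobId} cancels a running job
            requests.delete(f"{API}/jobs/{job['jobId']}", headers=HEADERS).raise_for_status()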
  • Mohamed Akram Lahdir

    09/11/2025, 2:19 PM
    @kapa.ai How do I migrate to the v2 Helm chart with abctl?
  • Andrey

    09/11/2025, 3:48 PM
    @kapa.ai I’m receiving error 500 (Get Spec job failed.) when trying to create a new connector from a Docker image in my OSS instance, deployed to a Google Cloud VM using abctl. The Docker image lives in Google Cloud Artifact Registry and I verified that the VM has access to it. Any ideas why there is an error?
  • Lisandro Maselli

    09/11/2025, 3:57 PM
    @kapa.ai do you have any information about running the Postgres connector against a PostgreSQL cluster deployed with CloudNativePG?
  • Armelle Brincourt

    09/11/2025, 4:17 PM
    @kapa.ai since upgrading the Postgres connector to 3.7.0 I have issues with CDC
  • Júlia Lemes

    09/11/2025, 4:50 PM
    @kapa.ai What is this error I'm receiving from the server when trying to run my Postgres sync?
    ERROR i.a.c.s.e.h.IdNotFoundExceptionHandler(handle):33 - Not f
    id: null
    message: Id not found: Could not find attempt stats for job_id: 4 and attempt no: 0
    exceptionClassName: io.airbyte.commons.server.errors.IdNotFoundKnownException
    exceptionStack: [io.airbyte.commons.server.errors.IdNotFoundKnownException: Id not found: Could not find attempt
    rootCauseExceptionClassName: java.lang.Class
    rootCauseExceptionStack: [io.airbyte.commons.server.errors.IdNotFoundKnownException: Could not find attempt stat
  • Júlia Lemes

    09/11/2025, 4:51 PM
    @kapa.ai What is the sync_stats table in Airbyte's database?
  • Grant Manning

    09/11/2025, 4:51 PM
    @kapa.ai On a HubSpot contacts stream sync, I get the following warnings in the log, even though these fields are disabled in the stream.
    Copy code
    2025-09-11 04:47:26 source WARN Couldn't parse date/datetime string in hs_lifecyclestage_lead_date, trying to parse timestamp... Field value: 1756997051024. Ex: Unable to parse string [1756997051024]
    2025-09-11 04:47:26 source WARN Couldn't parse date/datetime string in hs_lifecyclestage_marketingqualifiedlead_date, trying to parse timestamp... Field value: 1756876269194. Ex: Unable to parse string [1756876269194]
    2025-09-11 04:47:26 source WARN Couldn't parse date/datetime string in hs_lifecyclestage_lead_date, trying to parse timestamp... Field value: 1756997051024. Ex: Unable to parse string [1756997051024]
  • Júlia Lemes

    09/11/2025, 4:53 PM
    @kapa.ai I'm having an issue with my Postgres source after migrating the database from the Airbyte internal database to RDS. The other pipelines work just fine, and before the migration this pipeline was also working normally, but now it's throwing an insufficient resources error and doesn't start the sync.
  • Júlia Lemes

    09/11/2025, 4:56 PM
    @kapa.ai What is the airbyte-local-pv directory inside Airbyte's folder?
  • Faisal

    09/11/2025, 6:45 PM
    @kapa.ai Getting the following message when connecting to a SQL Server source: Failed to save ATRIO_ROOT due to the following error: State code: 08S01; Message: The TCP/IP connection to the host 127.0.0.1, port 45899 has failed. Error: "The driver received an unexpected pre-login response. Verify the connection properties and check that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. This driver can be used only with SQL Server 2005 or later.". ClientConnectionId:7acac017-e275-43c6-b634-007100e54025
  • Michael Gallivan

    09/11/2025, 6:48 PM
    @kapa.ai In the Jira source, is there a way to refresh only one project, or do I need to refresh the whole source?
  • Meghana

    09/12/2025, 12:55 AM
    @kapa.ai For the SharePoint connector, are we using all of the configured connections in Airbyte? The secret management cost associated with the application is approaching $1,000/month, so if there are any connections that are no longer needed or were only for testing, we should make an effort to clean those up. Why does the cost depend on the number of connections?
  • Liew Tze Hao Timothy

    09/12/2025, 2:29 AM
    @kapa.ai Hi, I am using a custom Postgres connector that pulls the source-postgres image (v1.0.42). I encountered the following error:
    Configuration check failed
    Message: HikariPool-1 - Connection is not available, request timed out after 10001ms.
    Internal message: io.airbyte.commons.exceptions.ConnectionErrorException: java.sql.SQLTransientConnectionException: HikariPool-1 - Connection is not available, request timed out after 10001ms.
    Failure origin: source
    Failure type: config_error
    How can I troubleshoot and debug this?
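    One way to narrow a HikariPool timeout down is to reproduce the same connection outside Airbyte: if it connects fine, the pool is likely starved (e.g. max_connections exhausted) or a network policy is stalling the connector pods. A diagnostic sketch with psycopg2; host and credentials are placeholders matching the source config:
    Copy code
    # Diagnostic sketch: attempt the same connection the connector makes,
    # with a 10s timeout to mirror HikariPool's. Placeholders throughout.
    import psycopg2

    conn = psycopg2.connect(
        host="my-postgres-host",  # same host/port as the source config
        port=5432,
        dbname="mydb",
        user="airbyte",
        password="<password>",
        connect_timeout=10,
    )
    with conn.cursor() as cur:
        cur.execute("SHOW max_connections;")
        print("max_connections:", cur.fetchone()[0])
        cur.execute("SELECT count(*) FROM pg_stat_activity;")
        print("connections in use:", cur.fetchone()[0])
    conn.close()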
  • Iulian

    09/12/2025, 6:58 AM
    @kapa.ai I'm using the MongoDB connector v2.0.2 to sync data from one MongoDB to another. I am doing incremental syncs on the collections from the db. The first sync synced all existing data, but afterwards no new data was synced. The syncs show 0 records loaded, with no error logs. What could be the issue?
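    A first sanity check is whether new documents are even visible past the cursor stored in the connection state. A sketch with pymongo; the URI and names are placeholders, and the _id-based cursor assumption only covers the initial-snapshot side (ongoing changes in the v2 connector come from change streams, so oplog retention and changeStream privileges are worth checking too):
    Copy code
    # Sanity-check sketch (pymongo): count documents newer than the last
    # synced _id from the connection state. URI and names are placeholders.
    from bson import ObjectId
    from pymongo import MongoClient

    client = MongoClient("mongodb://user:pass@host:27017/?authSource=admin")
    coll = client["mydb"]["mycollection"]

    # Paste the real value from the connection's state in the Airbyte UI.
    last_synced_id = ObjectId("65f1b2c3d4e5f6a7b8c9d0e1")  # placeholder

    print(coll.count_documents({"_id": {"$gt": last_synced_id}}))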
  • Branko Djukic

    09/12/2025, 7:23 AM
    @kapa.ai When trying to change the log level from info to error via Airbyte Helm chart version 1.8.1 with:
    extraEnv:
      - name: LOG_LEVEL
        value: "ERROR"
    I get:
    Copy code
    error when patching "/dev/shm/647891501": creating patch with: original: {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app.kubernetes.io/instance":"airbyte","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/name":"workload-launcher","app.kubernetes.io/version":"1.8.1","argocd.argoproj.io/instance":"airbyte-application","helm.sh/chart":"workload-launcher-1.8.1"},"name":"airbyte-workload-launcher","namespace":"airbyte"},"spec":{"replicas":2,"selector":{"matchLabels":{"app.kubernetes.io/instance":"airbyte","app.kubernetes.io/name":"workload-launcher"}},"strategy":{"type":"Recreate"},"template":{"metadata":{"labels":{"app.kubernetes.io/instance":"airbyte","app.kubernetes.io/name":"workload-launcher"}},"spec":{"affinity":{"podAntiAffinity":{"preferredDuringSchedulingIgnoredDuringExecution":[{"podAffinityTerm":{"labelSelector":{"matchExpressions":[{"key":"app.kubernetes.io/name","operator":"In","values":["airbyte-workload-launcher"]}]},"topologyKey":"kubernetes.io/hostname"},"weight":100}]}},"automountServiceAccountToken":true,"containers":[{"env":[{"name":"RUNNING_TTL_MINUTES","value":""},{"name":"SUCCEEDED_TTL_MINUTES","value":"10"},{"name":"UN
    on the airbyte workload launcher.
    If I instead use:
    log:
      level: "ERROR"
    nothing changes and the log level is not updated. Suggestions?
  • Euan Blackledge

    09/12/2025, 8:17 AM
    Hey @kapa.ai, can you take a look at these logs? The error in the front end is:
    Internal message: Unexpected error performing DISCOVER. The exit of the connector was: 0 Failure origin: source
    Copy code
    2025-09-12 08:11:43,050 [default-7]	ERROR	i.a.w.l.p.h.FailureHandler(handleStageError):66 - Stage Pipeline Exception: io.airbyte.workload.launcher.pipeline.stages.model.StageError: io.airbyte.workers.exception.KubeClientException: Failed to delete pods for mutex key: cd2ac046-196d-432e-8ff1-6e423ee9f406.
    message: io.airbyte.workers.exception.KubeClientException: Failed to delete pods for mutex key: cd2ac046-196d-432e-8ff1-6e423ee9f406.
    stackTrace: [Ljava.lang.StackTraceElement;@613de9ad
    io.airbyte.workload.launcher.pipeline.stages.model.StageError: io.airbyte.workers.exception.KubeClientException: Failed to delete pods for mutex key: cd2ac046-196d-432e-8ff1-6e423ee9f406.
    	at io.airbyte.workload.launcher.pipeline.stages.model.Stage.apply(Stage.kt:49)
    	at io.airbyte.workload.launcher.pipeline.stages.EnforceMutexStage.apply(EnforceMutexStage.kt:42)
    	at io.airbyte.workload.launcher.pipeline.stages.$EnforceMutexStage$Definition$Intercepted.$$access$$apply(Unknown Source)
    	at io.airbyte.workload.launcher.pipeline.stages.$EnforceMutexStage$Definition$Exec.dispatch(Unknown Source)
    	at io.micronaut.context.AbstractExecutableMethodsDefinition$DispatchedExecutableMethod.invoke(AbstractExecutableMethodsDefinition.java:456)
    	at io.micronaut.aop.chain.MethodInterceptorChain.proceed(MethodInterceptorChain.java:134)
    	at io.airbyte.metrics.interceptors.InstrumentInterceptorBase.doIntercept(InstrumentInterceptorBase.kt:65)
    	at io.airbyte.metrics.interceptors.InstrumentInterceptorBase.intercept(InstrumentInterceptorBase.kt:48)
    	at io.micronaut.aop.chain.MethodInterceptorChain.proceed(MethodInterceptorChain.java:143)
    	at io.airbyte.workload.launcher.pipeline.stages.$EnforceMutexStage$Definition$Intercepted.apply(Unknown Source)
    	at io.airbyte.workload.launcher.pipeline.stages.EnforceMutexStage.apply(EnforceMutexStage.kt:30)
    	at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:132)
    	at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:158)
    	at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:158)
    	at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:158)
    	at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2571)
    	at reactor.core.publisher.MonoFlatMap$FlatMapMain.request(MonoFlatMap.java:194)
    	at reactor.core.publisher.MonoFlatMap$FlatMapMain.request(MonoFlatMap.java:194)
    	at reactor.core.publisher.MonoFlatMap$FlatMapMain.request(MonoFlatMap.java:194)
    	at reactor.core.publisher.MonoFlatMap$FlatMapMain.request(MonoFlatMap.java:194)
    	at reactor.core.publisher.MonoFlatMap$FlatMapMain.request(MonoFlatMap.java:194)
    	at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.set(Operators.java:2367)
    	at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onSubscribe(FluxOnErrorResume.java:74)
    	at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:117)
    	at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:117)
    	at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:117)
    	at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:117)
    	at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:117)
    	at reactor.core.publisher.FluxFlatMap.trySubscribeScalarMap(FluxFlatMap.java:193)
    	at reactor.core.publisher.MonoFlatMap.subscribeOrReturn(MonoFlatMap.java:53)
    	at reactor.core.publisher.Mono.subscribe(Mono.java:4560)
    	at reactor.core.publisher.FluxFlatMap$FlatMapMain.onNext(FluxFlatMap.java:430)
    	at reactor.core.publisher.FluxPublishOn$PublishOnSubscriber.runAsync(FluxPublishOn.java:446)
    	at reactor.core.publisher.FluxPublishOn$PublishOnSubscriber.run(FluxPublishOn.java:533)
    	at reactor.core.scheduler.WorkerTask.call(WorkerTask.java:84)
    	at reactor.core.scheduler.WorkerTask.call(WorkerTask.java:37)
    	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
    	at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
    	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
    	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
    	at java.base/java.lang.Thread.run(Thread.java:1583)
    Caused by: io.airbyte.workers.exception.KubeClientException: Failed to delete pods for mutex key: cd2ac046-196d-432e-8ff1-6e423ee9f406.
    	at io.airbyte.workload.launcher.pods.KubePodClient.deleteMutexPods(KubePodClient.kt:301)
    	at io.airbyte.workload.launcher.pipeline.stages.EnforceMutexStage.applyStage(EnforceMutexStage.kt:55)
    	at io.airbyte.workload.launcher.pipeline.stages.EnforceMutexStage.applyStage(EnforceMutexStage.kt:30)
    	at io.airbyte.workload.launcher.pipeline.stages.model.Stage.apply(Stage.kt:45)
    	... 40 common frames omitted
    Caused by: io.fabric8.kubernetes.client.KubernetesClientTimeoutException: Timed out waiting for [45000] milliseconds for [Pod] with name:[null] in namespace [airbyte].
    	at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.waitUntilCondition(BaseOperation.java:946)
    	at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.waitUntilCondition(BaseOperation.java:98)
    	at io.airbyte.workload.launcher.pods.KubePodLauncher.deleteActivePods$lambda$15(KubePodLauncher.kt:245)
    	at io.airbyte.workload.launcher.pods.KubePodLauncher$runKubeCommand$1.get(KubePodLauncher.kt:296)
    	at dev.failsafe.Functions.lambda$toCtxSupplier$11(Functions.java:243)
    	at dev.failsafe.Functions.lambda$get$0(Functions.java:46)
    	at dev.failsafe.internal.RetryPolicyExecutor.lambda$apply$0(RetryPolicyExecutor.java:74)
    	at dev.failsafe.SyncExecutionImpl.executeSync(SyncExecutionImpl.java:187)
    	at dev.failsafe.FailsafeExecutor.call(FailsafeExecutor.java:376)
    	at dev.failsafe.FailsafeExecutor.get(FailsafeExecutor.java:112)
    	at io.airbyte.workload.launcher.pods.KubePodLauncher.runKubeCommand(KubePodLauncher.kt:294)
    	at io.airbyte.workload.launcher.pods.KubePodLauncher.deleteActivePods(KubePodLauncher.kt:225)
    	at io.airbyte.workload.launcher.pods.KubePodClient.deleteMutexPods(KubePodClient.kt:297)
    	... 43 common frames omitted
  • DR

    09/12/2025, 8:21 AM
    @kapa.ai I’m building a declarative connector in Builder for the App Store Connect Sales Reports API. The request works and I get a response, but instead of JSON it comes back as a gzip file with headers like:
    Copy code
    "headers": {
      "Content-Type": "application/a-gzip",
      "Content-Length": "59717",
      "Server": "daiquiri/5",
      "X-Rate-Limit": "user-hour-lim:3600;user-hour-rem:3598;"
    }
    The payload is a gzip-compressed TSV (tab-delimited) file that contains the report rows. ❓ How can I configure Builder to handle this type of response?
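    Recent Builder/CDK versions expose decoder options (gzip, CSV) on the stream; if yours predates them, this is the decode step a custom component has to perform. A sketch; the endpoint, auth, and query params are placeholders rather than the exact API shape:
    Copy code
    # Sketch of the decode step: gunzip the body, then parse tab-delimited
    # rows. Endpoint, auth, and query params are placeholders.
    import csv
    import gzip
    import io

    import requests

    resp = requests.get(
        "https://api.appstoreconnect.apple.com/v1/salesReports",
        headers={"Authorization": "Bearer <jwt>"},
        params={"filter[frequency]": "DAILY", "filter[reportType]": "SALES"},
    )
    resp.raise_for_status()

    # Content-Type is application/a-gzip, so requests does not auto-decompress.
    text = gzip.decompress(resp.content).decode("utf-8")
    rows = list(csv.DictReader(io.StringIO(text), delimiter="\t"))
    print(rows[:2])  # each row is one report record keyed by the TSV header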