# ask-community-for-troubleshooting
  • user

    10/29/2024, 9:45 AM
    #47719 Source google-analytics-data-api allow account IDs as well as property IDs New discussion created by csmithhansonwade Hi, I have something like 300 property IDs on Google Analytics 4 spread across 20 accounts. These properties are constantly being added or removed depending on our marketing cycles. Fivetran allows you to input an account ID, and then it fetches all properties in that account, which is easier for us to manage. Thanks! airbytehq/airbyte
  • Salman Jamil

    10/29/2024, 10:01 AM
    Hi Team, I have been trying to migrate from Airbyte 0.50.6 to Airbyte 1.1.0 (OSS). Any suggestions on how best to do it and what steps to follow? I have both Airbyte instances set up at the moment. I have already achieved the following steps: 1- Created sources by fetching each source definition from the API, converting it to Terraform config, and applying it. 2- Created destinations as above. 3- Created connections as above. However, I am seeing a few major differences in the actual tables: 1- Airbyte 1 is not creating _scd tables. 2- The schema of tables generated by Airbyte 1 is different from Airbyte 0.50. So how do I go about the migration? I want to incrementally load the data from the point it was last synced by the older Airbyte, but these schema differences mean I cannot run the sync on the existing synced tables. Any suggestions on this will be highly appreciated. Thanks
  • david balli

    10/29/2024, 10:46 AM
    Hi all, while importing multiple files (Excel files in my case) using wildcards, is the filename available as metadata in the data?
  • Fizza Abid

    10/29/2024, 11:05 AM
    Hello, does Airbyte do CDC for SQL Server 2014? And does it tell whether a record is an insert, update, or delete (I, U, or D)?
  • user

    10/29/2024, 11:23 AM
    #47752 Proxy variables for sidecar container New discussion created by baskakkk I have Airbyte 1.1.0 deployed in k8s with the Helm chart. When creating a new connection, a pod named "ce-appsflyer-check-..." is started, containing two containers (main, connector-sidecar). These two containers have no Internet access and cannot upload data; they need a proxy configured, but I don't know how to do this. The Airbyte web interface shows a 504 error; the logs for the "ce-appsflyer-check-..." pod are below:
    Copy code
    2024-10-29T11:08:18.914177364Z main INFO Loading mask data from '/seed/specs_secrets_mask.yaml
    2024-10-29 11:08:24 INFO i.m.c.e.DefaultEnvironment():168 - Established active environments: [k8s, cloud, worker-v2, control-plane, oss, local-secrets]
    2024-10-29 11:08:27 INFO i.a.c.ApplicationKt(main):20 - Context started
    2024-10-29 11:08:27 INFO i.a.c.ApplicationKt(main):21 - 6952094558.00 ns/exec (total: 6.95s, 1 executions)
    2024-10-29 11:08:28 INFO i.a.c.ApplicationKt(main):28 - Sidecar created
    2024-10-29 11:08:28 INFO i.a.c.ApplicationKt(main):29 - 4021543447.00 ns/exec (total: 8.04s, 2 executions)
    2024-10-29 11:08:29 INFO i.a.c.i.LineGobbler(voidCall):166 - 2024-10-29T11:08:29.538501209Z pool-5-thread-1 ERROR Recursive call to appender SecretMaskRewrite
    2024-10-29 11:08:30 INFO i.a.c.i.LineGobbler(voidCall):166 - ----- START CHECK -----
    2024-10-29 11:08:30 INFO i.a.c.i.LineGobbler(voidCall):166 -
    2024-10-29 11:08:30 WARN c.a.l.CommonsLog(warn):113 - JAXB is unavailable. Will fallback to SDK implementation which may be less performant. If you are using Java 9+, you will need to include javax.xml.bind:jaxb-api as a dependency.
    2024-10-29 11:14:59 WARN i.a.c.HeartbeatMonitor$HeartbeatTask(handleHeartbeatException):97 - Cancelling job, workload is in a terminal state
    io.airbyte.workload.api.client.generated.infrastructure.ClientException: Client error : 410 Gone {"message":"Heartbeat a workload in a terminal state"}
    	at io.airbyte.workload.api.client.generated.WorkloadApi.workloadHeartbeat(WorkloadApi.kt:437) ~[io.airbyte.airbyte-api-workload-api-1.1.0.jar:?]
    	at io.airbyte.connectorSidecar.HeartbeatMonitor$HeartbeatTask.run(HeartbeatMonitor.kt:84) ~[io.airbyte-airbyte-connector-sidecar-1.1.0.jar:?]
    	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572) ~[?:?]
    	at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:358) ~[?:?]
    	at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305) ~[?:?]
    	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
    	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
    	at java.base/java.lang.Thread.run(Thread.java:1583) [?:?]
    2024-10-29 11:14:59 WARN i.a.c.ConnectorWatcher(waitForConnectorOutput):96 - Heartbeat indicates that the workload is in a terminal state, exiting process
    2024-10-29 11:14:59 INFO i.a.c.ConnectorWatcher(exitInternalError):221 - Deliberately exiting process with code 1.
    2024-10-29 11:14:59 WARN c.v.l.l.Log4j2Appender(close):108 - Already shutting down. Cannot remove shutdown hook.
    I specify the variables HTTP_PROXY, HTTPS_PROXY, NO_PROXY, http_proxy, https_proxy, no_proxy in .Values.global.env_vars, but they are not passed to these containers, while the same variables are passed to the other components without problems. If you need additional information, I am ready to provide it. airbytehq/airbyte
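    A hedged sketch of one possible fix (not verified against this chart version): besides the plain proxy variables in global.env_vars, Airbyte documents a JOB_DEFAULT_ENV_ prefix for forwarding environment variables into the launched job/check pods, which is where the main and connector-sidecar containers run. The proxy host below is a placeholder:
    Copy code
    # values.yaml - proxy host is a placeholder; adjust to your environment
    global:
      env_vars:
        HTTP_PROXY: "http://proxy.internal:3128"
        HTTPS_PROXY: "http://proxy.internal:3128"
        NO_PROXY: "localhost,127.0.0.1,.svc,.cluster.local"
        # Assumption: JOB_DEFAULT_ENV_-prefixed variables are copied into
        # launched connector pods (main + sidecar); verify for your version.
        JOB_DEFAULT_ENV_HTTP_PROXY: "http://proxy.internal:3128"
        JOB_DEFAULT_ENV_HTTPS_PROXY: "http://proxy.internal:3128"
        JOB_DEFAULT_ENV_NO_PROXY: "localhost,127.0.0.1,.svc,.cluster.local"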
  • Leo Salayog

    10/29/2024, 1:34 PM
    Hello, has anyone here had issues with the ClickUp connection? Currently, when fetching tickets, there's a 400 error retrieving the ticket data.
  • Rytis Zolubas

    10/29/2024, 2:57 PM
    Hello, I am having problems with version 3.7.9 of the Google Ads connector. I am trying to run this custom query:
    SELECT metrics.clicks, metrics.conversions, metrics.ctr, metrics.cost_micros, campaign.id, segments.date, segments.geo_target_state, segments.geo_target_region FROM geographic_view
    and I get this error:
    TypeError: 'GAQL' object is not subscriptable
    Any thoughts or ideas? It does not work with any query. Thanks!
  • GUANGYU QU

    10/29/2024, 4:54 PM
    Hi Team, it seems that there is a bug when using a private repository. Same Snowflake image, but I get the error below during stream sync when I use the private repository (custom connector). Has anyone had a problem like this? App version 1.1.0.
    Failure in destination: You must upgrade your platform version to use this connector version. Either downgrade your connector or upgrade platform to 0.63.7
  • user

    10/29/2024, 5:48 PM
    #47946 [Connector request] Source connector for Konnektive CRM New discussion created by pranasas-karutis This is a suggestion to build a source connector for Konnektive CRM (https://crm.konnektive.com/) Checklist: • The connector doesn't exist in the Airbyte Catalog. • A typical use case would be getting orders, transactions, customers, purchases, and daily summary data from Konnektive CRM. • The service has public API docs at: https://apidocs.konnektive.com/ • The API supports pagination; incremental runs would be possible. • Icon in .svg format DESCRIPTION This is a CRM tool. Its API would be used for reporting orders data. API KEY To get an API key: https://help.konnektive.com/konnektive-crm/admin-setup/create-an-api-user airbytehq/airbyte
  • user

    10/29/2024, 6:00 PM
    #47949 [Connector request] Source connector for Aftership New discussion created by pranasas-karutis This is a suggestion to build a source connector for Aftership (https://www.aftership.com/) Checklist: The connector doesn't exist in the Airbyte Catalog. A typical use case would be getting trackings data from Aftership. The service has public API docs at: https://www.aftership.com/docs/tracking/quickstart/api-quick-start The API supports pagination; incremental runs would be possible. Icon in .svg format exists. DESCRIPTION This is one of the most popular post-purchase success tools, i.e., shipment tracking/returns. Its API would be used for reporting trackings data. API KEY To get an API key: https://www.aftership.com/docs/shipping/quickstart/authentication airbytehq/airbyte
  • Edouard G

    10/29/2024, 6:54 PM
    Hello there! I just updated Airbyte to be used with abctl and Docker on an EC2 instance. My S3 connector does not work anymore; it asks me to define an AWS_ASSUME_ROLE_EXTERNAL_ID environment variable. How can I achieve this with this specific stack? (EC2 => abctl => Docker => Kubernetes)
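    A sketch of one way this might be done (assumptions: abctl local install accepts a Helm values file via --values, and the chart forwards global.env_vars to the relevant pods; the value below is a placeholder):
    Copy code
    # values.yaml, applied with: abctl local install --values values.yaml
    global:
      env_vars:
        # Placeholder - use the external ID configured on your IAM role
        AWS_ASSUME_ROLE_EXTERNAL_ID: "your-external-id"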
  • Harel Oshri

    10/29/2024, 7:42 PM
    Hello everyone, I'm trying to set up the Iceberg destination with Polaris as the REST catalog. Has anyone managed to do that? I'm not sure how I can pass the scope and set up the destination. Thank you!
  • ryusei arai

    10/30/2024, 5:03 AM
    Hello everyone, I want to migrate Airbyte (v0.50.33) running on Docker Compose to GKE. I plan to first deploy v0.50.33 on GKE, perform the DB migration, and then upgrade to the latest version. However, I am unable to create a Helm chart for version 0.50.33.
  • Barry-Lee Lodewyks

    10/30/2024, 9:09 AM
    Hi All, I’m trying to move from Airbyte (0.40.4) to the lovely new Airbyte (1.0). Historically I was able to EXPORT my CONFIGURATION from the old version and IMPORT it into the new version, but I don’t see an option to IMPORT CONFIGURATION into Airbyte (1.0). How do I transfer all my connectors, sources, and destinations from the old to the new version without losing the state data? The minute I create the same connector in Airbyte (1.0) with the same ‘previous’ setup, it will wipe all the data in the destination table and import only what it finds in the source. Any help or assistance here would be greatly appreciated 🙏
  • ns

    10/30/2024, 9:44 AM
    Hey! I would like to migrate an Airbyte installation, version 0.40.17, from an old EKS cluster to a new one. We are using a Postgres RDS instance to keep Airbyte state. Are there any best practices here? Can we run both side by side for a while, connected to the same DB, or should we first completely drain the old cluster? I would appreciate it if anyone with experience can share any insights. Thanks!
  • Kaustav Ghosh

    10/30/2024, 10:35 AM
    Am I running the code formatter wrong?
    Copy code
    airbyte-ci format fix all
  • Juan Cernadas

    10/30/2024, 2:10 PM
    Hi, I migrated from a Docker deployment to abctl, and I don't know how to configure the env variable TEMPORAL_HISTORY_RETENTION_IN_DAYS. This is very important for me to save disk space. Thanks a lot!
  • Eyuel Muse

    10/30/2024, 3:04 PM
    Hi All, I am setting up Airbyte Cloud with Terraform and just ran into an issue when trying to set up a connection and select fields. Not sure what I am doing wrong. Has anyone run into a similar issue?
    Copy code
    resource "airbyte_connection" "zendesk" {
      data_residency                       =
      destination_id                       = 
      name                                 = "Zendesk"
      non_breaking_schema_updates_behavior = "propagate_columns"
      prefix                               = "zendesk_"
      source_id                            = 
      for_each =  { for stream in local.zendesk_streams.streams : stream.name => stream }
      configurations = {
        streams = [{
          name = each.value.name
          cursor_field = each.value.cursor_field
          selected_fields = [
            {
              field_path = each.value.selected_fields
            }
          ]
          sync_mode = each.value.sync_mode
        }]
      }
    }
    The template renders correctly, but I get a HTTP/2.0 400 Bad Request response.
  • Damien Querbes

    10/30/2024, 4:23 PM
    Hello 👋 Has anyone managed to use the Airflow <> Airbyte provider (https://airflow.apache.org/docs/apache-airflow-providers-airbyte/stable/index.html), version 4.0.0? Although my connection passed the `connection_test` in Airflow, I can’t refer to it in my DAG. I keep getting auth errors (cf. thread).
  • Carlos Bernal Carvajal

    10/30/2024, 5:28 PM
    Friendly reminder about this question 🙂
  • Carolina Buckler

    10/30/2024, 6:50 PM
    getting this error using abctl local install
    Copy code
    ERROR   Failed to install airbyte/airbyte Helm Chart
      ERROR   Unable to install Airbyte locally
      ERROR   unable to install airbyte chart: unable to install helm: failed to create resource: Ingress.networking.k8s.io "airbyte-abctl-webapp" is invalid: spec: Invalid value: []networking.IngressRule(nil): either `defaultBackend` or `rules` must be specified
  • srinivasa

    10/30/2024, 7:59 PM
    getting this error using helm install in AWS EKS ..
  • user

    10/30/2024, 10:03 PM
    Comment on #46355 Configure TEMPORAL_HISTORY_RETENTION_IN_DAYS in abctl deployment Discussion answered by cernadasjuan Solved: add this to the values.yaml file:
    Copy code
    global:
      env_vars:
        TEMPORAL_HISTORY_RETENTION_IN_DAYS: 7
    airbytehq/airbyte
  • Talha Naeem

    10/31/2024, 5:56 AM
    Hello Everyone, I am using S3 as storage for logs, and I want to configure credentials for accessing it. I have passed the parameters in the values file as below, but when the Airbyte server and worker pods are created, we get this issue:
    Copy code
    Error: couldn't find key s3-access-key-id in Secret dev/airbyte-airbyte-secrets
    In these deployments, the env vars are configured from the template as:
    Copy code
    - name: STORAGE_BUCKET_WORKLOAD_OUTPUT
              valueFrom:
                configMapKeyRef:
                  name: airbyte-airbyte-env
                  key: STORAGE_BUCKET_WORKLOAD_OUTPUT
            - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  name: airbyte-airbyte-secrets
                  key: s3-access-key-id
            - name: AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: airbyte-airbyte-secrets
                  key: s3-secret-access-key
    While in the secret, the keys and values are the following:
    Copy code
    apiVersion: v1
    kind: Secret
    metadata:
      name: airbyte-airbyte-secrets
      namespace: dev
    data:
      AWS_ACCESS_KEY_ID: <keyid>
      AWS_SECRET_ACCESS_KEY: <key-sec>
      DATABASE_PASSWORD: <db-pass>
      DATABASE_USER: <user>
    So in the deployment YAML of the Airbyte server and worker, it should be like this:
    Copy code
    - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  name: airbyte-airbyte-secrets
                  key: AWS_ACCESS_KEY_ID
            - name: AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: airbyte-airbyte-secrets
                  key: AWS_SECRET_ACCESS_KEY
    Please let me know if I am missing anything here, or is this really a bug in the Helm template that needs to be fixed? Thanks in advance! Airbyte version: v1.0.0, Helm chart version: 0.634.3
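    A hedged workaround sketch (rather than patching the chart template): since the template hard-codes the key names s3-access-key-id and s3-secret-access-key, the secret can expose the same credentials under those names. The key names come from the error and template above; the values are placeholders:
    Copy code
    apiVersion: v1
    kind: Secret
    metadata:
      name: airbyte-airbyte-secrets
      namespace: dev
    data:
      # Same base64-encoded credentials, duplicated under the key names
      # that the chart template actually references
      s3-access-key-id: <keyid>
      s3-secret-access-key: <key-sec>
      DATABASE_PASSWORD: <db-pass>
      DATABASE_USER: <user>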
  • Ben Tennyson

    10/31/2024, 6:20 AM
    Hello Everyone, currently I'm trying to establish a connection between an MSSQL server and a local Postgres server. After I clicked "Sync Now", the connection synced for around 30 minutes and then returned this error:
    Copy code
    message='io.airbyte.workers.exception.WorkloadLauncherException: io.airbyte.workload.launcher.pipeline.stages.model.StageError: io.airbyte.workers.exception.ResourceConstraintException: Unable to start the REPLICATION pod. This may be due to insufficient system resources. Please check available resources and try again.
    	at io.airbyte.workload.launcher.pipeline.stages.model.Stage.apply(Stage.kt:46)
    	at io.airbyte.workload.launcher.pipeline.stages.LaunchPodStage.apply(LaunchPodStage.kt:38)
    	at io.airbyte.workload.launcher.pipeline.stages.$LaunchPodStage$Definition$Intercepted.$$access$$apply(Unknown Source)
    	at io.airbyte.workload.launcher.pipeline.stages.$LaunchPodStage$Definition$Exec.dispatch(Unknown Source)
    	at io.micronaut.context.AbstractExecutableMethodsDefinition$DispatchedExecutableMethod.invoke(AbstractExecutableMethodsDefinition.java:456)
    	at io.micronaut.aop.chain.MethodInterceptorChain.proceed(MethodInterceptorChain.java:134)
    	at io.airbyte.metrics.interceptors.InstrumentInterceptorBase.doIntercept(InstrumentInterceptorBase.kt:61)
    	at io.airbyte.metrics.interceptors.InstrumentInterceptorBase.intercept(InstrumentInterceptorBase.kt:44)
    	at io.micronaut.aop.chain.MethodInterceptorChain.proceed(MethodInterceptorChain.java:143)
    	at io.airbyte.workload.launcher.pipeline.stages.$LaunchPodStage$Definition$Intercepted.apply(Unknown Source)
    	at io.airbyte.workload.launcher.pipeline.stages.LaunchPodStage.apply(LaunchPodStage.kt:24)
    	at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:132)
    	at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:158)
    	at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:158)
    	at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:158)
    	at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2571)
    	at reactor.core.publisher.MonoFlatMap$FlatMapMain.request(MonoFlatMap.java:194)
    	at reactor.core.publisher.MonoFlatMap$FlatMapMain.request(MonoFlatMap.java:194)
    	at reactor.core.publisher.MonoFlatMap$FlatMapMain.request(MonoFlatMap.java:194)
    	at reactor.core.publisher.MonoFlatMap$FlatMapMain.request(MonoFlatMap.java:194)
    	at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.set(Operators.java:2367)
    	at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onSubscribe(FluxOnErrorResume.java:74)
    	at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:117)
    	at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:117)
    	at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:117)
    	at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:117)
    	at reactor.core.publisher.FluxFlatMap.trySubscribeScalarMap(FluxFlatMap.java:193)
    	at reactor.core.publisher.MonoFlatMap.subscribeOrReturn(MonoFlatMap.java:53)
    	at reactor.core.publisher.Mono.subscribe(Mono.java:4560)
    	at reactor.core.publisher.MonoSubscribeOn$SubscribeOnSubscriber.run(MonoSubscribeOn.java:126)
    	at reactor.core.scheduler.ImmediateScheduler$ImmediateSchedulerWorker.schedule(ImmediateScheduler.java:84)
    	at reactor.core.publisher.MonoSubscribeOn.subscribeOrReturn(MonoSubscribeOn.java:55)
    	at reactor.core.publisher.Mono.subscribe(Mono.java:4560)
    	at reactor.core.publisher.Mono.subscribeWith(Mono.java:4642)
    	at reactor.core.publisher.Mono.subscribe(Mono.java:4403)
    	at io.airbyte.workload.launcher.pipeline.LaunchPipeline.accept(LaunchPipeline.kt:50)
    	at io.airbyte.workload.launcher.pipeline.consumer.LauncherMessageConsumer.consume(LauncherMessageConsumer.kt:28)
    	at io.airbyte.workload.launcher.pipeline.consumer.LauncherMessageConsumer.consume(LauncherMessageConsumer.kt:12)
    	at io.airbyte.commons.temporal.queue.QueueActivityImpl.consume(Internal.kt:87)
    	at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
    	at java.base/java.lang.reflect.Method.invoke(Method.java:580)
    	at io.temporal.internal.activity.RootActivityInboundCallsInterceptor$POJOActivityInboundCallsInterceptor.executeActivity(RootActivityInboundCallsInterceptor.java:64)
    	at io.temporal.internal.activity.RootActivityInboundCallsInterceptor.execute(RootActivityInboundCallsInterceptor.java:43)
    	at io.temporal.common.interceptors.ActivityInboundCallsInterceptorBase.execute(ActivityInboundCallsInterceptorBase.java:39)
    	at io.temporal.opentracing.internal.OpenTracingActivityInboundCallsInterceptor.execute(OpenTracingActivityInboundCallsInterceptor.java:78)
    	at io.temporal.internal.activity.ActivityTaskExecutors$BaseActivityTaskExecutor.execute(ActivityTaskExecutors.java:107)
    	at io.temporal.internal.activity.ActivityTaskHandlerImpl.handle(ActivityTaskHandlerImpl.java:124)
    	at io.temporal.internal.worker.ActivityWorker$TaskHandlerImpl.handleActivity(ActivityWorker.java:278)
    	at io.temporal.internal.worker.ActivityWorker$TaskHandlerImpl.handle(ActivityWorker.java:243)
    	at io.temporal.internal.worker.ActivityWorker$TaskHandlerImpl.handle(ActivityWorker.java:216)
    	at io.temporal.internal.worker.PollTaskExecutor.lambda$process$0(PollTaskExecutor.java:105)
    	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
    	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
    	at java.base/java.lang.Thread.run(Thread.java:1583)
    Caused by: io.airbyte.workers.exception.ResourceConstraintException: Unable to start the REPLICATION pod. This may be due to insufficient system resources. Please check available resources and try again.
    	at io.airbyte.workload.launcher.pods.KubePodClient.waitForPodInitComplete(KubePodClient.kt:313)
    	at io.airbyte.workload.launcher.pods.KubePodClient.launchReplication(KubePodClient.kt:105)
    	at io.airbyte.workload.launcher.pipeline.stages.LaunchPodStage.applyStage(LaunchPodStage.kt:47)
    	at io.airbyte.workload.launcher.pipeline.stages.LaunchPodStage.applyStage(LaunchPodStage.kt:24)
    	at io.airbyte.workload.launcher.pipeline.stages.model.Stage.apply(Stage.kt:42)
    	... 53 more
    Caused by: io.fabric8.kubernetes.client.KubernetesClientTimeoutException: Timed out waiting for [900000] milliseconds for [Pod] with name:[replication-job-6-attempt-4] in namespace [airbyte-abctl].
    	at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.waitUntilCondition(BaseOperation.java:944)
    	at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.waitUntilCondition(BaseOperation.java:98)
    	at io.fabric8.kubernetes.client.extension.ResourceAdapter.waitUntilCondition(ResourceAdapter.java:175)
    	at io.airbyte.workload.launcher.pods.KubePodLauncher$waitForPodInitComplete$initializedPod$1.invoke(KubePodLauncher.kt:83)
    	at io.airbyte.workload.launcher.pods.KubePodLauncher$waitForPodInitComplete$initializedPod$1.invoke(KubePodLauncher.kt:79)
    	at io.airbyte.workload.launcher.pods.KubePodLauncher.runKubeCommand$lambda$2(KubePodLauncher.kt:335)
    	at dev.failsafe.Functions.lambda$toCtxSupplier$11(Functions.java:243)
    	at dev.failsafe.Functions.lambda$get$0(Functions.java:46)
    	at dev.failsafe.internal.RetryPolicyExecutor.lambda$apply$0(RetryPolicyExecutor.java:74)
    	at dev.failsafe.SyncExecutionImpl.executeSync(SyncExecutionImpl.java:187)
    	at dev.failsafe.FailsafeExecutor.call(FailsafeExecutor.java:376)
    	at dev.failsafe.FailsafeExecutor.get(FailsafeExecutor.java:112)
    	at io.airbyte.workload.launcher.pods.KubePodLauncher.runKubeCommand(KubePodLauncher.kt:335)
    	at io.airbyte.workload.launcher.pods.KubePodLauncher.waitForPodInitComplete(KubePodLauncher.kt:79)
    	at io.airbyte.workload.launcher.pods.KubePodClient.waitForPodInitComplete(KubePodClient.kt:308)
    	... 57 more
    ', type='java.lang.RuntimeException', nonRetryable=false
  • b

    Ben Tennyson

    10/31/2024, 6:20 AM
    What's happening here and how can I fix it?
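    An annotated guess (not a confirmed diagnosis): the root cause in the trace is the KubernetesClientTimeoutException — the replication-job pod never finished initializing within the 900000 ms (15 minute) wait, which on a single-node abctl install usually means the pod could not be scheduled for lack of CPU or memory. One common mitigation is to shrink the job containers' resource requests via the documented JOB_MAIN_CONTAINER_* variables; the values below are placeholders to size against your node:
    Copy code
    # values.yaml for: abctl local install --values values.yaml
    global:
      env_vars:
        # Placeholder requests - lower them so the replication pod can be scheduled
        JOB_MAIN_CONTAINER_CPU_REQUEST: "250m"
        JOB_MAIN_CONTAINER_MEMORY_REQUEST: "1Gi"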
  • Lienge

    10/31/2024, 9:11 AM
    Hi, I am currently running Airbyte version 0.4 and I want to upgrade to 0.6. We are currently using Docker Compose; how can I move from that to abctl with the new version 0.6? Please, I do need some guidance.
  • Amjad Syed

    10/31/2024, 10:25 AM
    I have installed Airbyte in multiple environments.
  • Amjad Syed

    10/31/2024, 10:25 AM
    Is there a way I can migrate the pipelines from a lower environment to a higher one?