# ask-ai

    ErikV

    09/06/2024, 12:26 PM
@kapa.ai My mysql connector is broken because of this error: 2024-09-06 09:02:02 platform > Checking if airbyte/source-mysql:3.7.1 exists... 2024-09-06 09:02:03 platform > airbyte/source-mysql:3.7.1 not found locally. Attempting to pull the image... 2024-09-06 09:02:03 platform > Image does not exist. I am on Airbyte version 0.63.3 and the mysql source connector is 3.7.1. What are possible solutions?
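For this class of "image does not exist" error, a first check is whether the tag is actually published and can be pulled on the host where Airbyte runs. A minimal sketch (the image and tag are taken from the question; the `docker pull` line is left commented since it needs Docker and network access):

```shell
# The platform could not find the connector image locally or on the registry.
# Confirm the exact image:tag from the log, then pre-pull it on the host.
IMAGE="airbyte/source-mysql"
TAG="3.7.1"
echo "Checking ${IMAGE}:${TAG}"
# On a machine with Docker and registry access:
# docker pull "${IMAGE}:${TAG}"
# If the pull fails, the tag may not be published; pin the connector to a
# tag that is listed on Docker Hub instead.
```

If the pull succeeds manually but the platform still fails, the problem is more likely network/proxy configuration on the node than the tag itself.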

    tobias

    09/06/2024, 12:28 PM
Is there a default workspace already created when installing the Helm chart?

    Matheus Dantas

    09/06/2024, 12:32 PM
Is there any plan to provide an Airbyte Helm chart compatible with Azure?

    Jonathan Golden

    09/06/2024, 1:58 PM
I'm getting a `doesn't match $setElementOrder list:` error for the worker in my Helm deployment. I'm overriding some values and placing some in the extraEnv option for the worker. Do I need all of the values here:
    [map[name:AIRBYTE_VERSION] map[name:CONFIG_ROOT] map[name:CONTAINER_ORCHESTRATOR_SECRET_MOUNT_PATH] map[name:CONTAINER_ORCHESTRATOR_SECRET_NAME] map[name:LOG4J_CONFIGURATION_FILE] map[name:MICROMETER_METRICS_ENABLED] map[name:MICROMETER_METRICS_STATSD_FLAVOR] map[name:SEGMENT_WRITE_KEY] map[name:STATSD_HOST] map[name:STATSD_PORT] map[name:TRACKING_STRATEGY] map[name:WORKSPACE_DOCKER_MOUNT] map[name:WORKSPACE_ROOT] map[name:LOCAL_ROOT] map[name:WEBAPP_URL] map[name:TEMPORAL_HOST] map[name:TEMPORAL_WORKER_PORTS] map[name:LOG_LEVEL] map[name:JOB_KUBE_NAMESPACE] map[name:JOB_KUBE_SERVICEACCOUNT] map[name:JOB_MAIN_CONTAINER_CPU_REQUEST] map[name:JOB_MAIN_CONTAINER_CPU_LIMIT] map[name:JOB_MAIN_CONTAINER_MEMORY_REQUEST] map[name:JOB_MAIN_CONTAINER_MEMORY_LIMIT] map[name:INTERNAL_API_HOST] map[name:WORKLOAD_API_HOST] map[name:WORKLOAD_API_BEARER_TOKEN] map[name:CONFIGS_DATABASE_MINIMUM_FLYWAY_MIGRATION_VERSION] map[name:JOBS_DATABASE_MINIMUM_FLYWAY_MIGRATION_VERSION] map[name:METRIC_CLIENT] map[name:OTEL_COLLECTOR_ENDPOINT] map[name:ACTIVITY_MAX_ATTEMPT] map[name:ACTIVITY_INITIAL_DELAY_BETWEEN_ATTEMPTS_SECONDS] map[name:ACTIVITY_MAX_DELAY_BETWEEN_ATTEMPTS_SECONDS] map[name:WORKFLOW_FAILURE_RESTART_DELAY_SECONDS] map[name:SHOULD_RUN_NOTIFY_WORKFLOWS] map[name:MICRONAUT_ENVIRONMENTS] map[name:WORKLOAD_LAUNCHER_ENABLED] map[name:WORKLOAD_API_SERVER_ENABLED] map[name:SECRET_PERSISTENCE] map[name:STORAGE_TYPE] map[name:STORAGE_BUCKET_ACTIVITY_PAYLOAD] map[name:STORAGE_BUCKET_LOG] map[name:STORAGE_BUCKET_STATE] map[name:STORAGE_BUCKET_WORKLOAD_OUTPUT] map[name:S3_PATH_STYLE_ACCESS] map[name:AWS_ACCESS_KEY_ID] map[name:AWS_SECRET_ACCESS_KEY] map[name:MINIO_ENDPOINT] map[name:GOOGLE_APPLICATION_CREDENTIALS] map[name:DATABASE_HOST] map[name:DATABASE_PORT] map[name:DATABASE_DB] map[name:DATABASE_USER] map[name:DATABASE_PASSWORD] map[name:DATABASE_URL] map[name:CONTAINER_ORCHESTRATOR_ENABLED] map[name:STATE_STORAGE_GCS_BUCKET_NAME] 
map[name:STATE_STORAGE_GCS_APPLICATION_CREDENTIALS] map[name:CONTAINER_ORCHESTRATOR_SECRET_NAME] map[name:CONTAINER_ORCHESTRATOR_SECRET_MOUNT_PATH]]
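A common trigger for the `$setElementOrder` error is listing the same env-var name twice in the merged output (note that `CONTAINER_ORCHESTRATOR_SECRET_NAME` and `CONTAINER_ORCHESTRATOR_SECRET_MOUNT_PATH` each appear twice in the list above). A minimal sketch, under the assumption that the chart merges `worker.extraEnv` on top of its own defaults, is to list only the variables you actually override rather than the whole set:

```yaml
worker:
  extraEnv:
    # List only what you change; repeating a name the chart already renders
    # is what typically produces the $setElementOrder mismatch.
    - name: LOG_LEVEL
      value: DEBUG
    - name: JOB_MAIN_CONTAINER_CPU_REQUEST
      value: "500m"
```

The variable names above are drawn from the pasted list; whether the chart deduplicates or rejects repeats can vary by chart version.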

    Rahul Kumar

    09/06/2024, 2:03 PM
@kapa.ai I am getting "Destination process is still alive, cannot retrieve exit value"

    Kaan Erdoğan

    09/06/2024, 2:18 PM
When creating an Airbyte connection from scratch with the same configuration as another incremental connection, I want the newly created incremental connection to continue where the old one left off. How can I ensure that the cursor continues where it left off?

    Jon Seymour

    09/06/2024, 2:24 PM
I’ve set up a new abctl instance on a t3.xlarge instance, which has 4 vCPUs and 16 GB of RAM, but when I try to sync some tables, the read and write jobs fail to start because of insufficient CPU:
    Events:
      Type     Reason            Age   From               Message
      ----     ------            ----  ----               -------
      Warning  FailedScheduling  26s   default-scheduler  0/1 nodes are available: 1 Insufficient cpu. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
    The node reports this:
    Allocated resources:
      (Total limits may be over 100 percent, i.e., overcommitted.)
      Resource           Requests      Limits
      --------           --------      ------
      cpu                3250m (81%)   4300m (107%)
      memory             3452Mi (21%)  13702Mi (86%)
      ephemeral-storage  0 (0%)        0 (0%)
      hugepages-1Gi      0 (0%)        0 (0%)
      hugepages-2Mi      0 (0%)        0 (0%)
    I tried to overcommit the CPU by adjusting the resources like this:
jobs:
  resources:
    limits:
      cpu: "4"
      memory: 12Gi
    Nothing else is running on the instance. The actual CPU in use is low:
    top - 14:23:49 up 1:30, 1 user, load average: 0.41, 0.59, 0.61
    So, what do I need to do to encourage k8s to actually dispatch these tasks?
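One detail worth noting for the situation above: the scheduler admits pods based on CPU *requests*, not limits, and the node already shows 3250m (81%) requested. Raising limits therefore does not help; lowering the job containers' requests does. A hedged sketch, assuming the chart honors a `global.jobs.resources` block (key paths may differ by chart version):

```yaml
global:
  jobs:
    resources:
      # Requests are what FailedScheduling checks against; keep them small
      # so read/write/orchestrator pods fit alongside the platform pods.
      requests:
        cpu: "250m"
        memory: 1Gi
      limits:
        cpu: "2"
        memory: 4Gi
```

The specific values here are illustrative, not recommendations; the point is that requests, not limits, must fit within the node's remaining allocatable CPU.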

    Daniel Holleran

    09/06/2024, 2:49 PM
What could be the cause of the following error:
    ERROR i.a.w.l.p.h.FailureHandler(apply):39 - Pipeline Error
    io.airbyte.workload.launcher.pipeline.stages.model.StageError: io.airbyte.workload.launcher.pods.KubeClientException: Destination pod failed to start within allotted timeout of 1145 seconds. (Timed out waiting for [1140000] milliseconds for [Pod] with name:[null] in namespace [airbyte].)
            at io.airbyte.workload.launcher.pipeline.stages.model.Stage.apply(Stage.kt:46) ~[io.airbyte-airbyte-workload-launcher-0.64.1.jar:?]
            at io.airbyte.workload.launcher.pipeline.stages.LaunchPodStage.apply(LaunchPodStage.kt:38) ~[io.airbyte-airbyte-workload-launcher-0.64.1.jar:?]
            at io.airbyte.workload.launcher.pipeline.stages.$LaunchPodStage$Definition$Intercepted.$$access$$apply(Unknown Source) ~[io.airbyte-airbyte-workload-launcher-0.64.1.jar:?]
            at io.airbyte.workload.launcher.pipeline.stages.$LaunchPodStage$Definition$Exec.dispatch(Unknown Source) ~[io.airbyte-airbyte-workload-launcher-0.64.1.jar:?]
            at io.micronaut.context.AbstractExecutableMethodsDefinition$DispatchedExecutableMethod.invoke(AbstractExecutableMethodsDefinition.java:456) ~[micronaut-inject-4.5.4.jar:4.5.4]
            at io.micronaut.aop.chain.MethodInterceptorChain.proceed(MethodInterceptorChain.java:129) ~[micronaut-aop-4.5.4.jar:4.5.4]
            at io.airbyte.metrics.interceptors.InstrumentInterceptorBase.doIntercept(InstrumentInterceptorBase.kt:61) ~[io.airbyte.airbyte-metrics-metrics-lib-0.64.1.jar:?]
            at io.airbyte.metrics.interceptors.InstrumentInterceptorBase.intercept(InstrumentInterceptorBase.kt:44) ~[io.airbyte.airbyte-metrics-metrics-lib-0.64.1.jar:?]
            at io.micronaut.aop.chain.MethodInterceptorChain.proceed(MethodInterceptorChain.java:138) ~[micronaut-aop-4.5.4.jar:4.5.4]
            at io.airbyte.workload.launcher.pipeline.stages.$LaunchPodStage$Definition$Intercepted.apply(Unknown Source) ~[io.airbyte-airbyte-workload-launcher-0.64.1.jar:?]
            at io.airbyte.workload.launcher.pipeline.stages.LaunchPodStage.apply(LaunchPodStage.kt:24) ~[io.airbyte-airbyte-workload-launcher-0.64.1.jar:?]
            at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:132) ~[reactor-core-3.6.8.jar:3.6.8]
            at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:158) ~[reactor-core-3.6.8.jar:3.6.8]
            at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:158) ~[reactor-core-3.6.8.jar:3.6.8]
            at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:158) ~[reactor-core-3.6.8.jar:3.6.8]
            at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2571) ~[reactor-core-3.6.8.jar:3.6.8]
            at reactor.core.publisher.MonoFlatMap$FlatMapMain.request(MonoFlatMap.java:194) ~[reactor-core-3.6.8.jar:3.6.8]
            at reactor.core.publisher.MonoFlatMap$FlatMapMain.request(MonoFlatMap.java:194) ~[reactor-core-3.6.8.jar:3.6.8]
            at reactor.core.publisher.MonoFlatMap$FlatMapMain.request(MonoFlatMap.java:194) ~[reactor-core-3.6.8.jar:3.6.8]
            at reactor.core.publisher.MonoFlatMap$FlatMapMain.request(MonoFlatMap.java:194) ~[reactor-core-3.6.8.jar:3.6.8]
            at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.set(Operators.java:2367) ~[reactor-core-3.6.8.jar:3.6.8]
            at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onSubscribe(FluxOnErrorResume.java:74) ~[reactor-core-3.6.8.jar:3.6.8]
            at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:117) ~[reactor-core-3.6.8.jar:3.6.8]
            at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:117) ~[reactor-core-3.6.8.jar:3.6.8]
            at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:117) ~[reactor-core-3.6.8.jar:3.6.8]
            at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:117) ~[reactor-core-3.6.8.jar:3.6.8]
            at reactor.core.publisher.FluxFlatMap.trySubscribeScalarMap(FluxFlatMap.java:193) ~[reactor-core-3.6.8.jar:3.6.8]
            at reactor.core.publisher.MonoFlatMap.subscribeOrReturn(MonoFlatMap.java:53) ~[reactor-core-3.6.8.jar:3.6.8]
            at reactor.core.publisher.Mono.subscribe(Mono.java:4552) ~[reactor-core-3.6.8.jar:3.6.8]
            at reactor.core.publisher.MonoSubscribeOn$SubscribeOnSubscriber.run(MonoSubscribeOn.java:126) ~[reactor-core-3.6.8.jar:3.6.8]
            at reactor.core.scheduler.ImmediateScheduler$ImmediateSchedulerWorker.schedule(ImmediateScheduler.java:84) ~[reactor-core-3.6.8.jar:3.6.8]
            at reactor.core.publisher.MonoSubscribeOn.subscribeOrReturn(MonoSubscribeOn.java:55) ~[reactor-core-3.6.8.jar:3.6.8]
            at reactor.core.publisher.Mono.subscribe(Mono.java:4552) ~[reactor-core-3.6.8.jar:3.6.8]
            at reactor.core.publisher.Mono.subscribeWith(Mono.java:4634) ~[reactor-core-3.6.8.jar:3.6.8]
            at reactor.core.publisher.Mono.subscribe(Mono.java:4395) ~[reactor-core-3.6.8.jar:3.6.8]
            at io.airbyte.workload.launcher.pipeline.LaunchPipeline.accept(LaunchPipeline.kt:50) ~[io.airbyte-airbyte-workload-launcher-0.64.1.jar:?]
            at io.airbyte.workload.launcher.pipeline.consumer.LauncherMessageConsumer.consume(LauncherMessageConsumer.kt:28) ~[io.airbyte-airbyte-workload-launcher-0.64.1.jar:?]
            at io.airbyte.workload.launcher.pipeline.consumer.LauncherMessageConsumer.consume(LauncherMessageConsumer.kt:12) ~[io.airbyte-airbyte-workload-launcher-0.64.1.jar:?]
            at io.airbyte.commons.temporal.queue.QueueActivityImpl.consume(Internal.kt:87) ~[io.airbyte-airbyte-commons-temporal-core-0.64.1.jar:?]
            at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103) ~[?:?]
            at java.base/java.lang.reflect.Method.invoke(Method.java:580) ~[?:?]
            at io.temporal.internal.activity.RootActivityInboundCallsInterceptor$POJOActivityInboundCallsInterceptor.executeActivity(RootActivityInboundCallsInterceptor.java:64) ~[temporal-sdk-1.22.3.jar:?]
            at io.temporal.internal.activity.RootActivityInboundCallsInterceptor.execute(RootActivityInboundCallsInterceptor.java:43) ~[temporal-sdk-1.22.3.jar:?]
            at io.temporal.common.interceptors.ActivityInboundCallsInterceptorBase.execute(ActivityInboundCallsInterceptorBase.java:39) ~[temporal-sdk-1.22.3.jar:?]
            at io.temporal.opentracing.internal.OpenTracingActivityInboundCallsInterceptor.execute(OpenTracingActivityInboundCallsInterceptor.java:78) ~[temporal-opentracing-1.22.3.jar:?]
            at io.temporal.internal.activity.ActivityTaskExecutors$BaseActivityTaskExecutor.execute(ActivityTaskExecutors.java:107) ~[temporal-sdk-1.22.3.jar:?]
            at io.temporal.internal.activity.ActivityTaskHandlerImpl.handle(ActivityTaskHandlerImpl.java:124) ~[temporal-sdk-1.22.3.jar:?]
            at io.temporal.internal.worker.ActivityWorker$TaskHandlerImpl.handleActivity(ActivityWorker.java:278) ~[temporal-sdk-1.22.3.jar:?]
            at io.temporal.internal.worker.ActivityWorker$TaskHandlerImpl.handle(ActivityWorker.java:243) ~[temporal-sdk-1.22.3.jar:?]
            at io.temporal.internal.worker.ActivityWorker$TaskHandlerImpl.handle(ActivityWorker.java:216) ~[temporal-sdk-1.22.3.jar:?]
            at io.temporal.internal.worker.PollTaskExecutor.lambda$process$0(PollTaskExecutor.java:105) ~[temporal-sdk-1.22.3.jar:?]
            at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
            at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
            at java.base/java.lang.Thread.run(Thread.java:1583) [?:?]
    Caused by: io.airbyte.workload.launcher.pods.KubeClientException: Destination pod failed to start within allotted timeout of 1145 seconds. (Timed out waiting for [1140000] milliseconds for [Pod] with name:[null] in namespace [airbyte].)
            at io.airbyte.workload.launcher.pods.KubePodClient.waitDestinationReadyOrTerminalInit(KubePodClient.kt:255) ~[io.airbyte-airbyte-workload-launcher-0.64.1.jar:?]
            at io.airbyte.workload.launcher.pods.KubePodClient.launchReplicationPodTriplet(KubePodClient.kt:132) ~[io.airbyte-airbyte-workload-launcher-0.64.1.jar:?]
            at io.airbyte.workload.launcher.pods.KubePodClient.launchReplication(KubePodClient.kt:73) ~[io.airbyte-airbyte-workload-launcher-0.64.1.jar:?]
            at io.airbyte.workload.launcher.pipeline.stages.LaunchPodStage.applyStage(LaunchPodStage.kt:43) ~[io.airbyte-airbyte-workload-launcher-0.64.1.jar:?]
            at io.airbyte.workload.launcher.pipeline.stages.LaunchPodStage.applyStage(LaunchPodStage.kt:24) ~[io.airbyte-airbyte-workload-launcher-0.64.1.jar:?]
            at io.airbyte.workload.launcher.pipeline.stages.model.Stage.apply(Stage.kt:42) ~[io.airbyte-airbyte-workload-launcher-0.64.1.jar:?]
            ... 53 more
    Caused by: io.fabric8.kubernetes.client.KubernetesClientTimeoutException: Timed out waiting for [1140000] milliseconds for [Pod] with name:[null] in namespace [airbyte].
            at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.waitUntilCondition(BaseOperation.java:946) ~[kubernetes-client-6.12.1.jar:?]
            at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.waitUntilCondition(BaseOperation.java:98) ~[kubernetes-client-6.12.1.jar:?]
            at io.airbyte.workload.launcher.pods.KubePodLauncher$waitForPodReadyOrTerminal$1.invoke(KubePodLauncher.kt:199) ~[io.airbyte-airbyte-workload-launcher-0.64.1.jar:?]
            at io.airbyte.workload.launcher.pods.KubePodLauncher$waitForPodReadyOrTerminal$1.invoke(KubePodLauncher.kt:194) ~[io.airbyte-airbyte-workload-launcher-0.64.1.jar:?]
            at io.airbyte.workload.launcher.pods.KubePodLauncher.runKubeCommand$lambda$0(KubePodLauncher.kt:307) ~[io.airbyte-airbyte-workload-launcher-0.64.1.jar:?]
            at dev.failsafe.Functions.lambda$toCtxSupplier$11(Functions.java:243) ~[failsafe-3.3.2.jar:3.3.2]
            at dev.failsafe.Functions.lambda$get$0(Functions.java:46) ~[failsafe-3.3.2.jar:3.3.2]
            at dev.failsafe.internal.RetryPolicyExecutor.lambda$apply$0(RetryPolicyExecutor.java:74) ~[failsafe-3.3.2.jar:3.3.2]
            at dev.failsafe.SyncExecutionImpl.executeSync(SyncExecutionImpl.java:187) ~[failsafe-3.3.2.jar:3.3.2]
            at dev.failsafe.FailsafeExecutor.call(FailsafeExecutor.java:376) ~[failsafe-3.3.2.jar:3.3.2]
            at dev.failsafe.FailsafeExecutor.get(FailsafeExecutor.java:112) ~[failsafe-3.3.2.jar:3.3.2]
            at io.airbyte.workload.launcher.pods.KubePodLauncher.runKubeCommand(KubePodLauncher.kt:307) ~[io.airbyte-airbyte-workload-launcher-0.64.1.jar:?]
            at io.airbyte.workload.launcher.pods.KubePodLauncher.waitForPodReadyOrTerminal(KubePodLauncher.kt:194) ~[io.airbyte-airbyte-workload-launcher-0.64.1.jar:?]
            at io.airbyte.workload.launcher.pods.KubePodClient.waitDestinationReadyOrTerminalInit(KubePodClient.kt:252) ~[io.airbyte-airbyte-workload-launcher-0.64.1.jar:?]
            at io.airbyte.workload.launcher.pods.KubePodClient.launchReplicationPodTriplet(KubePodClient.kt:132) ~[io.airbyte-airbyte-workload-launcher-0.64.1.jar:?]
            at io.airbyte.workload.launcher.pods.KubePodClient.launchReplication(KubePodClient.kt:73) ~[io.airbyte-airbyte-workload-launcher-0.64.1.jar:?]
            at io.airbyte.workload.launcher.pipeline.stages.LaunchPodStage.applyStage(LaunchPodStage.kt:43) ~[io.airbyte-airbyte-workload-launcher-0.64.1.jar:?]
            at io.airbyte.workload.launcher.pipeline.stages.LaunchPodStage.applyStage(LaunchPodStage.kt:24) ~[io.airbyte-airbyte-workload-launcher-0.64.1.jar:?]
            at io.airbyte.workload.launcher.pipeline.stages.model.Stage.apply(Stage.kt:42) ~[io.airbyte-airbyte-workload-launcher-0.64.1.jar:?]
            ... 53 more

    Jonathan Golden

    09/06/2024, 2:50 PM
I got Airbyte to launch with a Helm configuration that uses GCS instead of MinIO (at least I believe so); however, when I go to sync, I can't get my pods to start:
    io.airbyte.workload.launcher.pipeline.stages.model.StageError: io.airbyte.workload.launcher.pods.KubeClientException: Failed to create pod source-quickbooks-check-315-4-vxzww. at io.airbyte.workload.launcher.pipeline.stages.model.Stage.apply(Stage.kt:46) at io.airbyte.workload.launcher.pipeline.stages.LaunchPodStage.apply(LaunchPodStage.kt:38) at io.airbyte.workload.launcher.pipeline.stages.$LaunchPodStage$Definition$Intercepted.$$access$$apply(Unknown Source) at io.airbyte.workload.launcher.pipeline.stages.$LaunchPodStage$Definition$Exec.dispatch(Unknown Source) at io.micronaut.context.AbstractExecutableMethodsDefinition$DispatchedExecutableMethod.invoke(AbstractExecutableMethodsDefinition.java:456) at io.micronaut.aop.chain.MethodInterceptorChain.proceed(MethodInterceptorChain.java:129) at io.airbyte.metrics.interceptors.InstrumentInterceptorBase.doIntercept(InstrumentInterceptorBase.kt:61) at io.airbyte.metrics.interceptors.InstrumentInterceptorBase.intercept(InstrumentInterceptorBase.kt:44) at io.micronaut.aop.chain.MethodInterceptorChain.proceed(MethodInterceptorChain.java:138) at io.airbyte.workload.launcher.pipeline.stages.$LaunchPodStage$Definition$Intercepted.apply(Unknown Source) at io.airbyte.workload.launcher.pipeline.stages.LaunchPodStage.apply(LaunchPodStage.kt:24) at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:132) at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:158) at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:158) at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:158) at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2571) at reactor.core.publisher.MonoFlatMap$FlatMapMain.request(MonoFlatMap.java:194) at reactor.core.publisher.MonoFlatMap$FlatMapMain.request(MonoFlatMap.java:194) at reactor.core.publisher.MonoFlatMap$FlatMapMain.request(MonoFlatMap.java:194) at 
reactor.core.publisher.MonoFlatMap$FlatMapMain.request(MonoFlatMap.java:194) at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.set(Operators.java:2367) at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onSubscribe(FluxOnErrorResume.java:74) at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:117) at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:117) at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:117) at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:117) at reactor.core.publisher.FluxFlatMap.trySubscribeScalarMap(FluxFlatMap.java:193) at reactor.core.publisher.MonoFlatMap.subscribeOrReturn(MonoFlatMap.java:53) at reactor.core.publisher.Mono.subscribe(Mono.java:4552) at reactor.core.publisher.MonoSubscribeOn$SubscribeOnSubscriber.run(MonoSubscribeOn.java:126) at reactor.core.scheduler.ImmediateScheduler$ImmediateSchedulerWorker.schedule(ImmediateScheduler.java:84) at reactor.core.publisher.MonoSubscribeOn.subscribeOrReturn(MonoSubscribeOn.java:55) at reactor.core.publisher.Mono.subscribe(Mono.java:4552) at reactor.core.publisher.Mono.subscribeWith(Mono.java:4634) at reactor.core.publisher.Mono.subscribe(Mono.java:4395) at io.airbyte.workload.launcher.pipeline.LaunchPipeline.accept(LaunchPipeline.kt:50) at io.airbyte.workload.launcher.pipeline.consumer.LauncherMessageConsumer.consume(LauncherMessageConsumer.kt:28) at io.airbyte.workload.launcher.pipeline.consumer.LauncherMessageConsumer.consume(LauncherMessageConsumer.kt:12) at io.airbyte.commons.temporal.queue.QueueActivityImpl.consume(Internal.kt:87) at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103) at java.base/java.lang.reflect.Method.invoke(Method.java:580) at 
io.temporal.internal.activity.RootActivityInboundCallsInterceptor$POJOActivityInboundCallsInterceptor.executeActivity(RootActivityInboundCallsInterceptor.java:64) at io.temporal.internal.activity.RootActivityInboundCallsInterceptor.execute(RootActivityInboundCallsInterceptor.java:43) at io.temporal.common.interceptors.ActivityInboundCallsInterceptorBase.execute(ActivityInboundCallsInterceptorBase.java:39) at io.temporal.opentracing.internal.OpenTracingActivityInboundCallsInterceptor.execute(OpenTracingActivityInboundCallsInterceptor.java:78) at io.temporal.internal.activity.ActivityTaskExecutors$BaseActivityTaskExecutor.execute(ActivityTaskExecutors.java:107) at io.temporal.internal.activity.ActivityTaskHandlerImpl.handle(ActivityTaskHandlerImpl.java:124) at io.temporal.internal.worker.ActivityWorker$TaskHandlerImpl.handleActivity(ActivityWorker.java:278) at io.temporal.internal.worker.ActivityWorker$TaskHandlerImpl.handle(ActivityWorker.java:243) at io.temporal.internal.worker.ActivityWorker$TaskHandlerImpl.handle(ActivityWorker.java:216) at io.temporal.internal.worker.PollTaskExecutor.lambda$process$0(PollTaskExecutor.java:105) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) at java.base/java.lang.Thread.run(Thread.java:1583) Caused by: io.airbyte.workload.launcher.pods.KubeClientException: Failed to create pod source-quickbooks-check-315-4-vxzww. at io.airbyte.workload.launcher.pods.KubePodClient.launchConnectorWithSidecar(KubePodClient.kt:352) at io.airbyte.workload.launcher.pods.KubePodClient.launchCheck(KubePodClient.kt:279) at io.airbyte.workload.launcher.pipeline.stages.LaunchPodStage.applyStage(LaunchPodStage.kt:44) at io.airbyte.workload.launcher.pipeline.stages.LaunchPodStage.applyStage(LaunchPodStage.kt:24) at io.airbyte.workload.launcher.pipeline.stages.model.Stage.apply(Stage.kt:42) ... 
53 more Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: PATCH at: <https://34.118.224.1:443/api/v1/namespaces/airbyte-staging/pods/source-quickbooks-check-315-4-vxzww?fieldManager=fabric8>. Message: Unauthorized. Received status: Status(apiVersion=v1, code=401, details=null, kind=Status, message=Unauthorized, metadata=ListMeta(_continue=null, remainingItemCount=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=Unauthorized, status=Failure, additionalProperties={}). at io.fabric8.kubernetes.client.KubernetesClientException.copyAsCause(KubernetesClientException.java:238) at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.waitForResult(OperationSupport.java:507) at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.handleResponse(OperationSupport.java:524) at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.handlePatch(OperationSupport.java:419) at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.handlePatch(OperationSupport.java:397) at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.handlePatch(BaseOperation.java:764) at io.fabric8.kubernetes.client.dsl.internal.HasMetadataOperation.lambda$patch$2(HasMetadataOperation.java:231) at io.fabric8.kubernetes.client.dsl.internal.HasMetadataOperation.patch(HasMetadataOperation.java:236) at io.fabric8.kubernetes.client.dsl.internal.HasMetadataOperation.patch(HasMetadataOperation.java:251) at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.serverSideApply(BaseOperation.java:1179) at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.serverSideApply(BaseOperation.java:98) at io.airbyte.workload.launcher.pods.KubePodLauncher$create$1.invoke(KubePodLauncher.kt:57) at io.airbyte.workload.launcher.pods.KubePodLauncher$create$1.invoke(KubePodLauncher.kt:52) at io.airbyte.workload.launcher.pods.KubePodLauncher.runKubeCommand$lambda$0(KubePodLauncher.kt:307) at 
dev.failsafe.Functions.lambda$toCtxSupplier$11(Functions.java:243) at dev.failsafe.Functions.lambda$get$0(Functions.java:46) at dev.failsafe.internal.RetryPolicyExecutor.lambda$apply$0(RetryPolicyExecutor.java:74) at dev.failsafe.SyncExecutionImpl.executeSync(SyncExecutionImpl.java:187) at dev.failsafe.FailsafeExecutor.call(FailsafeExecutor.java:376) at dev.failsafe.FailsafeExecutor.get(FailsafeExecutor.java:112) at io.airbyte.workload.launcher.pods.KubePodLauncher.runKubeCommand(KubePodLauncher.kt:307) at io.airbyte.workload.launcher.pods.KubePodLauncher.create(KubePodLauncher.kt:52) at io.airbyte.workload.launcher.pods.KubePodClient.launchConnectorWithSidecar(KubePodClient.kt:349) ... 57 more Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: PATCH at: <https://34.118.224.1:443/api/v1/namespaces/airbyte-staging/pods/source-quickbooks-check-315-4-vxzww?fieldManager=fabric8>. Message: Unauthorized. Received status: Status(apiVersion=v1, code=401, details=null, kind=Status, message=Unauthorized, metadata=ListMeta(_continue=null, remainingItemCount=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=Unauthorized, status=Failure, additionalProperties={}). 
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.requestFailure(OperationSupport.java:660) at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.requestFailure(OperationSupport.java:640) at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.assertResponseCode(OperationSupport.java:589) at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.lambda$handleResponse$0(OperationSupport.java:549) at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:646) at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510) at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2179) at io.fabric8.kubernetes.client.http.StandardHttpClient.lambda$completeOrCancel$10(StandardHttpClient.java:142) at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510) at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2179) at io.fabric8.kubernetes.client.http.ByteArrayBodyHandler.onBodyDone(ByteArrayBodyHandler.java:51) at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510) at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2179) at io.fabric8.kubernetes.client.okhttp.OkHttpClientImpl$OkHttpAsyncBody.doConsume(OkHttpClientImpl.java:136) ... 3 more
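One common cause of the repeated 401 `Unauthorized` on the PATCH call above is the workload launcher holding a stale service-account token rather than anything about the pod itself. A hedged sketch of a first remediation, assuming a deployment with this name exists in the release (verify with `kubectl get deploy -n <namespace>`; the command is printed rather than run here):

```shell
NS="airbyte-staging"                 # namespace taken from the error message
DEPLOY="airbyte-workload-launcher"   # hypothetical name; check your release
echo "kubectl -n ${NS} rollout restart deployment/${DEPLOY}"
# Running the printed command recreates the launcher pod, which makes it
# re-read its service-account token and retry the Kubernetes API calls
# with fresh credentials.
```

If the 401 persists after a restart, the service account's RBAC on pods in that namespace is the next thing to check.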

    Tigran Zalyan

    09/06/2024, 2:52 PM
Hi everyone! I just deployed Airbyte to GKE using Helm. When I try to create a source, the request times out with a 502 error code every time. I'm using an external database. Everything works fine locally. Cluster configuration: 3 nodes, each with 8 GB RAM and 2 vCPUs. Has anyone faced this issue before?

    Maksims Voikovs

    09/06/2024, 2:52 PM
@kapa.ai Getting the following error from the Kafka source when attempting to create an Airbyte connection. How do I fix this?
    Internal message: org.apache.kafka.common.KafkaException: Failed to construct kafka consumer

    Jonathan Golden

    09/06/2024, 3:26 PM
    io.airbyte.workload.launcher.pipeline.stages.model.StageError: io.airbyte.workload.launcher.pods.KubeClientException: Failed to create pod source-quickbooks-check-315-4-vxzww. at io.airbyte.workload.launcher.pipeline.stages.model.Stage.apply(Stage.kt:46) at io.airbyte.workload.launcher.pipeline.stages.LaunchPodStage.apply(LaunchPodStage.kt:38) at io.airbyte.workload.launcher.pipeline.stages.$LaunchPodStage$Definition$Intercepted.$$access$$apply(Unknown Source) at io.airbyte.workload.launcher.pipeline.stages.$LaunchPodStage$Definition$Exec.dispatch(Unknown Source) at io.micronaut.context.AbstractExecutableMethodsDefinition$DispatchedExecutableMethod.invoke(AbstractExecutableMethodsDefinition.java:456) at io.micronaut.aop.chain.MethodInterceptorChain.proceed(MethodInterceptorChain.java:129) at io.airbyte.metrics.interceptors.InstrumentInterceptorBase.doIntercept(InstrumentInterceptorBase.kt:61) at io.airbyte.metrics.interceptors.InstrumentInterceptorBase.intercept(InstrumentInterceptorBase.kt:44) at io.micronaut.aop.chain.MethodInterceptorChain.proceed(MethodInterceptorChain.java:138) at io.airbyte.workload.launcher.pipeline.stages.$LaunchPodStage$Definition$Intercepted.apply(Unknown Source) at io.airbyte.workload.launcher.pipeline.stages.LaunchPodStage.apply(LaunchPodStage.kt:24) at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:132) at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:158) at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:158) at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:158) at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2571) at reactor.core.publisher.MonoFlatMap$FlatMapMain.request(MonoFlatMap.java:194) at reactor.core.publisher.MonoFlatMap$FlatMapMain.request(MonoFlatMap.java:194) at reactor.core.publisher.MonoFlatMap$FlatMapMain.request(MonoFlatMap.java:194) at 
reactor.core.publisher.MonoFlatMap$FlatMapMain.request(MonoFlatMap.java:194) at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.set(Operators.java:2367) at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onSubscribe(FluxOnErrorResume.java:74) at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:117) at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:117) at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:117) at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:117) at reactor.core.publisher.FluxFlatMap.trySubscribeScalarMap(FluxFlatMap.java:193) at reactor.core.publisher.MonoFlatMap.subscribeOrReturn(MonoFlatMap.java:53) at reactor.core.publisher.Mono.subscribe(Mono.java:4552) at reactor.core.publisher.MonoSubscribeOn$SubscribeOnSubscriber.run(MonoSubscribeOn.java:126) at reactor.core.scheduler.ImmediateScheduler$ImmediateSchedulerWorker.schedule(ImmediateScheduler.java:84) at reactor.core.publisher.MonoSubscribeOn.subscribeOrReturn(MonoSubscribeOn.java:55) at reactor.core.publisher.Mono.subscribe(Mono.java:4552) at reactor.core.publisher.Mono.subscribeWith(Mono.java:4634) at reactor.core.publisher.Mono.subscribe(Mono.java:4395) at io.airbyte.workload.launcher.pipeline.LaunchPipeline.accept(LaunchPipeline.kt:50) at io.airbyte.workload.launcher.pipeline.consumer.LauncherMessageConsumer.consume(LauncherMessageConsumer.kt:28) at io.airbyte.workload.launcher.pipeline.consumer.LauncherMessageConsumer.consume(LauncherMessageConsumer.kt:12) at io.airbyte.commons.temporal.queue.QueueActivityImpl.consume(Internal.kt:87) at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103) at java.base/java.lang.reflect.Method.invoke(Method.java:580) at 
io.temporal.internal.activity.RootActivityInboundCallsInterceptor$POJOActivityInboundCallsInterceptor.executeActivity(RootActivityInboundCallsInterceptor.java:64) at io.temporal.internal.activity.RootActivityInboundCallsInterceptor.execute(RootActivityInboundCallsInterceptor.java:43) at io.temporal.common.interceptors.ActivityInboundCallsInterceptorBase.execute(ActivityInboundCallsInterceptorBase.java:39) at io.temporal.opentracing.internal.OpenTracingActivityInboundCallsInterceptor.execute(OpenTracingActivityInboundCallsInterceptor.java:78) at io.temporal.internal.activity.ActivityTaskExecutors$BaseActivityTaskExecutor.execute(ActivityTaskExecutors.java:107) at io.temporal.internal.activity.ActivityTaskHandlerImpl.handle(ActivityTaskHandlerImpl.java:124) at io.temporal.internal.worker.ActivityWorker$TaskHandlerImpl.handleActivity(ActivityWorker.java:278) at io.temporal.internal.worker.ActivityWorker$TaskHandlerImpl.handle(ActivityWorker.java:243) at io.temporal.internal.worker.ActivityWorker$TaskHandlerImpl.handle(ActivityWorker.java:216) at io.temporal.internal.worker.PollTaskExecutor.lambda$process$0(PollTaskExecutor.java:105) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) at java.base/java.lang.Thread.run(Thread.java:1583) Caused by: io.airbyte.workload.launcher.pods.KubeClientException: Failed to create pod source-quickbooks-check-315-4-vxzww. at io.airbyte.workload.launcher.pods.KubePodClient.launchConnectorWithSidecar(KubePodClient.kt:352) at io.airbyte.workload.launcher.pods.KubePodClient.launchCheck(KubePodClient.kt:279) at io.airbyte.workload.launcher.pipeline.stages.LaunchPodStage.applyStage(LaunchPodStage.kt:44) at io.airbyte.workload.launcher.pipeline.stages.LaunchPodStage.applyStage(LaunchPodStage.kt:24) at io.airbyte.workload.launcher.pipeline.stages.model.Stage.apply(Stage.kt:42) ... 
53 more Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: PATCH at: <https://34.118.224.1:443/api/v1/namespaces/airbyte-staging/pods/source-quickbooks-check-315-4-vxzww?fieldManager=fabric8>. Message: Unauthorized. Received status: Status(apiVersion=v1, code=401, details=null, kind=Status, message=Unauthorized, metadata=ListMeta(_continue=null, remainingItemCount=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=Unauthorized, status=Failure, additionalProperties={}). at io.fabric8.kubernetes.client.KubernetesClientException.copyAsCause(KubernetesClientException.java:238) at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.waitForResult(OperationSupport.java:507) at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.handleResponse(OperationSupport.java:524) at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.handlePatch(OperationSupport.java:419) at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.handlePatch(OperationSupport.java:397) at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.handlePatch(BaseOperation.java:764) at io.fabric8.kubernetes.client.dsl.internal.HasMetadataOperation.lambda$patch$2(HasMetadataOperation.java:231) at io.fabric8.kubernetes.client.dsl.internal.HasMetadataOperation.patch(HasMetadataOperation.java:236) at io.fabric8.kubernetes.client.dsl.internal.HasMetadataOperation.patch(HasMetadataOperation.java:251) at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.serverSideApply(BaseOperation.java:1179) at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.serverSideApply(BaseOperation.java:98) at io.airbyte.workload.launcher.pods.KubePodLauncher$create$1.invoke(KubePodLauncher.kt:57) at io.airbyte.workload.launcher.pods.KubePodLauncher$create$1.invoke(KubePodLauncher.kt:52) at io.airbyte.workload.launcher.pods.KubePodLauncher.runKubeCommand$lambda$0(KubePodLauncher.kt:307) at 
dev.failsafe.Functions.lambda$toCtxSupplier$11(Functions.java:243) at dev.failsafe.Functions.lambda$get$0(Functions.java:46) at dev.failsafe.internal.RetryPolicyExecutor.lambda$apply$0(RetryPolicyExecutor.java:74) at dev.failsafe.SyncExecutionImpl.executeSync(SyncExecutionImpl.java:187) at dev.failsafe.FailsafeExecutor.call(FailsafeExecutor.java:376) at dev.failsafe.FailsafeExecutor.get(FailsafeExecutor.java:112) at io.airbyte.workload.launcher.pods.KubePodLauncher.runKubeCommand(KubePodLauncher.kt:307) at io.airbyte.workload.launcher.pods.KubePodLauncher.create(KubePodLauncher.kt:52) at io.airbyte.workload.launcher.pods.KubePodClient.launchConnectorWithSidecar(KubePodClient.kt:349) ... 57 more Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: PATCH at: <https://34.118.224.1:443/api/v1/namespaces/airbyte-staging/pods/source-quickbooks-check-315-4-vxzww?fieldManager=fabric8>. Message: Unauthorized. Received status: Status(apiVersion=v1, code=401, details=null, kind=Status, message=Unauthorized, metadata=ListMeta(_continue=null, remainingItemCount=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=Unauthorized, status=Failure, additionalProperties={}). 
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.requestFailure(OperationSupport.java:660) at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.requestFailure(OperationSupport.java:640) at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.assertResponseCode(OperationSupport.java:589) at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.lambda$handleResponse$0(OperationSupport.java:549) at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:646) at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510) at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2179) at io.fabric8.kubernetes.client.http.StandardHttpClient.lambda$completeOrCancel$10(StandardHttpClient.java:142) at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510) at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2179) at io.fabric8.kubernetes.client.http.ByteArrayBodyHandler.onBodyDone(ByteArrayBodyHandler.java:51) at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510) at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2179) at io.fabric8.kubernetes.client.okhttp.OkHttpClientImpl$OkHttpAsyncBody.doConsume(OkHttpClientImpl.java:136) ... 3 more
    k
    • 2
    • 1
  • j

    Jonathan Golden

    09/06/2024, 3:34 PM
i have airbyte oss deployed through helm on GKE, and I overrode the storage settings to use GCS. It is writing logs to the Google bucket, but I am unable to start worker pods to sync:
    Copy code
    io.airbyte.workload.launcher.pipeline.stages.model.StageError: : io.airbyte.workload.launcher.pods.KubeClientException: Failed to create pod source-quickbooks-check-316-4-shdsw.
    	at io.airbyte.workload.launcher.pipeline.stages.model.Stage.apply(Stage.kt:46) ~[io.airbyte-airbyte-workload-launcher-0.64.1.jar:?]
    ....
    Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: <https://34.118.224.1:443/api/v1/namespaces/airbyte-staging/pods?labelSelector=auto_id%3D41f450ea-041c-4283-9137-54070666d184>. Message: Unauthorized. Received status: Status(apiVersion=v1, code=401, details=null, kind=Status, message=Unauthorized, metadata=ListMeta(_continue=null, remainingItemCount=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=Unauthorized, status=Failure, additionalProperties={}).
    is there an area i can look at?
    k
    • 2
    • 10
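A note on the error above: a repeated `401 Unauthorized` from the Kubernetes API in the workload-launcher usually means its service-account token has gone stale (a known symptom with the fabric8 client on GKE). Restarting the launcher forces a fresh token. A minimal sketch, assuming the typical Helm deployment name and the `airbyte-staging` namespace from the logs:

```shell
# Assumption: the launcher deployment is named airbyte-workload-launcher.
kubectl -n airbyte-staging rollout restart deployment/airbyte-workload-launcher
kubectl -n airbyte-staging rollout status deployment/airbyte-workload-launcher
```

If the 401s return after a while, check the launcher pod's RBAC and service-account configuration rather than the token alone.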
  • d

    Daniel Holleran

    09/06/2024, 3:47 PM
    @kapa.ai what could have caused the following error from the temporal pod in kubernetes?
    Copy code
    {
      "level": "error",
      "ts": "2024-09-06T14:05:11.139Z",
      "msg": "Fail to process task",
      "shard-id": 3,
      "address": "172.39.5.221:7234",
      "component": "transfer-queue-processor",
      "wf-namespace-id": "c547e09d-9821-4b45-ac2b-a2a80deed56b",
      "wf-id": "sync_72128",
      "wf-run-id": "c043b796-fdb2-4078-8f16-dbd67d60a005",
      "queue-task-id": 1848673375,
      "queue-task-visibility-timestamp": "2024-09-06T14:05:10.040Z",
      "queue-task-type": "TransferWorkflowTask",
      "queue-task": {
        "NamespaceID": "c547e09d-9821-4b45-ac2b-a2a80deed56b",
        "WorkflowID": "sync_72128",
        "RunID": "c043b796-fdb2-4078-8f16-dbd67d60a005",
        "VisibilityTimestamp": "2024-09-06T14:05:10.040677638Z",
        "TaskID": 1848673375,
        "TaskQueue": "1@data-prod-airbyte-worker-c977764d7-k24gh:e16d7c32-99a0-48b9-a2d7-80ac5695dce7",
        "ScheduledEventID": 35,
        "Version": 0
      },
      "wf-history-event-id": 35,
      "error": "context deadline exceeded",
      "unexpected-error-attempts": 1,
      "lifecycle": "ProcessingFailed",
      "logging-call-at": "lazy_logger.go:68",
      "stacktrace": "<http://go.temporal.io/server/common/log.(*zapLogger).Error|go.temporal.io/server/common/log.(*zapLogger).Error>\n\t/home/builder/temporal/common/log/zap_logger.go:156\ngo.temporal.io/server/common/log.(*lazyLogger).Error\n\t/home/builder/temporal/common/log/lazy_logger.go:68\ngo.temporal.io/server/service/history/queues.(*executableImpl).HandleErr\n\t/home/builder/temporal/service/history/queues/executable.go:421\ngo.temporal.io/server/common/tasks.(*FIFOScheduler[...]).executeTask.func1\n\t/home/builder/temporal/common/tasks/fifo_scheduler.go:224\ngo.temporal.io/server/common/backoff.ThrottleRetry.func1\n\t/home/builder/temporal/common/backoff/retry.go:117\ngo.temporal.io/server/common/backoff.ThrottleRetryContext\n\t/home/builder/temporal/common/backoff/retry.go:143\ngo.temporal.io/server/common/backoff.ThrottleRetry\n\t/home/builder/temporal/common/backoff/retry.go:118\ngo.temporal.io/server/common/tasks.(*FIFOScheduler[...]).executeTask\n\t/home/builder/temporal/common/tasks/fifo_scheduler.go:233\ngo.temporal.io/server/common/tasks.(*FIFOScheduler[...]).processTask\n\t/home/builder/temporal/common/tasks/fifo_scheduler.go:211"
    }
    k
    • 2
    • 1
  • t

    Túlio Lima

    09/06/2024, 3:52 PM
I am developing a source connector with the Python CDK. How do I retry when I receive a 500 error, and ignore the error after 3 failed retries?
    k
    • 2
    • 3
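For reference, the Python CDK's `HttpStream` exposes `max_retries`, `should_retry()`, and `raise_on_http_errors` to express exactly this policy: retry 5xx responses up to 3 times, then give up silently. A minimal standalone sketch of the policy in plain Python (no CDK imports; `do_request` stands in for the HTTP call):

```python
def fetch_ignoring_500(do_request, max_retries=3):
    """Retry transient 5xx responses up to max_retries times.

    Mirrors the CDK knobs: should_retry() -> status >= 500,
    raise_on_http_errors=False -> give up silently with None.
    do_request must return a (status_code, payload) tuple.
    """
    payload = None
    for _ in range(max_retries + 1):  # initial attempt + retries
        status, payload = do_request()
        if status < 500:  # success, or a non-retryable client error
            return payload
    # Retries exhausted: ignore the error instead of raising.
    return None
```

In a real connector you would set `max_retries = 3` and `raise_on_http_errors = False` on your `HttpStream` subclass and let `should_retry` return `True` for 5xx responses.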
  • a

    Amine Salhi

    09/06/2024, 4:16 PM
@kapa.ai I have Airbyte Open Source v0.53.52 deployed on Kubernetes using Helm. It had been working fine for a couple of months, but a few days ago it started throwing the following error. Any idea?
    Copy code
    message='io.temporal.serviceclient.CheckedExceptionWrapper: io.airbyte.workers.exception.WorkerException: Running the launcher replication-orchestrator failed', type='java.lang.RuntimeException', nonRetryable=false
    k
    • 2
    • 1
  • m

    Maksims Voikovs

    09/06/2024, 4:18 PM
    @kapa.ai how to configure SASL JAAS Config with SCRAM-SHA-256 for Kafka source
    k
    • 2
    • 1
  • m

    Maksims Voikovs

    09/06/2024, 4:33 PM
    @kapa.ai is this a valid JAAS config in Kafka source connector?
    Copy code
    sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required
    username="sdfdsfsdf"
    password="sdfsdfssdfs";
    k
    • 2
    • 1
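On the JAAS question above: in a Kafka properties file the `sasl.jaas.config` value must be one logical line ending with a semicolon (the line breaks in the snippet would break parsing unless continued with backslashes), and SCRAM-SHA-256 also needs `sasl.mechanism` and `security.protocol` set. A sketch with placeholder credentials; in the Airbyte Kafka source UI, the mechanism and protocol are usually selected as separate fields:

```properties
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256
# Single logical line, terminated by a semicolon; username/password are placeholders.
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="myuser" password="mypass";
```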
  • j

    Jonathan Golden

    09/06/2024, 5:20 PM
ok, getting a pod error: MountVolume.SetUp failed for volume "airbyte-secret" : secret "airbyte-config-secrets" not found. Where do I set these in the YAML config for pods or workers?
    k
    • 2
    • 1
  • d

    Daniel Holleran

    09/06/2024, 6:06 PM
the following logs are from the source Postgres pod. What is causing the issue?
    Copy code
    2024/09/06 16:59:41 socat[8] N reading from and writing to stdio
    2024/09/06 16:59:41 socat[8] N opening connection to AF=2 172.39.6.239:9878
    2024/09/06 16:59:41 socat[8] N successfully connected from local address AF=2 172.39.2.112:54156
    2024/09/06 16:59:41 socat[8] N starting data transfer loop with FDs [0,1] and [5,5]
    2024/09/06 17:18:30 socat[8] E write(5, 0x7f7126ebf000, 8192): Connection reset by peer
    2024/09/06 17:18:30 socat[8] N exit(1)
    k
    • 2
    • 1
  • c

    Curtis

    09/06/2024, 6:11 PM
    @kapa.ai I keep seeing error messages like this in my logs: Storage backend has reached its minimum free drive threshold. Please delete a few objects to proceed. (Service: S3, Status Code: 507, Request ID: 17F2B9AE430988EC, Extended Request ID: dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8) 2024-09-06 170733 ERROR c.v.l.BufferPublisher(endPublish):69 - Cannot end publish with com.van.logging.aws.S3PublishHelper@31ae562 due to error java.lang.RuntimeException: Cannot end publishing: Cannot publish to S3: Storage backend has reached its minimum free drive threshold. Please delete a few objects to proceed. (Service: Amazon S3; Status Code: 507; Error Code: XMinioStorageFull; Request ID: 17F2B6AB24630BBA; S3 Extended Request ID: dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8; Proxy: null) at com.van.logging.AbstractFilePublishHelper.end(AbstractFilePublishHelper.java:66) ~[appender-core-5.3.2.jar:?] at com.van.logging.BufferPublisher.endPublish(BufferPublisher.java:67) ~[appender-core-5.3.2.jar:?] at com.van.logging.LoggingEventCache.publishEventsFromFile(LoggingEventCache.java:198) ~[appender-core-5.3.2.jar:?] at com.van.logging.LoggingEventCache.lambda$publishCache$0(LoggingEventCache.java:243) ~[appender-core-5.3.2.jar:?] at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572) ~[?:?] at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) ~[?:?] at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?] at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?] at java.base/java.lang.Thread.run(Thread.java:1583) [?:?] Caused by: java.lang.RuntimeException: Cannot publish to S3: Storage backend has reached its minimum free drive threshold. Please delete a few objects to proceed. 
(Service: Amazon S3; Status Code: 507; Error Code: XMinioStorageFull; Request ID: 17F2B6AB24630BBA; S3 Extended Request ID: dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8; Proxy: null) at com.van.logging.aws.S3PublishHelper.publishFile(S3PublishHelper.java:131) ~[appender-core-5.3.2.jar:?] at com.van.logging.AbstractFilePublishHelper.end(AbstractFilePublishHelper.java:61) ~[appender-core-5.3.2.jar:?] ... 8 more Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Storage backend has reached its minimum free drive threshold. Please delete a few objects to proceed. (Service: Amazon S3; Status Code: 507; Error Code: XMinioStorageFull; Request ID: 17F2B6AB24630BBA; S3 Extended Request ID: dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8; Proxy: null) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1880) ~[aws-java-sdk-core-1.12.770.jar:?] at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleServiceErrorResponse(AmazonHttpClient.java:1418) ~[aws-java-sdk-core-1.12.770.jar:?] at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1387) ~[aws-java-sdk-core-1.12.770.jar:?] at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1157) ~[aws-java-sdk-core-1.12.770.jar:?] at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:814) ~[aws-java-sdk-core-1.12.770.jar:?] at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:781) ~[aws-java-sdk-core-1.12.770.jar:?] at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:755) ~[aws-java-sdk-core-1.12.770.jar:?] at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:715) ~[aws-java-sdk-core-1.12.770.jar:?] 
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:697) ~[aws-java-sdk-core-1.12.770.jar:?] at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:561) ~[aws-java-sdk-core-1.12.770.jar:?] at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:541) ~[aws-java-sdk-core-1.12.770.jar:?] at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5575) ~[aws-java-sdk-s3-1.12.770.jar:?] at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5522) ~[aws-java-sdk-s3-1.12.770.jar:?] at com.amazonaws.services.s3.AmazonS3Client.access$300(AmazonS3Client.java:425) ~[aws-java-sdk-s3-1.12.770.jar:?] at com.amazonaws.services.s3.AmazonS3Client$PutObjectStrategy.invokeServiceCall(AmazonS3Client.java:6656) ~[aws-java-sdk-s3-1.12.770.jar:?] at com.amazonaws.services.s3.AmazonS3Client.uploadObject(AmazonS3Client.java:1908) ~[aws-java-sdk-s3-1.12.770.jar:?] at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1868) ~[aws-java-sdk-s3-1.12.770.jar:?] at com.van.logging.aws.S3PublishHelper.publishFile(S3PublishHelper.java:126) ~[appender-core-5.3.2.jar:?] at com.van.logging.AbstractFilePublishHelper.end(AbstractFilePublishHelper.java:61) ~[appender-core-5.3.2.jar:?] ... 8 more What s3 backend is being referred to? How can I troubleshoot this?
    k
    • 2
    • 6
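On the `XMinioStorageFull` error above: the S3 backend being referred to is the bundled MinIO pod that Airbyte uses by default for logs and state, not AWS S3, and its volume is full. One way to troubleshoot is to check disk usage on that pod; the pod name and mount path below are assumptions for a typical Helm install, so verify them in your cluster:

```shell
kubectl -n airbyte get pods | grep minio
# Mount path is an assumption -- adjust to what `kubectl describe pod` shows.
kubectl -n airbyte exec -it airbyte-minio-0 -- df -h /storage
```

Freeing space (or switching log storage to external S3/GCS) resolves the 507 responses.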
  • c

    Curtis

    09/06/2024, 6:14 PM
    @kapa.ai I keep seeing this error as well:
    Copy code
    message='io.airbyte.workers.exception.WorkloadLauncherException: io.airbyte.workload.launcher.pipeline.stages.model.StageError: io.airbyte.workload.launcher.pods.KubeClientException: Main container of orchestrator pod failed to start within allotted timeout of 60 seconds.
    How can I change the allotted timeout for starting pods?
    k
    • 2
    • 1
  • j

    Jefferson da Silva Martins

    09/06/2024, 6:16 PM
Can you help me with this error?
    Copy code
    Message: HikariPool-1 - Connection is not available, request timed out after 60001ms (total=0, active=0, idle=0, waiting=0)
It happens when creating a MySQL source.
    k
    • 2
    • 1
  • g

    gonsbi

    09/06/2024, 6:18 PM
    I receive a
    {"message":"Jwt is missing","code":401}
when I try to create a new Google Ads source via Terraform (in my Airbyte OSS). Can you help me?
    k
    • 2
    • 1
  • l

    Luke Miles

    09/06/2024, 6:45 PM
I am writing a CDK connector for an API that requires me to log out after we pull data. Is there an event or method for me to do that easily, or will I need to switch to using the Stream base class instead of HttpStream?
    k
    • 2
    • 1
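One pattern for the logout question above, without abandoning `HttpStream`: wrap record iteration in `try`/`finally` so a logout call always runs once reading finishes, even on errors or early termination. A minimal sketch; `_logout` and the plain-iterable signature are illustrative stand-ins, and in a real subclass you would `yield from super().read_records(...)` inside the `try`:

```python
from typing import Any, Iterable, Mapping


class LogoutOnFinish:
    """Sketch: yield records, then always call _logout() afterwards."""

    logged_out = False

    def _logout(self) -> None:
        # Hypothetical: POST to the API's logout endpoint here.
        self.logged_out = True

    def read_records(
        self, source: Iterable[Mapping[str, Any]]
    ) -> Iterable[Mapping[str, Any]]:
        try:
            yield from source
        finally:
            # Runs on normal exhaustion, on error, and on generator close.
            self._logout()
```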
  • k

    KRISHIV GUBBA

    09/06/2024, 7:06 PM
I've made a connector in the Connector Builder; now how do I make a Docker image of it?
    k
    • 2
    • 1
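One documented route for the question above is to export the Builder connector's YAML manifest and package it on top of the `airbyte/source-declarative-manifest` base image. A sketch; the in-image manifest path is an assumption, so verify it against the base image and current docs:

```dockerfile
FROM airbyte/source-declarative-manifest:latest
# Path inside the image is an assumption -- verify against the base image.
COPY manifest.yaml /airbyte/integration_code/source_declarative_manifest/manifest.yaml
```

Then build and tag it, e.g. `docker build -t my-source:dev .`, and register the image as a custom connector in the Airbyte UI.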
  • j

    Jonathan Golden

    09/06/2024, 7:23 PM
The pod launcher says: airbyte-config-secrets not found. Am I supposed to create this?
    k
    • 2
    • 1
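Yes: `airbyte-config-secrets` is not created by the chart itself; the deployment docs expect you to create it and reference its keys from `values.yaml`. A sketch for a GCS setup; the namespace and the `gcp.json` key name are assumptions to match against your values file:

```shell
kubectl -n airbyte-staging create secret generic airbyte-config-secrets \
  --from-file=gcp.json=/path/to/service-account.json
```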
  • j

    Jonathan Golden

    09/06/2024, 7:42 PM
How do I add GCS storage to the YAML config for a Helm deploy of OSS Airbyte so that workers launch correctly?
    k
    • 2
    • 6
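In recent chart versions, log/state storage is configured under `global.storage`. A sketch for GCS; the bucket names and project ID are placeholders, and the exact key names should be checked against your chart version's values reference:

```yaml
global:
  storage:
    type: gcs
    secretName: airbyte-config-secrets   # must exist; holds the gcp.json key
    bucket:
      log: my-airbyte-bucket             # placeholder bucket name
      state: my-airbyte-bucket
      workloadOutput: my-airbyte-bucket
    gcs:
      projectId: my-gcp-project          # placeholder project ID
      credentialsPath: /secrets/gcs-log-creds/gcp.json
```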
  • c

    Curtis

    09/06/2024, 8:22 PM
    @kapa.ai I need abctl to pull docker images using authentication (otherwise I hit docker rate limits). I've tried using this command to create a kubernetes secret:
    Copy code
    kubectl create secret docker-registry regcred --docker-username=<redacted> --docker-password=<redacted> --docker-email=<redacted>
    error: failed to create secret Post "<https://127.0.0.1:44713/api/v1/namespaces/airbyte-abctl/secrets?fieldManager=kubectl-create&fieldValidation=Strict>": dial tcp 127.0.0.1:44713: connect: connection refused
    k
    • 2
    • 1
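On the `connection refused` above: it suggests kubectl is not pointed at abctl's kind cluster at all, so the request never reaches the API server. One approach is to grab that cluster's kubeconfig explicitly and create the secret in the `airbyte-abctl` namespace; the kind cluster name below is an assumption for a default abctl install:

```shell
kind get kubeconfig --name airbyte-abctl > abctl.kubeconfig
KUBECONFIG=abctl.kubeconfig kubectl -n airbyte-abctl \
  create secret docker-registry regcred \
  --docker-username=<user> --docker-password=<token> --docker-email=<email>
```

The secret then still needs to be referenced as an `imagePullSecrets` entry in the values passed to abctl.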
  • g

    gonsbi

    09/06/2024, 8:35 PM
    I receive a
    {"message":"Jwt is missing","code":401}
when I try to create a new source via Terraform (in my Airbyte OSS). Looking at the requests, Terraform tries to connect to the cloud API, but I have my own instance running on localhost. I also set
    server_url = "<http://localhost.com/v1/>"
in my airbyte provider. Can you help me?
    k
    • 2
    • 1
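For the Terraform question above: with the community `airbytehq/airbyte` provider, a local OSS instance is targeted via `server_url` on the provider block, and the URL should point at the API path of your own host (`localhost.com` in the snippet looks like a typo for a real hostname). A sketch; the auth attribute names vary by provider version (some use `bearer_auth`), so treat them as assumptions and check the provider docs:

```hcl
provider "airbyte" {
  # Point at your local OSS API, not Airbyte Cloud.
  # Path is an assumption for recent OSS versions.
  server_url = "http://localhost:8000/api/public/v1/"

  # Assumption: basic-auth attributes; some versions expect bearer_auth.
  username = "airbyte"
  password = "password"
}
```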