# ask-ai

    Allan Jarina

    11/06/2025, 12:11 AM
I didn't have any luck installing Airbyte on EC2 via docker-compose or abctl (kind). I've been switching back and forth between the two and still get the same issue where the bootloader/Helm is using 2.0.1. Any recommendation for installing Airbyte on EC2?
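A minimal sketch of the documented abctl quickstart on a fresh EC2 host, assuming Docker is already installed and the instance is reasonably sized; the flags are optional:

```bash
# Hedged sketch: the documented abctl install path.
curl -LsfS https://get.airbyte.com | bash -   # installs the abctl binary
abctl local install --verbose                 # add --low-resource-mode on small instances
abctl local credentials                       # prints the generated UI login credentials
```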

    Poorna Premachandra

    11/06/2025, 4:43 AM
@kapa.ai any reason for this CDC-based sync failure from RDS MySQL to Snowflake?
```
2025-11-06 17:30:18 replication-orchestrator INFO sync summary: {
  "status" : "failed",
  "recordsSynced" : 0,
  "bytesSynced" : 0,
  "startTime" : 1762402474422,
  "endTime" : 1762403418051,
  "totalStats" : {
    "bytesCommitted" : 0,
    "bytesEmitted" : 6167875156,
    "destinationStateMessagesEmitted" : 1,
    "destinationWriteEndTime" : 1762403417991,
    "destinationWriteStartTime" : 1762402474640,
    "meanSecondsBeforeSourceStateMessageEmitted" : 0,
    "maxSecondsBeforeSourceStateMessageEmitted" : 1,
    "maxSecondsBetweenStateMessageEmittedandCommitted" : 29,
    "meanSecondsBetweenStateMessageEmittedandCommitted" : 29,
    "recordsEmitted" : 18861913,
    "recordsCommitted" : 0,
    "recordsFilteredOut" : 0,
    "bytesFilteredOut" : 0,
    "replicationEndTime" : 1762403418049,
    "replicationStartTime" : 1762402474422,
    "sourceReadEndTime" : 0,
    "sourceReadStartTime" : 1762402474640,
    "sourceStateMessagesEmitted" : 1
  },
  "streamStats" : [ {
    "streamName" : "edge",
    "streamNamespace" : "graphp",
    "stats" : {
      "bytesCommitted" : 0,
      "bytesEmitted" : 6167875156,
      "recordsEmitted" : 18861913,
      "recordsCommitted" : 0,
      "recordsFilteredOut" : 0,
      "bytesFilteredOut" : 0
    }
  } ],
  "performanceMetrics" : {
    "processFromSource" : { "elapsedTimeInNanos" : 57112729992, "executionCount" : 18861917, "avgExecTimeInNanos" : 3027.9387822563317 },
    "readFromSource" : { "elapsedTimeInNanos" : 154360742074, "executionCount" : 18862170, "avgExecTimeInNanos" : 8183.615250737323 },
    "processFromDest" : { "elapsedTimeInNanos" : 1504523, "executionCount" : 1, "avgExecTimeInNanos" : 1504523.0 },
    "writeToDest" : { "elapsedTimeInNanos" : 878412985999, "executionCount" : 18861914, "avgExecTimeInNanos" : 46570.723734558436 },
    "readFromDest" : { "elapsedTimeInNanos" : 943331806713, "executionCount" : 749, "avgExecTimeInNanos" : 1.2594550156381843E9 }
  }
}
2025-11-06 17:30:18 replication-orchestrator INFO failures: [ {
  "failureOrigin" : "source",
  "failureType" : "transient_error",
  "internalMessage" : "java.sql.SQLException: Query execution was interrupted, maximum statement execution time exceeded",
  "externalMessage" : "MySQL Query Timeout: The sync was aborted because the query took too long to return results, will retry.\nhttps://dev.mysql.com/doc/mysql-errors/5.7/en/server-error-reference.html#error_er_query_timeout",
  "metadata" : { "attemptNumber" : 0, "jobId" : 9774, "from_trace_message" : true, "connector_command" : "read" },
  "stacktrace" : "java.sql.SQLException: Query execution was interrupted, maximum statement execution time exceeded\n\tat com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:121)\n\tat com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:114)\n\tat com.mysql.cj.jdbc.result.ResultSetImpl.next(ResultSetImpl.java:1833)\n\tat io.airbyte.cdk.read.JdbcSelectQuerier$Result.hasNext(SelectQuerier.kt:115)\n\tat io.airbyte.cdk.read.JdbcNonResumablePartitionReader.run(JdbcPartitionReader.kt:141)\n\tat io.airbyte.cdk.read.FeedReader$readPartitionWithResources$4.invokeSuspend(FeedReader.kt:237)\n\tat io.airbyte.cdk.read.FeedReader$readPartitionWithResources$4.invoke(FeedReader.kt)\n\tat io.airbyte.cdk.read.FeedReader$readPartitionWithResources$4.invoke(FeedReader.kt)\n\tat kotlinx.coroutines.intrinsics.UndispatchedKt.startUndispatchedOrReturnIgnoreTimeout(Undispatched.kt:72)\n\tat kotlinx.coroutines.TimeoutKt.setupTimeout(Timeout.kt:148)\n\tat kotlinx.coroutines.TimeoutKt.withTimeout(Timeout.kt:43)\n\tat kotlinx.coroutines.TimeoutKt.withTimeout-KLykuaI(Timeout.kt:71)\n\tat io.airbyte.cdk.read.FeedReader.readPartitionWithResources(FeedReader.kt:237)\n\tat io.airbyte.cdk.read.FeedReader.access$readPartitionWithResources(FeedReader.kt:35)\n\tat io.airbyte.cdk.read.FeedReader$asyncReadPartition$readerJob$1.invokeSuspend(FeedReader.kt:193)\n\tat kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)\n\tat kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:104)\n\tat kotlinx.coroutines.internal.LimitedDispatcher$Worker.run(LimitedDispatcher.kt:111)\n\tat kotlinx.coroutines.scheduling.TaskImpl.run(Tasks.kt:99)\n\tat kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:584)\n\tat kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:811)\n\tat kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:715)\n\tat kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:702)\n",
  "timestamp" : 1762403385894
}, {
  "failureOrigin" : "source",
  "internalMessage" : "Source process exited with non-zero exit code 1",
  "externalMessage" : "Something went wrong within the source connector",
  "metadata" : { "attemptNumber" : 0, "jobId" : 9774, "connector_command" : "read" },
  "stacktrace" : "io.airbyte.workers.internal.exception.SourceException: Source process exited with non-zero exit code 1\n\tat io.airbyte.workers.general.BufferedReplicationWorker.readFromSource(BufferedReplicationWorker.java:365)\n\tat io.airbyte.workers.general.BufferedReplicationWorker.lambda$runAsyncWithHeartbeatCheck$3(BufferedReplicationWorker.java:223)\n\tat java.base/java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1804)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)\n\tat java.base/java.lang.Thread.run(Thread.java:1583)\n",
  "timestamp" : 1762403386006
} ]
```
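The failure above is MySQL's max_execution_time cutting the initial read short. A hedged sketch for checking and raising the server-side cap (on RDS the persistent fix is the parameter group; host and credentials are placeholders):

```bash
# max_execution_time is in milliseconds; 0 means no limit.
mysql -h <rds-endpoint> -u <user> -p -e "SELECT @@global.max_execution_time;"
# Raising (or clearing) it may let the long initial CDC snapshot finish:
mysql -h <rds-endpoint> -u <user> -p -e "SET GLOBAL max_execution_time = 0;"
```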

    Aviad Deri

    11/06/2025, 6:56 AM
@kapa.ai trying to downgrade from 2.0.1 to 1.6.4 using the command "abctl local install --chart-version 1.6.4 --secret secrets.yaml --values values.yaml". This is the values.yaml:
```yaml
postgresql:
  enabled: false
global:
  database:
    type: external
    # -- Secret name where database credentials are stored
    secretName: "airbyte-config-secrets" # e.g. "airbyte-config-secrets"
    # -- The database host
    host: "10.100.8.13"
    # -- The database port
    port: "5432"
    # -- The database name
    database: "airbyteqa"
    # -- The database user
    userSecretKey: "database-user" # e.g. "database-user"
    # -- The key within secretName where password is stored
    passwordSecretKey: "database-password" # e.g. "database-password"
temporal:
  extraEnv:
    - name: SKIP_DB_CREATE
      value: "true"
    - name: DBNAME
      value: temporal_qa
    - name: VISIBILITY_DBNAME
      value: temporal_visibility_qa
    - name: POSTGRES_TLS_ENABLED
      value: "false"
    - name: SQL_TLS_ENABLED
      value: "false"
```
But I am getting this error:
```
Starting Helm Chart installation of 'airbyte/airbyte' (version: 1.6.4)
ERROR Failed to install airbyte/airbyte Helm Chart
ERROR Unable to install Airbyte locally
ERROR unable to install airbyte chart: unable to install helm:
cannot patch "airbyte-abctl-connector-builder-server" with kind Deployment: Deployment.apps "airbyte-abctl-connector-builder-server" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/instance":"airbyte-abctl", "app.kubernetes.io/name":"connector-builder-server"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
&& cannot patch "airbyte-abctl-cron" with kind Deployment: Deployment.apps "airbyte-abctl-cron" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/instance":"airbyte-abctl", "app.kubernetes.io/name":"cron"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
&& cannot patch "airbyte-abctl-server" with kind Deployment: Deployment.apps "airbyte-abctl-server" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/instance":"airbyte-abctl", "app.kubernetes.io/name":"server"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
&& failed to create patch: The order in patch list: [map[name:AUTO_SETUP value:true] map[name:DB value:postgres12] map[name:DYNAMIC_CONFIG_FILE_PATH value:config/dynamicconfig/development.yaml] map[name:POSTGRES_TLS_ENABLED value:false] map[name:POSTGRES_TLS_ENABLED value:true] map[name:POSTGRES_TLS_DISABLE_HOST_VERIFICATION value:true] map[name:SQL_TLS_ENABLED value:true] map[name:SQL_TLS_DISABLE_HOST_VERIFICATION value:true]] doesn't match $setElementOrder list: [map[name:AUTO_SETUP] map[name:DB] map[name:DYNAMIC_CONFIG_FILE_PATH] map[name:POSTGRES_SEEDS] map[name:DB_PORT] map[name:POSTGRES_USER] map[name:POSTGRES_PWD] map[name:POSTGRES_TLS_ENABLED] map[name:POSTGRES_TLS_DISABLE_HOST_VERIFICATION] map[name:SQL_TLS_ENABLED] map[name:SQL_TLS_DISABLE_HOST_VERIFICATION] map[name:SKIP_DB_CREATE] map[name:DBNAME] map[name:VISIBILITY_DBNAME] map[name:POSTGRES_TLS_ENABLED] map[name:SQL_TLS_ENABLED] map[name:AIRBYTE_INSTALLATION_ID]]
&& cannot patch "airbyte-abctl-worker" with kind Deployment: Deployment.apps "airbyte-abctl-worker" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/instance":"airbyte-abctl", "app.kubernetes.io/name":"worker"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
&& cannot patch "airbyte-abctl-workload-api-server" with kind Deployment: Deployment.apps "airbyte-abctl-workload-api-server" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/instance":"airbyte-abctl", "app.kubernetes.io/name":"workload-api-server"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
&& cannot patch "airbyte-abctl-workload-launcher" with kind Deployment: Deployment.apps "airbyte-abctl-workload-launcher" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/instance":"airbyte-abctl", "app.kubernetes.io/name":"workload-launcher"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
```
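Deployment spec.selector is immutable, so Helm cannot patch across chart versions that changed selector labels. A hedged sketch of one workaround, assuming the config/jobs data lives in the external Postgres shown in values.yaml (a 2.0.x to 1.6.x downgrade may still hit database migration incompatibilities):

```bash
# Remove the kind cluster and its deployments, then install the older chart fresh.
abctl local uninstall
abctl local install --chart-version 1.6.4 --secret secrets.yaml --values values.yaml
```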

    Sean Stach

    11/06/2025, 8:31 AM
@kapa.ai for some reason, every time I restart abctl I have to chmod 777 ~/.airbyte/abctl and ~/.airbyte/abctl/data/airbyte-volume-db. Would it be better to just do it for the entire .airbyte folder? And how is it even reverting?
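A hedged sketch of a less blunt fix, assuming the permissions revert because the kind node container writes those paths as root on restart (an assumption, not confirmed):

```bash
# Own the whole abctl state tree once instead of re-chmodding pieces to 777.
sudo chown -R "$USER":"$USER" ~/.airbyte/abctl
chmod -R u+rwX ~/.airbyte/abctl   # X: makes directories traversable without touching file modes
```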

    Christian Malan

    11/06/2025, 9:39 AM
@kapa.ai is there currently a connector for Azure Event Hubs? What would you recommend here? Have there been any discussions about this?

    Renu Fulmali

    11/06/2025, 9:43 AM
Hi @kapa.ai, I upgraded Airbyte to version 1.8.5, but now I keep getting a 502 error in the UI whenever there is an issue with the source or destination configuration, and the UI doesn't show the actual error.

    Gaurav Jain

    11/06/2025, 11:17 AM
@kapa.ai getting this error when creating a MongoDB source: "Failed to save MongoDb UAT due to the following error: errors.http.default"

    Neeraj N

    11/06/2025, 11:41 AM
```python
connection_payload = {
    "name": "Testing MySQL Database → S3",
    "sourceId": "a384af5c-d9d5-4c77-93bc-0245430c1813",
    "destinationId": "cee59ca5-dfb2-4dab-8fb8-14ab2a3278ab",
    "status": "active",
    "namespaceDefinition": "destination",
    "namespaceFormat": "${SOURCE_NAMESPACE}",
    "prefix": "",
    "nonBreakingChangesPreference": "propagate_columns",
    "geography": "",
    "syncCatalog": {
        "streams": [
            {
                "stream": {
                    "name": "card",
                    "jsonSchema": {
                        "type": "object",
                        "properties": {
                            "STATUS_ENUM_ID": {"airbyte_type": "integer", "type": "number"},
                            "LOADTIME": {"format": "date-time", "airbyte_type": "timestamp_with_timezone", "type": "string"},
                            "ACCOUNT_ID": {"airbyte_type": "integer", "type": "number"},
                            "BARCODE": {"type": "string"},
                            "MODIFIED_DATETIME": {"format": "date-time", "airbyte_type": "timestamp_without_timezone", "type": "string"},
                            "BATCH_ID": {"airbyte_type": "integer", "type": "number"},
                            "TEMPLATE_ID": {"airbyte_type": "integer", "type": "number"},
                            "CARD_NUMBER": {"type": "string"},
                            "TRACK_TWO": {"type": "string"},
                            "CREATED_DATETIME": {"format": "date-time", "airbyte_type": "timestamp_without_timezone", "type": "string"},
                            "PRINTED_CARD_NUMBER": {"type": "string"},
                            "TRACK_ONE": {"type": "string"},
                            "CARD_ID": {"airbyte_type": "integer", "type": "number"},
                            "MERCHANT_ID": {"airbyte_type": "integer", "type": "number"},
                        },
                    },
                    "supportedSyncModes": ["full_refresh", "incremental"],
                    "sourceDefinedCursor": False,
                    "defaultCursorField": [],
                    "sourceDefinedPrimaryKey": [],
                    "namespace": "mydb",
                    "isResumable": False,
                },
                "config": {
                    "syncMode": "incremental",
                    "destinationSyncMode": "append",
                    "cursorField": ["CARD_ID"],
                    "selected": True,
                    "suggested": False,
                    "fieldSelectionEnabled": False,
                    "selectedFields": [],
                    "hashedFields": [],
                    "mappers": [],
                    "aliasName": "card",
                    "primaryKey": [],
                },
            }
        ]
    },
    "notifySchemaChanges": True,
    "backfillPreference": "disabled",
    "tags": [],
    "scheduleType": "basic",
    "scheduleData": {"basicSchedule": {"units": 24, "timeUnit": "hours"}},
    "operationIds": [],
    "sourceCatalogId": "bbfcae7e-ddfc-4523-8891-270597a61bc9",
}
```
The payload fails without jsonSchema. Is ingestion possible without it? I still need cursorField.
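One way to avoid hand-writing jsonSchema is to reuse the discovered catalog and keep cursorField only under config. A hedged sketch against the internal Config API (host, port, and auth are placeholders; verify the endpoint for your version):

```bash
curl -s -X POST http://<airbyte-host>:8000/api/v1/sources/discover_schema \
  -H "Content-Type: application/json" \
  -d '{"sourceId": "a384af5c-d9d5-4c77-93bc-0245430c1813"}'
# Copy the returned catalog's stream.jsonSchema into syncCatalog and set
# config.cursorField = ["CARD_ID"]; the cursor lives in config, not the schema.
```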

    Kothapalli Venkata Avinash

    11/06/2025, 11:42 AM
@kapa.ai, how do I increase the delay between retry attempts?
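A hedged sketch for platform-level retry backoff, assuming the question is about sync job retries; the variable names below are from the "Configuring Airbyte" jobs documentation and should be verified for your version:

```bash
# Seconds between retries of completely failed attempts (exponential backoff).
export SYNC_JOB_RETRIES_COMPLETE_FAILURES_BACKOFF_MIN_INTERVAL_S=60
export SYNC_JOB_RETRIES_COMPLETE_FAILURES_BACKOFF_MAX_INTERVAL_S=3600
export SYNC_JOB_RETRIES_COMPLETE_FAILURES_BACKOFF_BASE=3
```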

    Ilan Gresserman

    11/06/2025, 1:23 PM
@kapa.ai are these GCS values correct?
```yaml
global:
  # -- Service Account name override
  serviceAccountName: &service-account-name airbyte-admin
  # -- Edition; "community" or "enterprise"
  edition: community
  secretName: ""
  local: false
  cluster:
    type: hybrid # "control-plane" | "data-plane" | "hybrid"
    secretName: ""
    controlPlane:
      bootstrap:
        dataPlane:
          secretName: ""
          clientIdSecretKey: ""
          clientSecretSecretKey: ""
    dataPlane:
      controlPlaneAuthEndpoint: ""
  enterprise:
    # -- Secret name where an Airbyte license key is stored
    secretName: ""
    # -- The key within licenseKeySecretName where the Airbyte license key is stored
    licenseKeySecretKey: ""
  # -- The URL where Airbyte will be reached; This should match your Ingress host
  airbyteUrl: ""
  # Docker image config that will apply to all images.
  image:
    # Docker registry to pull platform images from, e.g. http://my-registry:8000/
    registry: ""
    # Image tag to use for airbyte images.
    # Does not include non-airbyte images such as temporal, minio, etc.
    tag: ""
  # Docker image pull secret
  imagePullSecrets: []
  # -- Auth configuration
  auth:
    # -- Whether auth is enabled
    enabled: false
    # -- Admin user configuration
    instanceAdmin:
      # -- The first name of the initial user
      firstName: ""
      # -- The last name of the initial user
      lastName: ""
      # -- The key within emailSecretName where the initial user's email is stored
      emailSecretKey: ""
      # -- The key within passwordSecretName where the initial user's password is stored
      passwordSecretKey: ""
    # -- SSO Identity Provider configuration; (requires Enterprise)
    identityProvider:
      # -- Secret name where the OIDC configuration is stored
      secretName: ""
      # -- The identity provider type (e.g. oidc)
      type: "" # one of: "oidc" or "generic-oidc"
      # -- OIDC configuration (required if auth.identityProvider.type is "oidc")
      #oidc:
      #  # -- OIDC application domain
      #  domain: ""
      #  # -- OIDC application name
      #  appName: ""
      #  # -- OIDC application display name
      #  displayName: ""
      #  # -- The key within clientIdSecretName where the OIDC client id is stored
      #  clientIdSecretKey: ""
      #  # -- The key within clientSecretSecretName where the OIDC client secret is stored
      #  clientSecretSecretKey: ""
      # -- Generic OIDC configuration (required if auth.identityProvider.type is "generic-oidc")
      #genericOidc:
      #  clientId: ""
      #  audience: ""
      #  issuer: ""
      #  extraScopes: ""
      #  endpoints:
      #    authorizationServerEndpoint: ""
      #    jwksEndpoint: ""
      #  fields:
      #    subject: sub
      #    email: email
      #    name: name
      #    issuer: iss
  internalApi: {}
  # -- Security configuration
  security:
    secretName: ""
    cookieSecureSetting: true
    cookieSameSiteSetting: Strict
    jwtSignatureSecretKey: ""
  dataPlane:
    clientIdSecretKey: ""
    clientSecretSecretKey: ""
  api: {}
  aws:
    assumeRole:
      accessKeyId: ""
      accessKeyIdSecretKey: ""
      secretAcessKey: ""
      secretAccessKeySecretKey: ""
  # -- Environment variables
  env_vars: {}
  # -- Secrets
  secrets: {}
  # -- CloudSQLProxy configuration
  cloudSqlProxy:
    enabled: false
  # -- Database configuration
  database:
    type: internal # "external"
    # -- Secret name where database credentials are stored
    # -- The database host
    host: ""
    # -- The database port
    port:
    # -- The database name
    name: "airbyte"
    # -- The database user
    user: "airbyte"
    # -- The key within secretName where the user is stored
    #userSecretKey: "" # e.g. "database-user"
    # -- The database password
    password: "Nagomi12345"
    # -- The key within secretName where the password is stored
    #passwordSecretKey: "" # e.g. "database-password"
  dataplaneGroups: {}
  migrations:
    runAtStartup: true
    configDb:
      minimumFlywayMigrationVersion: ""
    jobsDb:
      minimumFlywayMigrationVersion: ""
  storage:
    secretName: ""
    # -- The storage backend type. Supports s3, gcs, azure, minio (default)
    type: gcs # default storage used
    # Minio
    # minio:
    #   accessKeyId: minio
    #   secretAccessKey: minio123
    bucket:
      log: airbyte-audit-and-logging
      auditLogging: airbyte-audit-and-logging
      state: airbyte-audit-and-logging
      workloadOutput: airbyte-audit-and-logging
      activityPayload: airbyte-audit-and-logging
    # S3
    # s3:
    #   region: "us-east-2"
    #   authenticationType: instanceProfile
    #s3:
    #  region: "" ## e.g. us-east-1
    #  authenticationType: credentials ## Use "credentials" or "instanceProfile"
    #  accessKeyId: ""
    #  secretAccessKey: ""
    # GCS
    gcs:
      projectId: panda-poc-451212
      credentialsJson: ewogICJ0eXBlIjogInNlcnZpY2VfYWNjb3VudCIsCiAgInByb2plY3RfaWQiOiAicGFuZGEtcG9jLTQ1MTIxMiIsCiAgInByaXZhdGVfa2V5X2lkIjogIjRiNTA5NDg4ODlhNjAzMDUyZDgzOGU2Y2E0MDc4N2ZkZWNhYjU2MWQiLAogICJwcml2YXRlX2tleS.....
      credentialsJsonPath: /secrets/gcs-log-creds
    # Azure
    #azure:
    #  # one of the following: connectionString, connectionStringSecretKey
    #  connectionString: <azure storage connection string>
    #  connectionStringSecretKey: <secret coordinate containing an existing connection-string secret>
  #awsSecretManager:
  #  region: <aws-region>
  #  authenticationType: credentials ## Use "credentials" or "instanceProfile"
  #  tags: ## Optional - You may add tags to new secrets created by Airbyte.
  #    - key: ## e.g. team
  #      value: ## e.g. deployments
  #    - key: business-unit
  #      value: engineering
  #  kms: ## Optional - ARN for KMS Decryption.
  googleSecretManager:
    projectId: panda-poc-451212
    region: "us-central1" ## e.g. us-central1
    credentialsSecretKey: gcp.json
  #azureKeyVault:
  #  tenantId: ""
  #  vaultUrl: ""
  #  clientId: ""
  #  clientIdSecretKey: ""
  #  clientSecret: ""
  #  clientSecretSecretKey: ""
  #  tags: ""
  #vault:
  #  address: ""
  #  prefix: ""
  #  authToken: ""
  #  authTokenSecretKey: ""
  logging:
    level: info
    httpAccessLogsEnabled: false
    log4jConfigFile: "/opt/airbyte/log4j2-json.xml"
  connectorRegistry:
    seedProvider: remote
  connectorRollout:
    expirationSeconds:
    waitBetweenRolloutSeconds:
    waitBetweenSyncResultsQueriesSeconds:
  workloads:
    secretName: ""
    namespace: jobs
    images:
      workloadInit: ""
      connectorSideCar: ""
      containerOrchestrator: ""
    containerOrchestrator:
      secretName: "airbyte-config-secrets"
      secretMountPath: "/secrets/gcs-log-creds/gcp.json"
      dataPlane:
        secretName: ""
        secretMountPath: ""
    queues:
      check: [CHECK_CONNECTION]
      discover: [DISCOVER_SCHEMA]
      sync: [SYNC]
    pubSub:
      enabled: false
      topicName: ""
    resources:
      useConnectorResourceDefaults: true
      mainContainer:
        cpu: {}
        memory: {}
      check:
        cpu: {}
        memory: {}
      discover:
        cpu: {}
        memory: {}
      replication:
        cpu: {}
        memory: {}
      sidecar:
        cpu: {}
        memory: {}
      fileTransfer:
        storage:
          request: 5G
          limit: 5G
  featureFlags:
    secretName: ""
    client: configfile
    configfile: {}
    #launchdarkly:
    #  key: ""
  java:
    opts: []
  metrics:
    enabled: false
    step: ""
    otlp:
      enabled: false
      # -- The open-telemetry-collector endpoint that metrics will be sent to
      collectorEndpoint: ""
    statsd:
      enabled: false
      flavor: ""
      host: ""
      port: ""
    otel:
      resourceAttributes: {}
      collector:
        endpoint: ""
      exporter:
        name: otlp
        protocol: grpc
        timeout: 30000
        metricExportInterval: 10000
  # Jobs resource requests and limits, see http://kubernetes.io/docs/user-guide/compute-resources/
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube.
  jobs:
    resources:
      ## Example:
      ## requests:
      ##   memory: 256Mi
      ##   cpu: 250m
      # -- Job resource requests
      requests: {}
      ## Example:
      ## limits:
      ##   cpu: 200m
      ##   memory: 1Gi
      # -- Job resource limits
      limits: {}
    kube:
      ## JOB_KUBE_ANNOTATIONS
      # pod annotations of the sync job and the default pod annotations fallback for other jobs
      # -- key/value annotations applied to kube jobs
      annotations: {}
      ## JOB_KUBE_LABELS
      ## pod labels of the sync job and the default pod labels fallback for other jobs
      # -- key/value labels applied to kube jobs
      labels: {}
      ## JOB_KUBE_NODE_SELECTORS
      ## pod node selector of the sync job and the default pod node selector fallback for other jobs
      # -- Node labels for pod assignment
      nodeSelector: {}
      ## JOB_KUBE_TOLERATIONS
      # -- Node tolerations for pod assignment
      # Any boolean values should be quoted to ensure the value is passed through as a string.
      ## JOB_KUBE_MAIN_CONTAINER_IMAGE_PULL_SECRET
      # -- image pull secret to use for job pod
      mainContainerImagePullSecret: ""
      ## JOB_KUBE_MAIN_CONTAINER_IMAGE_PULL_POLICY
      # -- image pull policy to use for job pod
      mainContainerImagePullPolicy: ""
      localVolume:
        enabled: false
      scheduling:
        check:
          nodeSelectors: {}
          runtimeClassName: ""
        discover:
          nodeSelectors: {}
          runtimeClassName: ""
        isolated:
          nodeSelectors: {}
          runtimeClassName: ""
        sourceDeclarativeManifest:
          nodeSelectors: {}
          runtimeClassName: ""
    errors:
      reportingStrategy: logging
      #sentry:
      #  dsn: ""
  topology:
    nodeSelectorLabel: airbyte/node-pool
    nodeSelectors:
      mainNodePool: main
      jobsNodePool: jobs
      quickJobsNodePool: quick-jobs
  temporal:
    secretName: ""
    cli:
      address: ""
      namespace: ""
      tlsCert: ""
      tlsCertSecretKey: TEMPORAL_CLOUD_CLIENT_CERT
      tlsKey: ""
      tlsKeySecretKey: TEMPORAL_CLOUD_CLIENT_KEY
    cloud:
      enabled: false
      host: ""
      namespace: ""
      clientCert: ""
      clientCertSecretKey: TEMPORAL_CLOUD_CLIENT_CERT
      clientKey: ""
      clientKeySecretKey: TEMPORAL_CLOUD_CLIENT_KEY
    sdk:
      rpc:
        timeout: 120s
        longPollTimeout: 140s
        queryTimeout: 20s
  stigg:
    secretName: ""
    apiKeySecretKey: ""
  datadog:
    enabled: false
    env: dev
    traceAgentPort: 8126
    statsd:
      port: 8125
    integrations:
      dbm:
        enabled: false
        propagationMode: full
      grpc:
        enabled: false
        clientEnabled: false
        serverEnabled: false
      googleHttpClient:
        enabled: false
      httpUrlConnection:
        enabled: false
      kotlinCoroutineExperimental:
        enabled: false
      urlConnection:
        enabled: false
      netty:
        enabled: false
      netty41:
        enabled: false
  customerio:
    secretName: ""
    apiKeySecretKey: ""
  tracking:
    enabled: true
    secretName: ""
    strategy: segment # one of: logging, segment
    segment:
      writeKey: ""
      writeKeySecretKey: ""
  micronaut:
    environments: ["k8s"]
  extraSelectorLabels: {}
  extraInitContainers: []
extraContainers: []
```
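A hedged way to sanity-check a values file this size before installing is a local dry-run render (the repo and chart name below are the public Airbyte Helm charts; adjust if you install through abctl):

```bash
helm repo add airbyte https://airbytehq.github.io/helm-charts && helm repo update
# Renders all templates locally; schema and indentation errors surface here.
helm template airbyte airbyte/airbyte --values values.yaml > /dev/null && echo "values render OK"
```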

    Rahul

    11/06/2025, 1:26 PM
@kapa.ai getting the error below; how can I fix it?
    postgres_logs_24_txt.txt

    Zawar Khan

    11/06/2025, 3:00 PM
@kapa.ai How do we pass the timeout variable in the request when we create a stream in our manifest.yml?

    Altan Sener

    11/06/2025, 3:25 PM
Will Protobuf support come to the MSSQL source connector too?

    Michael Sonnleitner

    11/06/2025, 5:01 PM
@kapa.ai We get this error message for a Shopify → BigQuery connection: "Unable to persist the job Output, check the document store credentials."

    Ilya Semenov

    11/06/2025, 5:02 PM
I have OSS Airbyte 0.60. Is there a way to extract all enabled streams of specific connections from Airbyte, along with their sync mode and their columns?
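A hedged sketch against the 0.60-era internal Config API (host, port, and workspace id are placeholders; verify the endpoint names for your version):

```bash
curl -s -X POST http://<airbyte-host>:8000/api/v1/connections/list \
  -H "Content-Type: application/json" \
  -d '{"workspaceId": "<workspace-uuid>"}' |
jq '.connections[] | {connection: .name,
      streams: [.syncCatalog.streams[] | select(.config.selected == true)
        | {stream: .stream.name,
           syncMode: .config.syncMode,
           destinationSyncMode: .config.destinationSyncMode,
           columns: (.stream.jsonSchema.properties // {} | keys)}]}'
```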

    Clément Lombard

    11/06/2025, 5:07 PM
@kapa.ai I have Airbyte OSS installed on an AWS EC2 machine behind an ELB load balancer. When I try to set up a Snowflake destination, I get the following error: errors.http.default. Why is that? The instance has outbound internet access.

    Kaique G. Viana

    11/06/2025, 5:44 PM
@kapa.ai explain this error to me: "Configuration check failed. Could not connect with provided configuration. Error: State code: 08S01; Message: Communications link failure. The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server."
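08S01 is a network-level failure before any authentication happens. A hedged first check from wherever the connector actually runs (host and port are placeholders):

```bash
# If this can't connect, look at security groups / firewalls / bind-address,
# not the Airbyte configuration.
nc -vz <mysql-host> 3306
```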

    Daniel Adler

    11/06/2025, 7:08 PM
@kapa.ai I found the following URL: https://airbyte.com/connectors/atlassian-marketplace-connector. But when I log in, I don't see the Atlassian Marketplace connector in my account. What's wrong?

    Alex Danilin

    11/07/2025, 9:43 AM
@kapa.ai I'm using Airbyte OSS and an Elasticsearch → BigQuery connection. After the sync, the BigQuery table has the following in _airbyte_meta:
```
{"changes":[{"change":"NULLED","field":"domain","reason":"DESTINATION_SERIALIZATION_ERROR"}],"sync_id":31050}
```
In the sync logs, I see the following:
```
2025-11-06 16:42:31 platform WARN c.n.s.JsonMetaSchema(newValidator):278 Unknown keyword airbyte_secret - you should define your own Meta Schema. If the keyword is irrelevant for validation, just use a NonValidationKeyword
2025-11-06 16:42:57 platform WARN main c.k.j.j.JsonSchemaGenerator$MyJsonFormatVisitorWrapper(expectAnyFormat):725 Not able to generate jsonSchema-info for type: [simple type, class com.fasterxml.jackson.databind.JsonNode] - probably using custom serializer which does not override acceptJsonFormatVisitor
2025-11-06 16:43:26 destination ERROR SLF4J(W): Class path contains multiple SLF4J providers.
2025-11-06 16:43:26 destination ERROR SLF4J(W): Found provider [org.apache.logging.slf4j.SLF4JServiceProvider@2de8284b]
2025-11-06 16:43:26 destination ERROR SLF4J(W): Found provider [org.slf4j.reload4j.Reload4jServiceProvider@396e2f39]
2025-11-06 16:43:26 destination ERROR SLF4J(W): See https://www.slf4j.org/codes.html#multiple_bindings for an explanation.
2025-11-06 16:43:26 destination ERROR SLF4J(I): Actual provider is of type [org.apache.logging.slf4j.SLF4JServiceProvider@2de8284b]
2025-11-06 16:43:26 source WARN c.n.s.JsonMetaSchema(newValidator):278 Unknown keyword airbyte_secret - you should define your own Meta Schema. If the keyword is irrelevant for validation, just use a NonValidationKeyword
2025-11-06 16:43:28 destination WARN main c.k.j.j.JsonSchemaGenerator$MyJsonFormatVisitorWrapper(expectAnyFormat):725 Not able to generate jsonSchema-info for type: [simple type, class com.fasterxml.jackson.databind.JsonNode] - probably using custom serializer which does not override acceptJsonFormatVisitor
```
How can I check the reason for the DESTINATION_SERIALIZATION_ERROR result in BigQuery?
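A hedged sketch for pulling the offending rows, assuming a Destinations-V2 typed table where _airbyte_meta is a JSON column (project, dataset, and table are placeholders); comparing these rows against _airbyte_data in the matching airbyte_internal raw table shows the original value that failed to serialize:

```bash
bq query --use_legacy_sql=false '
SELECT _airbyte_raw_id, _airbyte_meta
FROM `<project>.<dataset>.<table>`
WHERE JSON_VALUE(_airbyte_meta, "$.changes[0].reason") = "DESTINATION_SERIALIZATION_ERROR"
LIMIT 10'
```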

    Sachin Bedraman

    11/07/2025, 9:49 AM
@kapa.ai abctl install is not able to bring up the local Airbyte instance if I specify the --chart-version flag. What could be the reason?
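A hedged first check: confirm the chart version actually exists in the public Airbyte Helm repo (chart versions are not the same as app versions):

```bash
helm repo add airbyte https://airbytehq.github.io/helm-charts && helm repo update
helm search repo airbyte/airbyte --versions | head -n 20
```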

    Gaurav Jain

    11/07/2025, 10:21 AM
@kapa.ai The source is MongoDB and the destination is BigQuery. Both pass their connection checks, but creating the connection between them gives a 504 error.

    Steven Ayers

    11/07/2025, 10:58 AM
    @kapa.ai The Oracle DB source is turning values that are a single space into NULLs. How do I disable this?

    Agi Nyulfalvi

    11/07/2025, 11:01 AM
@kapa.ai when did the CDC metadata field names change from having a trailing underscore to no underscore? Example: ab_cdc_cursor_ → ab_cdc_cursor

    Rahul

    11/07/2025, 11:08 AM
@kapa.ai If I stop a historical sync partway through and start the sync again, will it start from the beginning or resume from where it ended?

    Steven Ayers

    11/07/2025, 11:09 AM
    @kapa.ai I want to build the airbyte project locally. What JVM should I use?
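A hedged way to answer this from the repo itself rather than guessing, since the Gradle toolchain configuration pins the JDK the build expects:

```bash
# From the repo root: lists the JDKs Gradle detects and which one the build resolves.
./gradlew -q javaToolchains
```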

    Steven Ayers

    11/07/2025, 11:49 AM
    @kapa.ai are there any issues reported with Snowflake Destination 4.0.7 turning values that are just a space or empty into NULLs?

    Slackbot

    11/07/2025, 1:43 PM
    This message was deleted.

    Slackbot

    11/07/2025, 3:02 PM
    This message was deleted.

    Arsénio Costa

    11/07/2025, 3:53 PM
@kapa.ai Are there any plans for a Zoho CRM destination (reverse ETL)?

    Kaique G. Viana

    11/07/2025, 6:37 PM
@kapa.ai
```
ERROR main i.a.c.AirbyteConnectorRunnable(run):38 Failed class io.airbyte.cdk.load.write.WriteOperation operation execution.
io.airbyte.cdk.TransientErrorException: Input was fully read, but some streams did not receive a terminal stream status message. If the destination did not encounter other errors, this likely indicates an error in the source or platform. Streams without a status message: [db]
	at io.airbyte.cdk.load.state.SyncManager.markInputConsumed(SyncManager.kt:116) ~[bulk-cdk-core-load-0.1.61.jar:?]
	at io.airbyte.cdk.load.state.PipelineEventBookkeepingRouter.close(PipelineEventBookkeepingRouter.kt:300) ~[bulk-cdk-core-load-0.1.61.jar:?]
```