# ask-ai

    Alex C

    06/26/2025, 7:04 AM
    @kapa.ai The Airbyte UI is experiencing a loading failure. While querying the endpoint list_by_user_id, I encountered a TypeError: Failed to fetch.

    Alex C

    06/26/2025, 7:20 AM
    @kapa.ai I got an error: Setting attempt to FAILED because the workflow for this connection was restarted, and existing job state was cleaned.

    Luke Alexander

    06/26/2025, 8:17 AM
    @kapa.ai After upgrading from 1.5.1 to 1.7.0, our Slack notifications no longer format the links back to Airbyte correctly. The URL itself is right, but it is now missing the https:// prefix, so Slack doesn't render it as a link. How can we fix this?
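For what it's worth, Slack only auto-links URLs that carry a scheme. Until the notification template is fixed, a guard like the following could be applied wherever the message is assembled. This is a hypothetical sketch, not Airbyte's actual notification code:

```python
def ensure_scheme(url: str, scheme: str = "https") -> str:
    """Prepend a scheme if the URL lacks one, so Slack renders it as a link."""
    if url.startswith(("http://", "https://")):
        return url
    return f"{scheme}://{url}"

# Example: a link as it appears in the broken notification.
print(ensure_scheme("airbyte.example.com/workspaces/123/connections/456"))
# → https://airbyte.example.com/workspaces/123/connections/456
```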

    Jacob

    06/26/2025, 9:41 AM
    What's the best way to compute Shopify refunds?

    Fabrizio Spini

    06/26/2025, 10:12 AM
    @kapa.ai Why do I have to grant RELOAD for the MySQL source, as described in https://docs.airbyte.com/integrations/sources/mysql#step-1-create-a-dedicated-read-only-mysql-user ?

    Cenk Batman

    06/26/2025, 10:31 AM
    @kapa.ai I am using Helm chart 1.7.0. Any source creation fails at the "test the source" step with An unexpected error occurred. Please report this if the issue persists. (HTTP 500). An example log from the source-s3-check pod:
    Copy code
    i.a.a.c.c.ClientConfigurationSupport(generateDefaultRetryPolicy$lambda$3):65 - Failed to call unknown. Last response: null
    java.io.IOException: HTTP error: 500 Internal Server Error
        at io.airbyte.api.client.ThrowOn5xxInterceptor.intercept(ThrowOn5xxInterceptor.kt:23)
        at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
        at okhttp3.internal.connection.RealCall.getResponseWithInterceptorChain$okhttp(RealCall.kt:201)
        at okhttp3.internal.connection.RealCall.execute(RealCall.kt:154)
        at dev.failsafe.okhttp.FailsafeCall.lambda$execute$0(FailsafeCall.java:117)
        at dev.failsafe.Functions.lambda$get$0(Functions.java:46)
        at dev.failsafe.internal.RetryPolicyExecutor.lambda$apply$0(RetryPolicyExecutor.java:74)
        at dev.failsafe.SyncExecutionImpl.executeSync(SyncExecutionImpl.java:187)
        at dev.failsafe.CallImpl.execute(CallImpl.java:33)
        at dev.failsafe.okhttp.FailsafeCall.execute(FailsafeCall.java:121)
        at io.airbyte.api.client.generated.WorkloadOutputApi.writeWorkloadOutputWithHttpInfo(WorkloadOutputApi.kt:376)
        at io.airbyte.api.client.generated.WorkloadOutputApi.writeWorkloadOutput(WorkloadOutputApi.kt:62)
        at io.airbyte.workers.workload.WorkloadOutputWriter.writeOutputThroughServer(WorkloadOutputWriter.kt:82)
        at io.airbyte.workers.workload.WorkloadOutputWriter.write(WorkloadOutputWriter.kt:65)
        at io.airbyte.connectorSidecar.ConnectorWatcher.handleException(ConnectorWatcher.kt:191)
        at io.airbyte.connectorSidecar.ConnectorWatcher.run(ConnectorWatcher.kt:83)
        at io.airbyte.connectorSidecar.$ConnectorWatcher$Definition.initialize$intercepted(Unknown Source)
        at io.airbyte.connectorSidecar.$ConnectorWatcher$Definition$InitializeInterceptor.invokeInternal(Unknown Source)
        at io.micronaut.context.AbstractExecutableMethod.invoke(AbstractExecutableMethod.java:166)
        at io.micronaut.aop.chain.MethodInterceptorChain.doIntercept(MethodInterceptorChain.java:285)
        at io.micronaut.aop.chain.MethodInterceptorChain.initialize(MethodInterceptorChain.java:208)
        at io.airbyte.connectorSidecar.$ConnectorWatcher$Definition.initialize(Unknown Source)
        at io.airbyte.connectorSidecar.$ConnectorWatcher$Definition.instantiate(Unknown Source)
        at io.micronaut.context.DefaultBeanContext.resolveByBeanFactory(DefaultBeanContext.java:2335)
        at io.micronaut.context.DefaultBeanContext.createRegistration(DefaultBeanContext.java:3146)
        at io.micronaut.context.SingletonScope.getOrCreate(SingletonScope.java:80)
        at io.micronaut.context.DefaultBeanContext.intializeEagerBean(DefaultBeanContext.java:3035)
        at io.micronaut.context.DefaultBeanContext.initializeEagerBean(DefaultBeanContext.java:2704)
        at io.micronaut.context.DefaultBeanContext.initializeContext(DefaultBeanContext.java:2032)
        at io.micronaut.context.DefaultApplicationContext.initializeContext(DefaultApplicationContext.java:323)
        at io.micronaut.context.DefaultBeanContext.configureAndStartContext(DefaultBeanContext.java:3342)
        at io.micronaut.context.DefaultBeanContext.start(DefaultBeanContext.java:353)
        at io.micronaut.context.DefaultApplicationContext.start(DefaultApplicationContext.java:225)
        at io.micronaut.runtime.Micronaut.start(Micronaut.java:75)
        at io.airbyte.connectorSidecar.ApplicationKt.main(Application.kt:18)
        at io.airbyte.connectorSidecar.ApplicationKt.main(Application.kt)

    Konathala Chaitanya

    06/26/2025, 11:59 AM
    @kapa.ai What is the "airbyte_internal"."events_common_raw__stream_requests" table, and was it created by Airbyte?

    Konathala Chaitanya

    06/26/2025, 12:00 PM
    "airbyte_internal"."events_common_raw__stream_requests" @kapa.ai which source or destination was this table created for?

    Sudeesh Rajeevan

    06/26/2025, 12:07 PM
    How do I set up a sync from S3 CSV files to Redshift? @kapa.ai

    Calum McCrossan

    06/26/2025, 12:42 PM
    @kapa.ai I use the salesforce connector, including one of the default streams 'Task'. Since the 14th May the 'WhatId' field hasn't been pulling through from Salesforce - some other fields were removed on this date but this still exists in our instance on the same object with the same API name. Any idea why this could be happening?

    Sudeesh Rajeevan

    06/26/2025, 12:49 PM
    java.util.concurrent.CompletionException: java.lang.RuntimeException: com.amazon.redshift.util.RedshiftException: ERROR: permission denied for schema @kapa.ai

    Rafael Felipe

    06/26/2025, 1:10 PM
    Microsoft SQL Server CDC: it seems Airbyte is not taking the latest record state to integrate into Snowflake, which is causing stale data at the target... is that a known error?

    Rafael Felipe

    06/26/2025, 1:12 PM
    With Microsoft SQL Server using the CDC method, Airbyte is not picking up the latest record change in the source when integrating into the Snowflake target. Is that a known issue?

    Xavier Van Ausloos

    06/26/2025, 1:29 PM
    Hi team, I cannot use the API to get all workspaces.
    <http://localhost:8081/api/v1/workspaces/list>
    I get a 404 Not Found error. The API works well for getting all connections (with basic auth):
    <http://localhost:8081/api/v1/connections/list>
    I am using Airbyte 1.7.1 deployed via Helm. @kapa.ai any idea?

    Kuntal Basu

    06/26/2025, 1:52 PM
    How do I configure and fetch metrics for an open-source deployment?

    Annika Maybin

    06/26/2025, 1:58 PM
    @kapa.ai I updated Airbyte to 1.7.0 and my MySQL and Redshift connectors to the newest versions, and now my syncs take three times as long.

    Max

    06/26/2025, 3:55 PM
    @kapa.ai is there a way to set up an incremental stream using the previous successful sync's runtime for the filter query param, rather than a cursor field from the API response? I've got an API response from Brevo's
    /contacts/{contact_id}/campaignStats
    that looks like this:
    Copy code
    [
      {
        "messagesSent": [
          {
            "campaignId": 1234,
            "eventTime": "2025-06-24T17:31:26.510-04:00"
          },
          {
            "campaignId": 337654,
            "eventTime": "2025-06-24T17:31:26.069-04:00"
          },
    ...
    Each event type has its own key, meaning any event type could hold the most recent event, so I don't believe I can use the cursor path selector: I wouldn't know which JSON key to reference to find the latest
    eventTime
    . Therefore, I'm looking to use the last successful sync's runtime instead.
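As a side note, the "which key holds the newest event" part is mechanical to solve client-side. A sketch assuming the response shape shown above (the second event-type key, opened, is hypothetical):

```python
from datetime import datetime

def latest_event_time(contact_stats: list[dict]) -> datetime:
    """Scan every event-type list in every campaign-stats entry and
    return the most recent eventTime, regardless of which key holds it."""
    times = [
        datetime.fromisoformat(event["eventTime"])
        for entry in contact_stats
        for events in entry.values()  # each key is one event type
        if isinstance(events, list)
        for event in events
        if isinstance(event, dict) and "eventTime" in event
    ]
    return max(times)

stats = [{
    "messagesSent": [
        {"campaignId": 1234, "eventTime": "2025-06-24T17:31:26.510-04:00"},
        {"campaignId": 337654, "eventTime": "2025-06-24T17:31:26.069-04:00"},
    ],
    "opened": [  # hypothetical second event type
        {"campaignId": 1234, "eventTime": "2025-06-25T09:00:00.000-04:00"},
    ],
}]
print(latest_event_time(stats).isoformat())  # the "opened" event is newest
```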

    Aviad Deri

    06/26/2025, 4:38 PM
    @kapa.ai what does this error mean?
    Copy code
    2025-06-26 19:34:55 info APPLY Stage: BUILD — (workloadId=22f6c74f-5699-40ff-833c-4a879ea40133_e1c88ca4-c737-4413-af3b-6e65816f5741_0_check)
    2025-06-26 19:34:55 info APPLY Stage: CLAIM — (workloadId=22f6c74f-5699-40ff-833c-4a879ea40133_e1c88ca4-c737-4413-af3b-6e65816f5741_0_check)
    2025-06-26 19:34:55 info Claimed: true for 22f6c74f-5699-40ff-833c-4a879ea40133_e1c88ca4-c737-4413-af3b-6e65816f5741_0_check via API for 95b84c9d-9e6a-457b-9991-490ed9140f77
    2025-06-26 19:34:55 info APPLY Stage: LOAD_SHED — (workloadId=22f6c74f-5699-40ff-833c-4a879ea40133_e1c88ca4-c737-4413-af3b-6e65816f5741_0_check)
    2025-06-26 19:34:55 info APPLY Stage: CHECK_STATUS — (workloadId=22f6c74f-5699-40ff-833c-4a879ea40133_e1c88ca4-c737-4413-af3b-6e65816f5741_0_check)
    2025-06-26 19:34:55 info No pod found running for workload 22f6c74f-5699-40ff-833c-4a879ea40133_e1c88ca4-c737-4413-af3b-6e65816f5741_0_check
    2025-06-26 19:34:55 info APPLY Stage: MUTEX — (workloadId=22f6c74f-5699-40ff-833c-4a879ea40133_e1c88ca4-c737-4413-af3b-6e65816f5741_0_check)
    2025-06-26 19:34:55 info No mutex key specified for workload: 22f6c74f-5699-40ff-833c-4a879ea40133_e1c88ca4-c737-4413-af3b-6e65816f5741_0_check. Continuing...
    2025-06-26 19:34:55 info APPLY Stage: LAUNCH — (workloadId=22f6c74f-5699-40ff-833c-4a879ea40133_e1c88ca4-c737-4413-af3b-6e65816f5741_0_check)
    2025-06-26 19:34:55 info [initContainer] image: airbyte/workload-init-container:1.6.2 resources: ResourceRequirements(claims=[], limits={}, requests={}, additionalProperties={})
    2025-06-26 19:35:02 info Attempting to update workload: 22f6c74f-5699-40ff-833c-4a879ea40133_e1c88ca4-c737-4413-af3b-6e65816f5741_0_check to LAUNCHED.
    2025-06-26 19:35:02 info Pipeline completed for workload: 22f6c74f-5699-40ff-833c-4a879ea40133_e1c88ca4-c737-4413-af3b-6e65816f5741_0_check.
    2025-06-26 19:35:04 info
    2025-06-26 19:35:04 info ----- START CHECK -----
    2025-06-26 19:35:04 info
    2025-06-26 19:36:04 info Connector exited, processing output
    2025-06-26 19:36:04 info Output file jobOutput.json found
    2025-06-26 19:36:04 info Connector exited with exit code 0
    2025-06-26 19:36:04 info Reading messages from protocol version 0.2.0
    2025-06-26 19:36:04 info INFO main i.a.i.d.b.BigQueryDestinationKt(main):565 Starting Destination : class io.airbyte.integrations.destination.bigquery.BigQueryDestination
    2025-06-26 19:36:04 info INFO main i.a.c.i.b.IntegrationCliParser$Companion(parseOptions):144 integration args: {check=null, config=/config/connectionConfiguration.json}
    2025-06-26 19:36:04 info INFO main i.a.c.i.b.IntegrationRunner(runInternal):130 Running integration: io.airbyte.integrations.destination.bigquery.BigQueryDestination

    Nivedita Baliga

    06/26/2025, 5:21 PM
    hey @kapa.ai. In a Postgres source connector, is there a way to provide multiple hostnames so the connector goes through the list and connects to the first available host?
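The Airbyte Postgres source config takes a single host, so there is no built-in failover list as far as I know. A client-side sketch of the "first reachable host" idea (the host names are hypothetical, and the probe is injectable so the selection logic can be exercised without a database):

```python
import socket
from typing import Callable, Iterable, Optional

def tcp_probe(host: str, port: int = 5432, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def first_available(hosts: Iterable[str],
                    probe: Callable[[str], bool] = tcp_probe) -> Optional[str]:
    """Walk the host list in order and return the first one the probe accepts."""
    for host in hosts:
        if probe(host):
            return host
    return None

# With a stub probe: only the replica answers.
up = {"pg-replica.internal"}
print(first_available(["pg-primary.internal", "pg-replica.internal"],
                      probe=lambda h: h in up))  # → pg-replica.internal
```

Note that libpq-style clients can also accept several hosts in one connection string (`host=h1,h2`), but that is a client feature, not something the Airbyte connector UI exposes.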

    Ryan Schwartz

    06/26/2025, 8:05 PM
    @kapa.ai what are best practices for backing up airbyte with an external database server?

    Travis Liao

    06/27/2025, 2:27 AM
    Does the OSS version support audit logs? @kapa.ai

    Disha

    06/27/2025, 3:14 AM
    Which GitHub stream would give the number of lines of code committed?

    Usman Pasha

    06/27/2025, 6:01 AM
    I am getting the following error: "Saved offset is not valid. Please reset the connection, and then increase oplog retention and/or increase sync frequency to prevent this from happening in the future. See https://docs.airbyte.com/integrations/sources/mongodb-v2#mongodb-oplog-and-change-streams for more details". Even though I have set the oplog retention to 72 hours and the sync frequency to 5 minutes, I still get this error.

    Kailash Bisht

    06/27/2025, 8:29 AM
    How do I downgrade abctl?

    Neeraj N

    06/27/2025, 9:50 AM
    Copy code
    curl 'http://localhost:8000/api/v1/users/get' \
      -H 'Accept: */*' \
      -H 'Accept-Language: en-GB,en-US;q=0.9,en;q=0.8' \
      -H 'Connection: keep-alive' \
      -b 'ajs_anonymous_id=acb5734f-ac1b-4c44-9a39-5e286c755218' \
      -H 'Origin: http://34.131.173.8:8000' \
      -H 'Referer: http://34.131.173.8:8000/login' \
      -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/137.0.0.0 Safari/537.36' \
      -H 'content-type: application/json' \
      -H 'x-airbyte-analytic-source: webapp' \
      --data-raw '{"userId":"00000000-0000-0000-0000-000000000000"}' \
      --insecure

    Daniel Sinewe

    06/27/2025, 9:56 AM
    I have a multi-tenant application and want my end users to connect their applications with my custom OAuth. Can I have all clients with multiple sources in one workspace, or should I have one workspace for each client?

    Lance Nehring

    06/27/2025, 11:24 AM
    @kapa.ai For the Helm values, how can I set environment variables to be passed to the connector pods that are launched for checks and replication jobs?
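For reference, this is the kind of values.yaml fragment involved. The key names below (workload-launcher.extraEnv, global.env_vars) are assumptions to illustrate the shape only; verify them against the values.yaml of your chart version before use:

```yaml
# Sketch of a Helm values fragment -- key names are assumptions,
# check your chart version's values.yaml.
workload-launcher:
  extraEnv:
    - name: MY_CUSTOM_VAR      # hypothetical variable
      value: "some-value"
global:
  env_vars:                    # rendered into the shared airbyte-env ConfigMap
    MY_SHARED_VAR: "shared-value"
```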

    Pablo Martin Calvo

    06/27/2025, 12:26 PM
    Failure in source: Saved offset is before replication slot's confirmed lsn. Please reset the connection, and then increase WAL retention and/or increase sync frequency to prevent this from happening in the future. See https://docs.airbyte.com/integrations/sources/postgres/postgres-troubleshooting#under-cdc-incremental-mode-there-are-still-full-refresh-syncs for more details. I'd like to know what the ideal configuration for
    wal_keep_size
    and
    max_slot_wal_keep_size
    is, and with what sync frequency.
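There is no single ideal value: retention has to cover the WAL volume generated across the longest expected gap between syncs. As an illustrative sketch only (the 4GB/8GB figures are placeholders, not an Airbyte recommendation; both settings require PostgreSQL 13+):

```sql
-- Illustrative only: retain enough WAL to comfortably cover your
-- longest sync gap, with headroom for bursts of change volume.
ALTER SYSTEM SET wal_keep_size = '4GB';          -- floor of WAL kept on disk
ALTER SYSTEM SET max_slot_wal_keep_size = '8GB'; -- cap per replication slot
SELECT pg_reload_conf();

-- Check how far each slot lags and whether it is at risk of invalidation:
SELECT slot_name, wal_status, safe_wal_size
FROM pg_replication_slots;
```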

    Ashish Pandita

    06/27/2025, 1:30 PM
    why does Airbyte change the order of columns in some cases?