# ask-ai
  • d

    Dana Williams

    12/02/2025, 2:31 PM
    @kapa.ai 5 connectors failed to sync with the error: 'Warning from source: Check took too long. Check exceeded the timeout.'
  • c

    Christopher Vreugdenhil

    12/02/2025, 4:40 PM
    @kapa.ai, when using Airbyte OSS, is the OAuth source flow available? Or is this only for Cloud / Enterprise?
  • l

    Louis Demet

    12/02/2025, 4:58 PM
    Is it possible with the Gorgias connector to retrieve the custom field values for each ticket? We can retrieve the tickets and the list of available custom fields, but I can't find the association between the two.
  • a

    Albin Henneberger

    12/02/2025, 6:01 PM
    @kapa.ai - I am getting a gateway 504 error on my self-hosted instance, between Airbyte and Snowflake, that began last week. I just forced the update to 4.0.30 and have no idea why this issue is occurring. An unknown error occurred. (HTTP 504)
  • d

    Dan Cook

    12/02/2025, 7:31 PM
    We are Airbyte Cloud customers and have built custom connectors using the Connector Builder UI. Question for @kapa.ai: can we migrate those custom connectors to a self-hosted version of Airbyte?
  • s

    Sam Woodbeck

    12/02/2025, 7:39 PM
    @kapa.ai I'm using the Marketplace Freshservice connector https://docs.airbyte.com/integrations/sources/freshservice . I'd like to report on a ticket's acknowledged datetime and resolved datetime. I don't see any fields like this in the `tickets` stream. In the Freshservice documentation (https://api.freshservice.com/v2/#view_all_ticket) for the endpoint `/api/v2/tickets`, they mention that using the API parameter `include=stats` will embed additional details in the response, including `resolved_at` and `first_responded_at` datetime fields. Is there a way to configure the Freshservice connector to pass this API parameter? Or would I need to make a feature request to the connector owner or something?
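    For context, a minimal sketch of how such a fixed query parameter could be passed in a custom or forked low-code stream definition, assuming Airbyte's declarative connector YAML format; the Freshservice domain and stream wiring below are illustrative placeholders, and this does not imply the packaged Marketplace connector exposes this option today:

    ```yaml
    # Hypothetical declarative stream fragment: inject a constant query parameter
    # (include=stats) on the Freshservice tickets endpoint. Component names follow
    # Airbyte's declarative connector schema; <your-domain> is a placeholder.
    streams:
      - type: DeclarativeStream
        name: tickets
        retriever:
          type: SimpleRetriever
          requester:
            type: HttpRequester
            url_base: https://<your-domain>.freshservice.com/api/v2
            path: /tickets
            http_method: GET
            request_parameters:
              include: stats   # asks Freshservice to embed stats such as resolved_at
          record_selector:
            type: RecordSelector
            extractor:
              type: DpathExtractor
              field_path: ["tickets"]
    ```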
  • j

    Jared Parco

    12/02/2025, 7:43 PM
    @kapa.ai We are running into issues where a MySQL table isn't capturing all of the CDC changes into Snowflake. We occasionally have to do a full refresh of certain streams. Why would this be occurring, and where should we look to resolve this? We have these optional parameters on the MySQL source: useCursorFetch=true&defaultFetchSize=1000, with Checkpoint Target Time Interval = 300. We are using the MySQL v3.50.9 source and the Snowflake v4.0.4 destination.
  • c

    Carmela Beiro

    12/02/2025, 8:02 PM
    @kapa.ai what happens with _ab_cdc_lsn when there is a refresh? Is it set to null?
  • s

    soma chandra sekhar attaluri

    12/02/2025, 8:08 PM
    @kapa.ai Is there any service named 'airbyte webapp svc' in Airbyte 2.0 installed using Helm charts v2?
  • s

    soma chandra sekhar attaluri

    12/02/2025, 10:00 PM
    @kapa.ai How do I disable the secure cookie in an abctl installation?
  • j

    Jeremy Plummer

    12/02/2025, 10:45 PM
    @kapa.ai Is there a way to copy the Terraform configuration of a connection set up in Airbyte?
  • y

    Yuki Kakegawa

    12/02/2025, 11:15 PM
    Is there a way to "Refresh your data" only one of the tables configured in the sync?
  • m

    Mauricio Pérez

    12/02/2025, 11:56 PM
    @kapa.ai I'm having trouble setting up the Amazon Seller Partner connector; I'm getting this error: 'Encountered an error while checking availability of stream Orders. Error: 400 Client Error: Bad Request for url: https://api.amazon.com/auth/o2/token'. How can I solve this?
  • s

    Shawna Zhao

    12/03/2025, 1:04 AM
    What is the cost of self-hosted Enterprise?
  • j

    Joshua Garza

    12/03/2025, 4:54 AM
    #C01AHCD885S I see these errors on a simple full refresh overwrite connection:
    2025-12-03 045054,718 [io-executor-thread-4] ERROR i.a.c.s.e.h.UncaughtExceptionHandler(handle):33 - Uncaught exception
    java.lang.IllegalStateException: Duplicate key 54 (attempted merging values
      {StreamNameAndNamespace(name=core_facility_ratings, namespace=COPAY)=full_refresh, StreamNameAndNamespace(name=stage_plantype, namespace=COPAY)=full_refresh, StreamNameAndNamespace(name=core_mean_prices_combined, namespace=COPAY)=full_refresh, StreamNameAndNamespace(name=core_facility_top_problems, namespace=COPAY)=full_refresh, StreamNameAndNamespace(name=stage_facilities, namespace=COPAY)=full_refresh, StreamNameAndNamespace(name=stage_facility_types, namespace=COPAY)=full_refresh}
      and
      {StreamNameAndNamespace(name=core_facility_ratings, namespace=COPAY)=full_refresh, StreamNameAndNamespace(name=stage_plantype, namespace=COPAY)=full_refresh, StreamNameAndNamespace(name=core_mean_prices_combined, namespace=COPAY)=full_refresh, StreamNameAndNamespace(name=core_facility_top_problems, namespace=COPAY)=full_refresh, StreamNameAndNamespace(name=stage_facilities, namespace=COPAY)=full_refresh, StreamNameAndNamespace(name=stage_facility_types, namespace=COPAY)=full_refresh})
      at java.base/java.util.stream.Collectors.duplicateKeyException(Collectors.java:135)
      at java.base/java.util.stream.Collectors.lambda$uniqKeysMapAccumulator$1(Collectors.java:182)
      at java.base/java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169)
      at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1708)
      at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
      at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
      at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921)
      at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
      at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682)
      at io.airbyte.commons.server.handlers.helpers.StatsAggregationHelper.hydrateWithStats(StatsAggregationHelper.kt:181)
      at io.airbyte.commons.server.handlers.helpers.StatsAggregationHelper.getJobIdToJobWithAttemptsReadMap(StatsAggregationHelper.kt:306)
      at io.airbyte.commons.server.handlers.ConnectionsHandler.getConnectionLastJobPerStream(ConnectionsHandler.kt:2188)
      at io.airbyte.server.apis.controllers.ConnectionApiController.getConnectionLastJobPerStream$lambda$18(ConnectionApiController.kt:281)
      at io.airbyte.server.apis.ApiHelper.execute(ApiHelper.kt:32)
      at io.airbyte.server.apis.controllers.ConnectionApiController.getConnectionLastJobPerStream(ConnectionApiController.kt:280)
      at io.airbyte.server.apis.controllers.$ConnectionApiController$Definition$Exec.dispatch(Unknown Source)
      at io.micronaut.context.AbstractExecutableMethodsDefinition$DispatchedExecutableMethod.invokeUnsafe(AbstractExecutableMethodsDefinition.java:461)
      at io.micronaut.context.DefaultBeanContext$BeanContextUnsafeExecutionHandle.invokeUnsafe(DefaultBeanContext.java:4438)
      at io.micronaut.web.router.AbstractRouteMatch.execute(AbstractRouteMatch.java:272)
      at io.micronaut.web.router.DefaultUriRouteMatch.execute(DefaultUriRouteMatch.java:38)
      at io.micronaut.http.server.RouteExecutor.executeRouteAndConvertBody(RouteExecutor.java:465)
      at io.micronaut.http.server.RouteExecutor.lambda$callRoute$5(RouteExecutor.java:442)
      at io.micronaut.core.execution.ExecutionFlow.lambda$async$0(ExecutionFlow.java:92)
      at io.micronaut.core.propagation.PropagatedContext.lambda$wrap$3(PropagatedContext.java:232)
      at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
      at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
      at java.base/java.lang.Thread.run(Thread.java:1583)
  • y

    Yosua Fernandus Simanjuntak

    12/03/2025, 9:23 AM
    @kapa.ai I used a connection from API Builder to BigQuery; why does the BigQuery table store the data as JSON instead of in columnar form?
  • v

    Vitor P. Miguel

    12/03/2025, 9:34 AM
    @kapa.ai I am building a custom connector using the UI. My source app only returns 25 objects per page, but I need it to return everything. By default the page = 1; to move to the next one, I need to set the query parameter to 2, and so on, until no more data (in the default format) is returned. How can I set this up?
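    A minimal sketch of how this page-increment style of paging is typically expressed in Airbyte's declarative (Connector Builder) YAML, assuming the parameter is named page and the page size is 25 as described above; in the Builder UI this corresponds to a "Page Increment" pagination strategy injected as a query parameter:

    ```yaml
    # Hypothetical paginator fragment for a source returning 25 records per page
    # and expecting an incrementing ?page= query parameter. Component names follow
    # Airbyte's declarative connector schema.
    retriever:
      type: SimpleRetriever
      paginator:
        type: DefaultPaginator
        pagination_strategy:
          type: PageIncrement
          start_from_page: 1
          page_size: 25        # paging stops once a response returns fewer than 25 records
        page_token_option:
          type: RequestOption
          inject_into: request_parameter
          field_name: page     # sent as ?page=2, ?page=3, ...
    ```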
  • s

    Syed Hamza Raza Kazmi

    12/03/2025, 9:51 AM
    @kapa.ai Has the issue with Postgres-to-Postgres CDC replication been resolved?
  • h

    Horia

    12/03/2025, 10:04 AM
    @kapa.ai A sync I was performing from MongoDB to Redshift has been working for a while now. However, I suddenly received the error below: 'Failure in destination: Failed to drop table without the CASCADE option. Consider changing the drop_cascade configuration parameter.' I tried performing a full refresh but this did not work. What is the issue here, and why did this suddenly start happening?
  • k

    kapa.ai

    12/03/2025, 10:04 AM
    The error means the Redshift destination is trying to `DROP TABLE`, but Redshift is refusing because that table has dependent objects (for example, views or foreign keys) and the connector is not allowed to use `DROP ... CASCADE` based on your current config. The new behavior is that, instead of silently failing, the destination now surfaces this as:

    > Failed to drop table without the CASCADE option. Consider changing the drop_cascade configuration parameter. [[Redshift ref](https://docs.airbyte.com/integrations/destinations/redshift#reference)]

    Why it started “suddenly”:
    - Recent Redshift destination versions added and changed how they respect the `drop_cascade` option, including for raw tables and resets. [[Redshift changelog](https://docs.airbyte.com/integrations/destinations/redshift#changelog)]
    - If your Airbyte platform or Redshift connector auto‑upgraded (as has happened to other users), connector logic around table drops may have changed even though your pipeline looked the same. [[Mongo perf forum](https://discuss.airbyte.io/t/1011); [Sudden failures forum](https://discuss.airbyte.io/t/2656)]

    Because the underlying Redshift table now has dependencies (or different DDL), a simple `DROP TABLE` no longer works, and a full refresh alone doesn’t fix that.

    What you can do:
    1. Enable “Drop tables with CASCADE” on the Redshift destination. In the Redshift destination config, set `drop_cascade` to `true`. [[Redshift ref](https://docs.airbyte.com/integrations/destinations/redshift#reference)]
       - This lets the connector run `DROP TABLE ... CASCADE` so dependent objects will be dropped and the sync can recreate tables cleanly.
       - Be aware this can cause permanent loss of dependent objects (views, FKs, etc.), similar to the caveats documented for Postgres. [[Postgres dependent objects](https://docs.airbyte.com/integrations/destinations/postgres#creating-dependent-objects)]
    2. Alternatively, remove or adjust dependencies yourself. Manually drop or adjust any views / constraints / sort keys that block drops. For example, a similar issue was fixed by removing a sort key on `_airbyte_emitted_at` before the destination could alter/drop it. [[Redshift sortkey issue](https://discuss.airbyte.io/t/3219)]
    3. If you rely on dependent objects (views, etc.), use a tool like dbt to recreate them after each sync if you do enable `drop_cascade`, as recommended for other destinations. [[Postgres dependent objects](https://docs.airbyte.com/integrations/destinations/postgres#creating-dependent-objects)]

    If, after enabling `drop_cascade` and rerunning the sync, the problem persists, the knowledge sources don’t provide more specific Redshift DDL patterns that could be blocking the drop; in that case you’d need to inspect the exact Redshift table DDL and dependencies directly in your cluster.
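    For illustration, an excerpt of what a Redshift destination configuration with that option enabled might look like, rendered here as YAML for readability (Airbyte stores destination configs as JSON). Only `drop_cascade` comes from the docs referenced above; the connection fields are generic placeholders rather than the connector's full spec:

    ```yaml
    # Illustrative excerpt of a Redshift destination configuration.
    # Placeholder connection values; check the connector's current spec for the exact shape.
    host: my-cluster.abc123.eu-west-1.redshift.amazonaws.com
    port: 5439
    database: analytics
    schema: airbyte
    username: airbyte_user
    drop_cascade: true   # allow DROP TABLE ... CASCADE on refreshes and resets
    ```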
  • t

    Tom

    12/03/2025, 10:11 AM
    @kapa.ai I’ve upgraded airbyte community version from 1.2 to 2.0.1 and spawning pods stopped working. Once I trigger test connector it starts, no job is created in kubernetes and then it returns 504. there is no error logs during that time even at debug verbose level
  • k

    Konathala Chaitanya

    12/03/2025, 10:46 AM
    @kapa.ai
    Sync failed: 0 bytes | no records loaded | 1h 40m 19s
    Warning from replication: Airbyte could not start the sync process or track the progress of the sync.
    message='Airbyte could not track the sync progress. Sync process exited without reporting status.', type='io.airbyte.workers.exception.WorkloadMonitorException', nonRetryable=false
    What is the error?
  • н

    Назар Тутин

    12/03/2025, 1:00 PM
    @kapa.ai We have Airbyte running self-hosted in k8s. We want to update to a new version using Helm chart 2.0. How can we migrate all of the data from the older version to the new one? Is there any import/export process? Version change: 1.4.0 -> 1.8.5
  • j

    J Bob

    12/03/2025, 1:29 PM
    @kapa.ai I wish to use Terraform to create a custom source connector; I have an exported Builder YAML file.
  • d

    Daniel de Cloet

    12/03/2025, 2:22 PM
    @kapa.ai On Airbyte OSS (2.0.1) deployed through the official Helm chart, I am trying to update a custom connector version (Docker container) through the API. Every time I send a PUT request to `/api/public/v1/workspaces/<workspace ID>/definitions/sources/<source UUID>`, the airbyte-server becomes unresponsive and is killed due to failing liveness checks. Is this a known problem?
  • m

    Martin Brummerstedt

    12/03/2025, 2:58 PM
    @kapa.ai I am setting up the Jira source. If I set up the API token without scopes, the connector works as expected, but if I set it up with full read scopes it fails. What scopes does the Jira source connector need to work?
  • k

    Kevin Robert

    12/03/2025, 3:48 PM
    Hello @kapa.ai, trying to sync Aircall data I am facing this error: 'GET' request to 'https://api.aircall.io/v1/contacts?page=201&per_page=50' failed with status code '400' and error message 'BAD_REQUEST'
  • k

    kanchalkumar karale

    12/03/2025, 4:02 PM
    Hi @kapa.ai, can you help me set up Airbyte locally so I can test my code changes?
  • k

    Kuntal Basu

    12/03/2025, 4:32 PM
    What chart version should I use to upgrade to Airbyte 2.0?
  • j

    Jared Parco

    12/03/2025, 4:49 PM
    @kapa.ai How do we prevent Airbyte from deleting fields when MySQL CDC logs that a field has been removed?