Dana Williams
12/02/2025, 2:31 PM

Christopher Vreugdenhil
12/02/2025, 4:40 PM

Louis Demet
12/02/2025, 4:58 PM

Albin Henneberger
12/02/2025, 6:01 PM

Dan Cook
12/02/2025, 7:31 PM

Sam Woodbeck
12/02/2025, 7:39 PM
tickets stream. In the Freshservice documentation (https://api.freshservice.com/v2/#view_all_ticket) for the endpoint /api/v2/tickets, they mention that using the API parameter include=stats will embed additional details in the response, including the resolved_at and first_responded_at datetime fields. Is there a way to configure the Freshservice connector to pass this API parameter? Or would I need to make a feature request to the connector owner or something?
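For context, a minimal sketch (not part of the connector) of the call the question describes: hitting /api/v2/tickets directly with include=stats. The domain and API key are placeholders, and Freshservice's usual API-key basic auth is assumed.

```python
# Sketch only: call the Freshservice tickets endpoint with include=stats to
# check that resolved_at / first_responded_at come back from the API itself.
# "your-domain" and API_KEY are placeholders.
import requests

API_KEY = "your-freshservice-api-key"  # placeholder
BASE_URL = "https://your-domain.freshservice.com/api/v2/tickets"

resp = requests.get(
    BASE_URL,
    params={"include": "stats"},  # embeds the stats details per the Freshservice docs
    auth=(API_KEY, "X"),          # assumed: API key as basic-auth user, dummy password
    timeout=30,
)
resp.raise_for_status()

for ticket in resp.json().get("tickets", []):
    stats = ticket.get("stats", {})
    print(ticket.get("id"), stats.get("resolved_at"), stats.get("first_responded_at"))
```

Seeing the fields come back here would at least confirm the data is available on the API side, independent of what the connector requests.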
Jared Parco
12/02/2025, 7:43 PM

Carmela Beiro
12/02/2025, 8:02 PM

soma chandra sekhar attaluri
12/02/2025, 8:08 PM

soma chandra sekhar attaluri
12/02/2025, 10:00 PM

Jeremy Plummer
12/02/2025, 10:45 PM

Yuki Kakegawa
12/02/2025, 11:15 PM

Mauricio Pérez
12/02/2025, 11:56 PM

Shawna Zhao
12/03/2025, 1:04 AM

Joshua Garza
12/03/2025, 4:54 AM

Yosua Fernandus Simanjuntak
12/03/2025, 9:23 AM

Vitor P. Miguel
12/03/2025, 9:34 AM

Syed Hamza Raza Kazmi
12/03/2025, 9:51 AM

Horia
12/03/2025, 10:04 AM

kapa.ai
12/03/2025, 10:04 AM
DROP TABLE but Redshift is refusing because that table has dependent objects (for example, views or foreign keys) and the connector is not allowed to use DROP ... CASCADE based on your current config. The new behavior is that, instead of silently failing, the destination now surfaces this as:

> Failed to drop table without the CASCADE option. Consider changing the drop_cascade configuration parameter. [[Redshift ref](https://docs.airbyte.com/integrations/destinations/redshift#reference)]

Why it started “suddenly”:
- Recent Redshift destination versions added and changed how they respect the drop_cascade option, including for raw tables and resets. [[Redshift changelog](https://docs.airbyte.com/integrations/destinations/redshift#changelog)]
- If your Airbyte platform or Redshift connector auto-upgraded (as has happened to other users), connector logic around table drops may have changed even though your pipeline looked the same. [[Mongo perf forum](https://discuss.airbyte.io/t/1011); [Sudden failures forum](https://discuss.airbyte.io/t/2656)]

Because the underlying Redshift table now has dependencies (or different DDL), a simple DROP TABLE no longer works, and a full refresh alone doesn’t fix that.

What you can do:
1. Enable “Drop tables with CASCADE” on the Redshift destination
   In the Redshift destination config, set drop_cascade to true. [[Redshift ref](https://docs.airbyte.com/integrations/destinations/redshift#reference)]
   - This lets the connector run DROP TABLE ... CASCADE so dependent objects will be dropped and the sync can recreate tables cleanly.
   - Be aware this can cause permanent loss of dependent objects (views, FKs, etc.), similar to the caveats documented for Postgres. [[Postgres dependent objects](https://docs.airbyte.com/integrations/destinations/postgres#creating-dependent-objects)]
2. Alternatively, remove or adjust dependencies yourself
   - Manually drop or adjust any views / constraints / sort keys that block drops (see the dependency-inspection sketch after this message). For example, a similar issue was fixed by removing a sort key on _airbyte_emitted_at before the destination could alter/drop it. [[Redshift sortkey issue](https://discuss.airbyte.io/t/3219)]
3. If you rely on dependent objects (views, etc.)
   - Use a tool like dbt to recreate them after each sync if you do enable drop_cascade, as recommended for other destinations. [[Postgres dependent objects](https://docs.airbyte.com/integrations/destinations/postgres#creating-dependent-objects)]

If, after enabling drop_cascade and rerunning the sync, the problem persists, the knowledge sources don’t provide more specific Redshift DDL patterns that could be blocking the drop; in that case you’d need to inspect the exact Redshift table DDL and dependencies directly in your cluster.
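As a companion to option 2 above, a hedged sketch for inspecting what blocks a plain DROP TABLE. The catalog query is the common Postgres-style pg_depend/pg_rewrite join for view dependencies, which generally also works on Redshift; the connection details, schema, and table names are placeholders.

```python
# Sketch only: list views that depend on a given table, to see what would
# block a plain DROP TABLE. Connection details and names are placeholders.
import psycopg2  # assumes a Postgres/Redshift-compatible driver is installed

DEPENDENT_VIEWS_SQL = """
SELECT DISTINCT dep_ns.nspname AS dependent_schema,
                dep_cl.relname AS dependent_view
FROM pg_depend d
JOIN pg_rewrite r        ON r.oid = d.objid
JOIN pg_class dep_cl     ON dep_cl.oid = r.ev_class
JOIN pg_namespace dep_ns ON dep_ns.oid = dep_cl.relnamespace
JOIN pg_class src_cl     ON src_cl.oid = d.refobjid
JOIN pg_namespace src_ns ON src_ns.oid = src_cl.relnamespace
WHERE src_ns.nspname = %s
  AND src_cl.relname = %s
  AND dep_cl.relkind = 'v'        -- only report views
  AND dep_cl.oid <> src_cl.oid;   -- exclude the table's own rewrite entries
"""

conn = psycopg2.connect(
    host="your-cluster.redshift.amazonaws.com",  # placeholder
    port=5439, dbname="dev", user="awsuser", password="...",
)
with conn, conn.cursor() as cur:
    cur.execute(DEPENDENT_VIEWS_SQL, ("your_schema", "your_table"))
    for schema, view in cur.fetchall():
        print(f"{schema}.{view} depends on the table and would block DROP TABLE")
```

Anything this prints would need to be dropped or recreated (or drop_cascade enabled) before the destination can recreate the table.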
Tom
12/03/2025, 10:11 AM

Konathala Chaitanya
12/03/2025, 10:46 AM
Sync failed
0 bytes | no records loaded | 1h 40m 19s
Warning from replication: Airbyte could not start the sync process or track the progress of the sync.
message='Airbyte could not track the sync progress. Sync process exited without reporting status.', type='io.airbyte.workers.exception.WorkloadMonitorException', nonRetryable=false
What is the error?

Назар Тутин
12/03/2025, 1:00 PM

J Bob
12/03/2025, 1:29 PM

Daniel de Cloet
12/03/2025, 2:22 PM
/api/public/v1/workspaces/<workspace ID>/definitions/sources/<source UUID>, the airbyte-server becomes unresponsive and is killed due to failing liveness checks.
Is this a known problem?
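For anyone trying to reproduce this, a minimal sketch of the request being described; the host, IDs, and token are placeholders, and GET is assumed since the message does not say which HTTP method was used.

```python
# Sketch of the call described above, for reproducing the liveness-check issue.
# HOST, WORKSPACE_ID, SOURCE_DEFINITION_ID, and TOKEN are placeholders.
import requests

HOST = "http://localhost:8001"  # placeholder for your airbyte-server / API host
WORKSPACE_ID = "00000000-0000-0000-0000-000000000000"          # placeholder
SOURCE_DEFINITION_ID = "11111111-1111-1111-1111-111111111111"  # placeholder
TOKEN = "..."  # placeholder bearer token, if your deployment requires one

url = (
    f"{HOST}/api/public/v1/workspaces/{WORKSPACE_ID}"
    f"/definitions/sources/{SOURCE_DEFINITION_ID}"
)
resp = requests.get(
    url,
    headers={"Authorization": f"Bearer {TOKEN}", "Accept": "application/json"},
    timeout=30,
)
print(resp.status_code, resp.text[:500])
```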
Martin Brummerstedt
12/03/2025, 2:58 PM

Kevin Robert
12/03/2025, 3:48 PM

kanchalkumar karale
12/03/2025, 4:02 PM

Kuntal Basu
12/03/2025, 4:32 PM

Jared Parco
12/03/2025, 4:49 PM