Jian Zhu
05/28/2024, 2:29 AM
No stats have been persisted for job 8 stream StreamNameAndNamespace
kapa.ai
05/28/2024, 2:30 AM
…the sync writes to temporary tables named `_airbyte_tmp_<stream_name>` before moving data into the raw tables. Example log entries:

```plaintext
Write config: WriteConfig{streamName=ad_creative_analytics, namespace=null, outputSchemaName=public, tmpTableName=_airbyte_tmp_rji_ad_creative_analytics, outputTableName=_airbyte_raw_ad_creative_analytics, syncMode=append}
```
Source: [GitHub Issue #38534](https://github.com/airbytehq/airbyte/issues/38534)

3. Check for Casing Issues:
   - There might be issues related to the casing of table names. Ensure that the table names in the source and destination match in terms of casing; a mismatch can cause the sync job to fail (see the SQL sketch after this message). Example error:

```plaintext
java.lang.RuntimeException: org.postgresql.util.PSQLException: ERROR: relation "public.public_raw__stream_Job" does not exist
```
Source: [GitHub Discussion #34423](https://github.com/airbytehq/airbyte/discussions/34423)

If these steps do not resolve the issue, you might need to provide more specific logs or details about your configuration for further assistance. Additionally, you can check the [Airbyte documentation](https://docs.airbyte.com/) for more detailed troubleshooting steps. If you need further help, you can also open a new issue or discussion on the [Airbyte GitHub repository](https://github.com/airbytehq/airbyte) to get support from the community and maintainers.
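To make the casing check in step 3 concrete, here is a minimal sketch for a Postgres destination, reusing the relation name from the example error (adjust the schema and name to your setup). Postgres folds unquoted identifiers to lowercase, so a table created with a quoted mixed-case name must be referenced with exactly that casing; this query shows the casing actually stored in the catalog.

```sql
-- Look up the stored casing of the relation from the example error.
-- A hit that differs only in case from "public_raw__stream_Job" means the
-- sync and the destination disagree on identifier casing.
select table_schema, table_name
from information_schema.tables
where table_schema = 'public'
  and lower(table_name) = lower('public_raw__stream_Job');
```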
Jian Zhu
05/28/2024, 8:50 AM
kapa.ai
05/28/2024, 8:50 AM
### Known Limitations

1. Modified Records May Not Be Replicated:
   - If a record is modified but its cursor field is not updated, the incremental sync will not pick it up, because the sync only queries for rows whose cursor value is greater than the maximum cursor value seen in the previous sync:

```sql
select * from table where cursor_field > 'last_sync_max_cursor_field_value'
```
Source: [Airbyte Documentation - Known Limitations](https://docs.airbyte.com/using-airbyte/core-concepts/sync-modes/incremental-append-deduped#known-limitations)

2. Inclusive Cursors:
   - Airbyte provides an at-least-once delivery guarantee, meaning it is acceptable for sources to re-send some data when run incrementally. This is particularly relevant when a source's cursor is not very granular. For example, if a cursor field has the granularity of a day, running the source twice in the same day may result in duplicate records.
   - By convention, sources should prefer resending data if the cursor field is ambiguous.

Source: [Airbyte Documentation - Inclusive Cursors](https://docs.airbyte.com/using-airbyte/core-concepts/sync-modes/incremental-append-deduped#inclusive-cursors)

### Practical Issues

1. Destination Compatibility:
   - The Incremental Append + Deduped mode is only available for a few destinations. If your destination does not support this mode, you will not be able to use it.
   - Example: A user reported that this mode was not available for MySQL as a destination.

Source: [GitHub Issue #33242](https://github.com/airbytehq/airbyte/issues/33242)

2. Performance Concerns:
   - Scanning the full table to figure out duplicates can be time-consuming and may lead to performance issues, especially if the replication slot accumulates a massive amount of data (see the deduplication sketch after this message).
   - Example: A user expressed concerns about the replication slot accumulating data to a dangerous level due to the time taken for each batch.

Source: [Airbyte Forum Post](https://discuss.airbyte.io/t/1702)

### Recommendations

- Ensure Proper Cursor Updates: Make sure that any modifications to records also update the cursor field so that the incremental sync picks up the changes (see the trigger sketch after this message).
- Check Destination Compatibility: Verify that your destination supports the Incremental Append + Deduped mode.
- Monitor Performance: Keep an eye on performance and disk space usage, especially if you are dealing with large datasets.

If you encounter specific issues or need further assistance, consider opening a new issue or discussion on the [Airbyte GitHub repository](https://github.com/airbytehq/airbyte) to get support from the community and maintainers.
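To make the "Ensure Proper Cursor Updates" recommendation concrete, here is a minimal sketch for a Postgres source, assuming the stream uses an `updated_at` timestamp column as its cursor and reusing the `ad_creative_analytics` table name from the earlier log purely as an illustration (it is not prescribed by Airbyte). A trigger like this keeps the cursor current on every update, so modified rows satisfy the incremental query shown above. On Postgres versions before 11, use `execute procedure` instead of `execute function`.

```sql
-- Keep the cursor column current on every UPDATE so incremental syncs see the change.
create or replace function set_updated_at() returns trigger as $$
begin
  new.updated_at := now();
  return new;
end;
$$ language plpgsql;

create trigger touch_updated_at
before update on ad_creative_analytics
for each row
execute function set_updated_at();
```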
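And to illustrate why the deduplication step can be expensive (the "Performance Concerns" item above): deduping by primary key while keeping the row with the latest cursor value implies windowing over every accumulated row. This is only a conceptual sketch, not Airbyte's actual SQL; the table and column names (`raw_ad_creative_analytics`, `id`, `updated_at`) are hypothetical.

```sql
-- Conceptual dedup: keep only the newest version of each primary key.
-- The window function has to scan the whole table, which is why this
-- step slows down as the raw data grows.
select *
from (
  select r.*,
         row_number() over (partition by id order by updated_at desc) as rn
  from raw_ad_creative_analytics r
) ranked
where rn = 1;
```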