# ask-ai
@kapa.ai I would like to implement an API call that syncs only yesterday's data, however I have been having some issues with the date configuration. Query parameters:

```
start: {{ (now_utc() - timedelta(days=1)).strftime('%Y-%m-%dT00:00:00Z') }}
end: {{ now_utc().strftime('%Y-%m-%dT23:59:59Z') }}
```

Yet I still receive errors. Any idea how to fix this?
Based on the knowledge sources provided, it seems like you're trying to set up an incremental sync with Airbyte and having issues with the date configuration. In Airbyte, the date format for incremental sync is typically `%Y-%m-%dT%H:%M:%SZ`. This is evident from the examples in the [Connector Builder UI documentation](https://docs.airbyte.io/connector-development/connector-builder-ui/incremental-sync#example) and the [Connector Builder UI tutorial](https://docs.airbyte.io/connector-development/connector-builder-ui/tutorial#adding-incremental-reads). However, the exact configuration might vary depending on the specific source connector you're using. For instance, in the case of Google Analytics 4, a user reported that there was no option to set a custom cursor field or set the end date to yesterday's date; another user suggested a custom solution involving a custom-built Docker image ([source](https://discuss.airbyte.io/t/2591)). If you're still having issues, it would help to know the specific source connector you're using and the exact error message you're receiving, so the troubleshooting can be more targeted.
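For illustration, here is a minimal plain-Python sketch (not Airbyte interpolation syntax) of a strict yesterday-only window in that format. One thing worth checking in the parameters above: the `end` bound calls `now_utc()` directly, which resolves to *today's* date, so a yesterday-only window would derive both bounds from the same shifted datetime:

```python
from datetime import datetime, timedelta, timezone

# Yesterday's UTC date, used for both bounds of the window.
yesterday = datetime.now(timezone.utc) - timedelta(days=1)

start = yesterday.strftime('%Y-%m-%dT00:00:00Z')  # e.g. 2024-05-01T00:00:00Z
end = yesterday.strftime('%Y-%m-%dT23:59:59Z')    # e.g. 2024-05-01T23:59:59Z
print(start, end)
```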
@kapa.ai also, would there be a sync method that overwrites all of the previously extracted data? What I mean is: I export today's data, and tomorrow's sync would just overwrite/delete all of it and insert the new data?
Yes, Airbyte provides a sync method called Full Refresh - Overwrite that does exactly what you're asking for. This method retrieves all available data requested from the source and writes it in the destination, regardless of whether the data has previously been synced or not. In the Overwrite variant, new syncs will destroy all data in the existing destination table and then pull the new data in. Therefore, data that has been removed from the source after an old sync will be deleted in the destination table [[source](https://docs.airbyte.com/using-airbyte/core-concepts/sync-modes/full-refresh-overwrite)]. Here's an example of how the data in the destination changes before and after the sync: Before the sync:
| Languages |
| --- |
| Python |
| Java |
| Bash |

New data in the source:

| Languages |
| --- |
| Python |
| Java |
| Ruby |

After the sync (note how the old value "Bash" is no longer present):

| Languages |
| --- |
| Python |
| Java |
| Ruby |
Please note that the mechanism by which a destination connector accomplishes the full refresh will vary from destination to destination. For certified database and data warehouse destinations, Airbyte recreates the final table each sync. This allows the previous sync's data to remain viewable by writing to a "final-table-tmp" location as the sync is running, and at the end dropping the old "final" table, and renaming the new one into place [[source](https://docs.airbyte.com/using-airbyte/core-concepts/sync-modes/full-refresh-overwrite-deduped#destination-specific-mechanism-for-full-refresh)].
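To make the "final-table-tmp" swap concrete, here is a minimal sketch of that pattern (using sqlite3 purely for illustration; the table names are made up, and each Airbyte destination implements this differently):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# "final" table left over from the previous sync.
con.execute("CREATE TABLE final (languages TEXT)")
con.executemany("INSERT INTO final VALUES (?)", [("Python",), ("Java",), ("Bash",)])

# The new sync writes to a temporary table first, so the previous
# sync's data stays readable while the sync is running.
con.execute("CREATE TABLE final_tmp (languages TEXT)")
con.executemany("INSERT INTO final_tmp VALUES (?)", [("Python",), ("Java",), ("Ruby",)])

# Only at the end is the old table dropped and the new one renamed into place.
con.execute("DROP TABLE final")
con.execute("ALTER TABLE final_tmp RENAME TO final")

print(con.execute("SELECT languages FROM final").fetchall())
# -> [('Python',), ('Java',), ('Ruby',)]
```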
@kapa.ai thank you, do you think your "note" would apply to the Snowflake data warehouse?
Yes, the note about the full refresh mechanism does apply to Snowflake. As per the [Airbyte documentation](https://docs.airbyte.com/using-airbyte/core-concepts/sync-modes/full-refresh-overwrite-deduped#destination-specific-mechanism-for-full-refresh), for certified database and data warehouse destinations, which include Snowflake, Airbyte recreates the final table each sync. This means that during the sync, data is written to a temporary location ("final-table-tmp"), and at the end of the sync the old "final" table is dropped and the new one is renamed into place. This approach ensures that the previous sync's data remains viewable for as long as possible and is only replaced once the new data is ready. Moreover, a [GitHub issue](https://github.com/airbytehq/airbyte/issues/40673) confirms that this behavior has been implemented for Snowflake: the workflow for a full refresh no longer deletes the already-synced table as a first step; instead, the table is replaced with the newly synced table at the end of the process. This change was made to prevent consumers from being left without any data to read if the sync takes a long time. So, if you're using Snowflake as your destination, you can use the Full Refresh - Overwrite sync mode to overwrite the entire table with new data on each sync.