# feedback-and-requests
j
Hello, we are testing the amazon ads source connector with a postgres destination. Is "incremental deduped + history" the proper sync mode to use for the *_report__stream tables with "reportDate" as the primary key?
y
Incremental dedup + history will both dedup the data and keep the historical records, as the sync mode's name suggests. It depends on how you want your data to look in the destination.
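(For context: with Airbyte's basic normalization this sync mode typically materializes three tables per stream in Postgres: a raw table like `_airbyte_raw_<stream>`, a history table `<stream>_scd`, and the deduped final table `<stream>`. Below is a minimal sketch to list what actually got created in the destination; the connection details and name patterns are placeholders, not anything from this thread.)

```python
# Minimal sketch: list the tables Airbyte created for the report streams.
# Connection details and LIKE patterns are assumptions / placeholders.
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="warehouse",
                        user="airbyte", password="secret")
with conn, conn.cursor() as cur:
    cur.execute(
        "SELECT table_name FROM information_schema.tables "
        "WHERE table_name LIKE %s OR table_name LIKE %s "
        "ORDER BY table_name",
        ("%report%", "_airbyte_raw_%"),
    )
    for (name,) in cur.fetchall():
        print(name)  # expect raw, _scd and final variants per stream
conn.close()
```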
How do we need to configure it with regard to the primary key? Do we need to use "reportDate"? That makes the most sense to me...
l
The primary key must be the field(s) you want to group by / dedup the data on.
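(A minimal sketch, not Airbyte's actual code, of what "dedup by primary key" means: group records by the chosen key and keep only the most recently emitted version of each key. Field names below are illustrative; note that if `reportDate` is the only key, all rows sharing the same date collapse to one.)

```python
# Conceptual sketch of dedup-by-primary-key: keep the latest record per key.
from typing import Dict, List

def dedup_by_primary_key(records: List[dict], primary_key: str,
                         cursor_field: str = "_airbyte_emitted_at") -> List[dict]:
    """Keep only the most recently emitted record for each primary-key value."""
    latest: Dict[object, dict] = {}
    for rec in records:
        key = rec[primary_key]
        if key not in latest or rec[cursor_field] > latest[key][cursor_field]:
            latest[key] = rec
    return list(latest.values())

# Example: two rows share reportDate, so only the newer one survives.
rows = [
    {"reportDate": "2021-09-01", "clicks": 10, "_airbyte_emitted_at": 1},
    {"reportDate": "2021-09-01", "clicks": 12, "_airbyte_emitted_at": 2},
    {"reportDate": "2021-09-02", "clicks": 7,  "_airbyte_emitted_at": 1},
]
print(dedup_by_primary_key(rows, "reportDate"))  # 2 rows remain
```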
a
I tried something like this: I used incremental dedup + history. After updating some records in the source DB, I clicked sync, yet nothing was updated in the destination DB.
a
This connector does not work properly. I configured it to dedup by reportDate, and yet I just had to truncate a raw table in Postgres that grew to 115 GB after just a few syncs. I am a bit lost.
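(A hedged diagnostic sketch for the destination side, assuming Airbyte's usual Postgres layout of a `_airbyte_raw_<stream>` table plus a normalized final table; the table names, column quoting, and credentials below are placeholders, not values confirmed in this thread.)

```python
# Check how big the raw table has grown and whether the final (deduped)
# table still contains duplicate reportDate values. Names are assumptions.
import psycopg2

RAW_TABLE = "_airbyte_raw_sponsored_products_report_stream"  # assumed name
FINAL_TABLE = "sponsored_products_report_stream"             # assumed name

conn = psycopg2.connect(host="localhost", dbname="warehouse",
                        user="airbyte", password="secret")
with conn, conn.cursor() as cur:
    # How big has the raw table grown?
    cur.execute("SELECT pg_size_pretty(pg_total_relation_size(%s))", (RAW_TABLE,))
    print("raw table size:", cur.fetchone()[0])

    # Any duplicate reportDate values left in the final table?
    # ("reportDate" is quoted on the assumption of a mixed-case column name.)
    cur.execute(
        f'SELECT "reportDate", COUNT(*) FROM {FINAL_TABLE} '
        'GROUP BY "reportDate" HAVING COUNT(*) > 1 LIMIT 10'
    )
    for report_date, n in cur.fetchall():
        print(f"{report_date}: {n} duplicate rows")
conn.close()
```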
m
Good morning, I'd like to bump this thread. We are running several connectors successfully except for the Amazon Ads one. We believe we configured it properly (incremental dedup + history, reportDate as the primary key). Yet deduplication does not work and sync results keep getting appended, which resulted in a 115 GB table after a few days of syncing. What does the Airbyte Team @[DEPRECATED] Marcos Marx think?