# ask-community-for-troubleshooting
r
So I have a Postgres database with partitioned tables, running incremental dedup replication. These tables should never get new rows, since I replicated them during an initial replication, after the data they are partitioned on. So every time normalization runs, these tables take much longer even though they have no new data and therefore no diff; but if some manual changes are made, I'd still like them to replicate. So I need some way to minimize the high normalization run time after initial replication when there are no actual changes. Is there some mock data I could push to Snowflake, or data I could edit there, to accomplish this? I don't want to make changes directly to the DB if I can avoid it. Or am I approaching this wrong, and should I just drop the tables from the sync job after initial replication without removing the data?
u
Hi Robert, is this using the incremental dedup sync mode? If so, Airbyte needs to go through the whole table to check for duplicates, and there is no way to speed that up at this time.
r
The slowness is in normalization though, not replication.
u
Could you please confirm the sync mode you are using?
r
incremental dedup
u
So in incremental dedup sync mode, Airbyte goes through every record to check whether it has been modified, and this is what takes so much time.
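For intuition, here is a minimal sketch of why dedup normalization scales with total table size rather than with the size of the diff. This is illustrative only, not Airbyte's actual implementation; the function and field names (`dedup`, `primary_key`, `cursor_field`) are hypothetical.

```python
# Illustrative sketch (NOT Airbyte's real code): incremental dedup keeps only
# the latest version of each record per primary key, chosen by a cursor field.
# Note that every record must be examined, even rows that never changed,
# which is why normalization stays slow on large tables with no new data.
def dedup(records, primary_key, cursor_field):
    """Keep the most recent record per primary key, by cursor value."""
    latest = {}
    for rec in records:  # full scan: unchanged rows are still visited
        key = rec[primary_key]
        if key not in latest or rec[cursor_field] > latest[key][cursor_field]:
            latest[key] = rec
    return list(latest.values())

rows = [
    {"id": 1, "updated_at": 1, "val": "a"},
    {"id": 1, "updated_at": 2, "val": "b"},  # newer version of id 1 wins
    {"id": 2, "updated_at": 1, "val": "c"},
]
print(dedup(rows, "id", "updated_at"))
```

Because the winning row for each key can only be decided after looking at all candidates, the cost is proportional to the whole table, which matches the behavior described above.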