# ask-community-for-troubleshooting
l
Hi everyone, I'm trying my luck again. I have a MySQL -> Redshift connection with S3 staging and basic normalization. I'm syncing 6 tables, all with Incremental | Deduped + History. 5 tables sync perfectly, receiving records from 2018 onwards. But one table, which should have 10 million records, only receives 400k, and the data only goes back to 2022-02-20. I've set up a new source and a new destination, and it's the same.
u
Hello laila ribke, it's been a while without an update from us. Are you still having problems or did you find a solution?
l
Hi @Marcos Marx (Airbyte), I had a meeting with Andy a few days ago and presented this problem and all the tests I have done. They said they will investigate it further.
s
Hey @laila ribke, were you able to get this issue resolved with your MySQL to Redshift connection? Were you able to sync all of the data?
l
Hi, still the same. Nothing I can do about it
s
Sorry to hear that, is there a GitHub issue about this that I can take a look at? It's a little strange that only 4% of the records are being synced.
l
Hi, sorry, my mistake. We figured out the problem and I informed a colleague of yours about the solution. When setting up the Redshift destination with S3 staging, it's better to also include milliseconds in the file name pattern. The problem is that when there is a large amount of data, it is sent to S3 in batches. When the file name pattern was only {sync_id}, each batch overwrote the previous one.
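To illustrate the mechanism, here is a minimal sketch with plain boto3 (not Airbyte's actual staging code; the bucket name, key layout, and helper are made up): every PUT to the same S3 key replaces the previous object, so with a non-unique file name only the last batch survives.

```python
# Sketch only: shows why a staging key without a unique component loses batches.
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")
BUCKET = "my-staging-bucket"  # hypothetical bucket
SYNC_ID = "12345"             # stands in for the {sync_id} placeholder


def upload_batch(records: bytes, include_millis: bool) -> str:
    if include_millis:
        # Unique key per batch, e.g. sync id + millisecond timestamp:
        # each batch lands in its own object.
        millis = int(datetime.now(timezone.utc).timestamp() * 1000)
        key = f"staging/{SYNC_ID}_{millis}.csv"
    else:
        # Key derived from the sync id alone: every batch reuses the same key,
        # so each PUT silently overwrites the previous batch.
        key = f"staging/{SYNC_ID}.csv"
    s3.put_object(Bucket=BUCKET, Key=key, Body=records)
    return key
```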
s
Ah that makes sense. Thanks for clarifying, hopefully it helps some of our other users in the future.
l
Maybe you could modify the description
s
Sounds like a good suggestion, I'll make a docs issue on our repo.