# feedback-and-requests
p
Is there a way to increase the batch size of records read (currently 1000)? Increasing the batch size would reduce the number of network calls.
2021-10-05 19:38:22 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 67000
2021-10-05 19:38:22 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 68000
2021-10-05 19:38:23 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 69000
2021-10-05 19:38:23 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 70000
2021-10-05 19:38:24 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 71000
When dealing with tables that have millions of records, increasing the batch size would have a significant performance benefit.
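(For context: in JDBC-based connectors, per-query batching is usually governed by the driver's fetch size, which hints how many rows are pulled per network round trip. Below is a minimal sketch at the plain JDBC level, not Airbyte's actual connector code; the connection URL, credentials, table name, and the value 10000 are all hypothetical.)
```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class FetchSizeExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical MSSQL connection details; replace with your own.
        String url = "jdbc:sqlserver://localhost:1433;databaseName=demo";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             PreparedStatement stmt = conn.prepareStatement("SELECT * FROM big_table")) {
            // Fetch size is a hint to the driver: how many rows to retrieve
            // per round trip. A larger value means fewer network calls
            // at the cost of more memory per batch.
            stmt.setFetchSize(10000);
            try (ResultSet rs = stmt.executeQuery()) {
                long count = 0;
                while (rs.next()) {
                    count++;
                }
                System.out.println("Rows read: " + count);
            }
        }
    }
}
```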
From what source, @Philippe Boyd?
u
MSSQL for instance
u
could be any SQL source actually
u
Philippe, there is an issue https://github.com/airbytehq/airbyte/issues/4314 to implement this for JDBC sources. It's probably a feature the team will focus on after the cloud launch, aka in the next few weeks 😄