little-umbrella-40933
05/31/2023, 3:18 PM
Q: why isn't the chunk size set via configuration, with a monitoring system to adapt it?
Do I understand you correctly: you would like to have the chunk size "dynamically" adjusted by the monitoring system?
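The "dynamically adjusted" idea raised in the question could be sketched as a simple feedback loop that resizes the chunk based on how long the last batch took. This is only an illustration of the concept; the function, constants, and limits below are hypothetical and not part of the actual queue worker.

```python
# Hypothetical feedback loop: grow the chunk size while batches finish
# quickly, shrink it when they run long. All names are illustrative.
TARGET_SECONDS = 5.0          # desired wall time per batch (assumed)
MIN_CHUNK, MAX_CHUNK = 10, 1000

def adjust_chunk_size(current: int, batch_seconds: float) -> int:
    """Scale the chunk size toward the target batch duration, clamped."""
    if batch_seconds <= 0:
        return current
    scaled = int(current * TARGET_SECONDS / batch_seconds)
    return max(MIN_CHUNK, min(MAX_CHUNK, scaled))
```

For example, a chunk of 200 messages that took 10 s would be halved to 100, while one that finished in 2.5 s would be doubled to 400.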
little-umbrella-40933
05/31/2023, 3:21 PM
Q: it is configurable, but the example presented was hard-coded
It's configurable in the code. The idea here is that you can specify a chunk size per queue, not just "one global number". In the example on the slide we put an INT value directly, just to illustrate the concept, but you can pull that number from the configuration, or even from environment variables, if that helps.
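The per-queue lookup described above, with an environment-variable override falling back to configuration, could look roughly like this. The queue names, variable naming scheme, and defaults are made up for illustration; they are not the product's actual configuration keys.

```python
import os

# Hypothetical configuration: a chunk size per queue name, with an
# environment-variable override and a global default.
DEFAULT_CHUNK_SIZE = 100
CHUNK_SIZES = {
    "event": 500,
    "publish": 200,
}

def chunk_size_for(queue: str) -> int:
    """Resolve the chunk size for a queue: env var wins over config."""
    env_value = os.environ.get(f"CHUNK_SIZE_{queue.upper()}")
    if env_value is not None:
        return int(env_value)
    return CHUNK_SIZES.get(queue, DEFAULT_CHUNK_SIZE)
```

With this shape, operators can retune a single queue (e.g. by exporting `CHUNK_SIZE_EVENT=50`) without redeploying code.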
little-umbrella-40933
05/31/2023, 3:43 PM
The queue:worker:start command, which runs for 1 minute (and then restarts), spawns multiple queue:task:start commands that process concrete queues within that minute, each with its own chunk_size. Your goal is to configure the chunk size for maximum throughput. In the ideal scenario, you have multiple runs of queue:task:start that are fast enough and don't consume a lot of memory. If 1000 messages are consumed in 30 seconds, but 200 messages in 5 seconds, then a chunk size of 200 will lead to much better results! Of course, if you set the chunk size to 1 message, the overhead of running the "task" itself will level out all the advantages.
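The throughput comparison in the answer above can be checked with simple arithmetic: 1000 messages in 30 seconds is about 33 msg/s, while 200 messages in 5 seconds is 40 msg/s, so the smaller chunk wins.

```python
def throughput(messages: int, seconds: float) -> float:
    """Messages processed per second for one chunk run."""
    return messages / seconds

# The two cases from the answer above:
big_chunk = throughput(1000, 30)   # roughly 33.3 msg/s
small_chunk = throughput(200, 5)   # 40.0 msg/s
```

Note this ignores per-task startup overhead, which is exactly why, as the answer says, a chunk of 1 message performs poorly despite each run being very fast.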