# help
l
Hey team 👋 Following up on our Meet-up’s topic: Queues in Spryker - please ask your questions in the thread 😉
🙌🏼 1
🙌 2
@echoing-lunch-7711 @adorable-wolf-29357 (@green-salesmen-82630?)
g
As of now, Spryker doesn't support adding digital/downloadable products. Is there something like that in the pipeline, and what would be the best approach to add this feature at project level in Spryker?
e
hello !!!
👋 3
Have there been any problems with clustering RabbitMQ instances and the consumers in Spryker?
l
@echoing-lunch-7711 what do you mean - is that about the possibility of running an RMQ cluster? Could you please rephrase or clarify your need/issue?
@echoing-lunch-7711
Q: Why isn't the chunk size set by configuration, with a monitoring system to adapt it?
Do I understand you correctly: you would like to have a chunk size “dynamically” adjusted by the monitoring system?
Q: It is configurable, but the example presented was hard-coded.
It's configurable in the code. The idea here is that you can specify a chunk size per queue, not just "one global number". In the example on the slide, we put an int value directly just to illustrate the concept, but you can pull that number from the configuration, or even from environment variables, if that helps.
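To make the "per queue, not hard-coded" idea concrete, here is a minimal sketch of a resolver that reads the chunk size from an environment variable first and falls back to a per-queue map. The class name, env-variable scheme, and the queue names/values are illustrative assumptions, not actual Spryker API - in a project you would wire something like this into your own queue configuration.
```php
<?php

/**
 * Hypothetical helper: resolves a per-queue chunk size instead of one
 * hard-coded global number. Names and values are illustrative only.
 */
class ChunkSizeResolver
{
    /** Fallback chunk sizes per queue name (illustrative values). */
    private const DEFAULTS = [
        'sync.storage.product' => 500,
        'sync.search.product' => 200,
        'event' => 1000,
    ];

    private const GLOBAL_DEFAULT = 100;

    public function resolve(string $queueName): int
    {
        // Allow an override via environment variable,
        // e.g. CHUNK_SIZE_SYNC_STORAGE_PRODUCT=250
        $envKey = 'CHUNK_SIZE_' . strtoupper(str_replace(['.', '-'], '_', $queueName));
        $envValue = getenv($envKey);

        if ($envValue !== false && ctype_digit($envValue)) {
            return (int)$envValue;
        }

        return self::DEFAULTS[$queueName] ?? self::GLOBAL_DEFAULT;
    }
}

// Usage:
// $chunkSize = (new ChunkSizeResolver())->resolve('sync.storage.product');
```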
e
• I was just asking whether we should expect any hiccups when using RabbitMQ clusters.
• Yes, I would rather have a monitoring system that automatically adapts the chunk size to make the system more resilient, even if the change would raise some sysadmin notification.
l
1. Hmm, not that I'm aware of, at least! But still, the question is pretty broad - if you have a specific use case or issue that you expect, I can clarify.
2. This is really a nice idea, but also a bit tricky. In short, how it works: the `queue:worker:start` command, which runs for 1 minute (and then restarts), spawns multiple `queue:task:start` commands that process concrete queues within that minute, each with its own chunk_size. Your goal is to configure the chunk size for maximum throughput. In the ideal scenario, you have multiple runs of `queue:task:start` that are fast enough and don't consume a lot of memory. If 1000 msgs are consumed in 30 secs, but 200 msgs in 5 secs, then a chunk of 200 will lead to much better results (see the worked numbers below)! Of course, if you set the chunk to 1 msg, you have the overhead of running the "task" itself, which will level out all the advantages.
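Just to spell out the arithmetic behind that example (numbers are the ones from this thread, the snippet only does the division):
```php
<?php

// A bigger chunk is not automatically faster if each task run gets slower.
$runs = [
    ['chunkSize' => 1000, 'seconds' => 30], // 1000 msgs consumed in 30s
    ['chunkSize' => 200,  'seconds' => 5],  // 200 msgs consumed in 5s
];

foreach ($runs as $run) {
    $throughput = $run['chunkSize'] / $run['seconds'];
    printf("chunk %4d => %.1f msg/s\n", $run['chunkSize'], $throughput);
}

// chunk 1000 => 33.3 msg/s
// chunk  200 => 40.0 msg/s
// Within the 1-minute worker window, the smaller chunk processes more
// messages overall - as long as the per-task overhead stays small.
```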
Regarding the "automatic chunk size, based on the monitoring metrics": we have that idea in mind, but it's not planned yet, as we are still investigating the technical feasibility. Memory consumption is usually not a static thing, since it depends on the amount of data your entity holds. You may have some products consisting of a few KB of data in total, while others could be tens or hundreds of MB. That is one of the main challenges. Anyway, as soon as we find a nice solution here, we will make it loud, no worries 😉
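For illustration only, this is the kind of feedback loop such a monitoring-driven chunk size could use, and why the varying payload size makes it tricky. This is not a Spryker feature - the function name and thresholds are made-up assumptions:
```php
<?php

/**
 * Hypothetical sketch of an adaptive chunk size, driven by the memory
 * usage of the previous task run. Thresholds are arbitrary examples.
 */
function adaptChunkSize(int $currentChunkSize, int $peakMemoryBytes, int $memoryLimitBytes): int
{
    $usage = $peakMemoryBytes / $memoryLimitBytes;

    if ($usage > 0.8) {
        // Entities turned out heavier than expected: shrink the batch.
        return max(1, (int)floor($currentChunkSize / 2));
    }

    if ($usage < 0.4) {
        // Plenty of headroom: grow carefully.
        return (int)ceil($currentChunkSize * 1.5);
    }

    return $currentChunkSize;
}

// The challenge mentioned above: $peakMemoryBytes depends on the payload
// (a product can be a few KB or hundreds of MB), so the last run is only
// a rough predictor for the next one.
```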
cc @echoing-lunch-7711 does that answer your question?
e
yes thank you
👍 1