# help
v
Our Spryker PaaS running on Docker in AWS is very slow: response times can exceed 10s and CPU utilization of the Glue container hits 100% with just 10 users. I was able to fix the problem in my local Spryker by running "composer dumpautoload -o"; loading times and CPU utilization decreased by 90% with the optimized autoload. I tried to debug the use of the autoloader in production and it seems the autoload command is run and the autoload files are generated successfully, but there is still no change in the very bad performance. Can you please give me a clue what might be wrong with the performance of the Spryker app in production?
👀 1
a
Hey Patrik 👋 Slow performance is never nice... 🦥 I don't see anything weird in the AWS status dashboard, so this might be something specific to your environment. In that case I think the fastest resolution would be to get the Support Team involved by creating a ticket for this.
If anyone else is experiencing unusually slow performance on their production site: let us know, that might help our teams to find the issue faster.
w
Take a look into newrelic and check the slow transactions there. If you have enough traffic (a few requests per hour), there should be at least one trace available that could show you the performance issue.
🙏 1
Is the performance degradation related to a deployment, or did it appear without any code change? On your local env autoloading will surely be the most time-consuming part in the traces; one option is to dump the autoloader as you did and then profile the request again, for example with Xdebug, to analyze which parts are still slow.
🙏 1
v
Thanks for your help. The local performance improvement appeared without any code change, just the autoload command. I found good clues in the newrelic logs related to translations. Hopefully we will find a solution from that and the system speed will improve.
👍 1
w
Do you use the Redis cache to multi-get keys for pages that have already been accessed (https://docs.spryker.com/docs/scos/dev/back-end-development/client/use-and-configure-redis-as-a-key-value-storage.html#use-and-configure-redis-cache)? If not, you will see a lot of single gets to Redis in newrelic, which will kill your performance, as every get to Redis in PaaS has roughly 3ms of overhead due to network communication. Sadly there is no cache warmer for the Redis cache, but at least only the first request per page runs into uncached keys.
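Roughly the difference, as a minimal sketch with a plain Predis client (not Spryker's storage client, and the key names are made up):

```php
<?php

require __DIR__ . '/vendor/autoload.php';

use Predis\Client;

$redis = new Client(['host' => '127.0.0.1', 'port' => 6379]);

// Hypothetical storage keys, just for illustration.
$keys = ['kv:translation.key.one', 'kv:translation.key.two', 'kv:translation.key.three'];

// N round trips: every get() pays the ~3ms network latency on its own.
$values = [];
foreach ($keys as $key) {
    $values[$key] = $redis->get($key);
}

// 1 round trip: mget() fetches all keys in a single request.
$values = array_combine($keys, $redis->mget($keys));
```

As I understand the documented Redis cache, Spryker remembers which keys a page used and replays them with a multi-get on later hits, which is what removes most of those single gets.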
v
Redis gets also take a pretty big part of the response time. But thanks, I'll ask our team how we use Redis mget.
w
The previous link to the documentation is what will save you a big part of the issue. Another fix is in https://github.com/spryker/products-rest-api/blob/master/src/Spryker/Glue/Products[…]roductAttribute/AbstractProductAttributeTranslationExpander.php: assemble all keys first and get them all at once from the glossary/translations.
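In pseudocode the change looks roughly like this; the bulk lookup here is a stand-in closure, not the actual Spryker glossary API, and the key prefix is only illustrative:

```php
<?php

// Sketch only: the "glossary client" is a fake closure so the example runs
// standalone. The point is the shape of the change, not the exact calls.

$attributes = ['color' => 'red', 'size' => 'M', 'material' => 'cotton'];
$localeName = 'en_US';

// Pretend bulk lookup: resolves many glossary keys in one round trip.
$translateMany = function (array $glossaryKeys, string $localeName): array {
    $result = [];
    foreach ($glossaryKeys as $key) {
        $result[$key] = strtoupper($key) . ' (' . $localeName . ')';
    }
    return $result;
};

// Before (simplified): one storage lookup per attribute key, i.e. N round trips.
// After: collect all glossary keys first, then resolve them in a single call.
$glossaryKeys = [];
foreach (array_keys($attributes) as $attributeKey) {
    $glossaryKeys['product.attribute.' . $attributeKey] = $attributeKey;
}

$translations = $translateMany(array_keys($glossaryKeys), $localeName);

$translatedAttributes = [];
foreach ($glossaryKeys as $glossaryKey => $attributeKey) {
    $translatedAttributes[$attributeKey] = $translations[$glossaryKey] ?? $attributeKey;
}

print_r($translatedAttributes);
```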
🙏 1
a
It might be the first thing you checked already, but since you mentioned composer autoload: you can also check the deploy configuration of the affected environment and whether you have defined the composer autoload options there: https://docs.spryker.com/docs/scos/dev/the-docker-sdk/202212.0/deploy-file/deploy-file-reference-1.0.html#composer
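For reference, the relevant piece of a deploy file can look roughly like this; the key names follow the linked reference, but double-check them against your Docker SDK version:

```yaml
# deploy.yml (excerpt) - assumed key names, verify against the deploy file reference
composer:
    mode: --no-dev
    autoload: --classmap-authoritative
```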
w
Given the newrelic screenshot, I'm pretty sure the performance issue is not due to composer autoloading, although your suggestion is still a good one in general. The trace clearly attributes ~80% to one part of the code, translating attribute keys, or even ~98% if you also count the Redis gets and reads.
If autoloading were the issue here, you would see a lot of file_exists calls accounting for a big portion of the slow time; that's normally the first sign to look for in traces to identify autoloading issues, and it can be fixed by using an authoritative class map from composer. If it's not file_exists but file reads that are the most time consuming, that is usually also due to autoloading, but more often the root cause is a slow disk, which in turn is often a sign of a not properly configured PHP opcache.
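To rule out the opcache side quickly, something like this can be dropped into the container; opcache_get_status() and the ini settings are standard PHP, but note that opcache for CLI is controlled separately via opcache.enable_cli, so it is most telling when hit through a web request. The class map side is just `composer dump-autoload --optimize --classmap-authoritative`.

```php
<?php

// Minimal opcache sanity check (standard PHP functions, nothing Spryker-specific).
if (!function_exists('opcache_get_status') || opcache_get_status(false) === false) {
    echo "opcache is not enabled for this SAPI\n";
    exit(1);
}

$status = opcache_get_status(false);

printf("opcache enabled:     %s\n", $status['opcache_enabled'] ? 'yes' : 'no');
printf("cached scripts:      %d\n", $status['opcache_statistics']['num_cached_scripts']);
printf("max cached keys:     %d\n", $status['opcache_statistics']['max_cached_keys']);
printf("hit rate:            %.2f%%\n", $status['opcache_statistics']['opcache_hit_rate']);
printf("validate_timestamps: %s\n", ini_get('opcache.validate_timestamps'));
printf("cache full:          %s\n", $status['cache_full'] ? 'yes' : 'no');
```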
a
Agree, my comment was just for reference in case it turned out to be a composer-specific issue. We had problems with bad catalog-search performance as well. That issue was also mostly connected with attribute translations, but in our case with elasticsearch aggregations on top.
👍 1
w
but in our case with elasticsearch aggregations as well
How did you solve this issue? We have the same: a slow search, up to 3 seconds, because Spryker sends all facets along, which results in a huge query and scanning a lot of indices in ES.
a
We ended up implementing a separate endpoint for getting the products without filters/facets. So on PCP we are basically doing two calls: one for the products and one for the available filters.
👍 1