# ask-for-help
r
Happy to open this PR and discuss it
a
Do you have a change in mind? I faced the same issue when I was trying out BentoML with FastAPI, where I had a middleware that was using `prometheus_client`. A workaround (or probably a solution for the problem) was to use `bentoml.metrics` instead of `prometheus_client` in the code; the only drawback is that not all of the methods are implemented as part of `bentoml.metrics`.
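For illustration, a minimal sketch of that swap inside a FastAPI middleware, assuming `bentoml.metrics.Counter` mirrors the `prometheus_client` constructor as described above; the metric name and middleware are made up, not taken from the original service:

```python
# Sketch of the workaround: replace prometheus_client with bentoml.metrics,
# which proxies the same constructors but is multiprocess-aware in BentoML.
import bentoml
from fastapi import FastAPI, Request

# Previously: from prometheus_client import Counter
REQUEST_COUNT = bentoml.metrics.Counter(
    name="app_requests_total",          # illustrative metric name
    documentation="Total HTTP requests",
    labelnames=["path"],
)

app = FastAPI()

@app.middleware("http")
async def count_requests(request: Request, call_next):
    # Hypothetical middleware: increment a counter per request path.
    REQUEST_COUNT.labels(path=request.url.path).inc()
    return await call_next(request)
```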
r
I think the docs were clear about using `bentoml.metrics`, but it took a bit of time to make my services follow that. Maybe a fallback approach with a warning instead of stopping `bentoml serve` completely?
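A rough, purely illustrative sketch of what that fallback could look like; the helper and the caught error are hypothetical, not existing BentoML behaviour:

```python
# Hypothetical sketch of the suggested behaviour: warn and disable metrics
# rather than aborting `bentoml serve`. Not actual BentoML code.
import warnings

def init_request_counter():
    try:
        # Direct use may conflict with BentoML's multiprocess setup.
        from prometheus_client import Counter
        return Counter("app_requests_total", "Total HTTP requests")
    except Exception as exc:  # placeholder for whatever error surfaces at startup
        warnings.warn(f"Falling back to no-op metrics: {exc}", RuntimeWarning)
        return None
```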
c
@Ragy Haddad great suggestion! I vaguely remember this had to do with the default Prometheus client having issues in a multi-process environment like BentoML. Note that in a traditional FastAPI/Flask app, all processes are equivalent forks of each other, but BentoML runs each Runner process group separately from the serving logic, which makes the metrics/tracing implementation more challenging. I will look into it a bit more and see if it's possible to support using `prometheus_client` directly, though.
🙏 2
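For background on why the default client is awkward here: per the prometheus_client documentation, multi-process servers need `PROMETHEUS_MULTIPROC_DIR` set to a shared, writable directory before the workers start, plus an aggregating collector instead of the usual per-process default registry. A short sketch:

```python
# Why the default client struggles in a multi-process server: prometheus_client's
# multiprocess mode merges per-process metric files written under
# PROMETHEUS_MULTIPROC_DIR via an explicit collector.
from prometheus_client import CollectorRegistry, generate_latest, multiprocess

def metrics_payload() -> bytes:
    # Aggregate the metric files from every worker process into one exposition
    # suitable for a /metrics endpoint.
    registry = CollectorRegistry()
    multiprocess.MultiProcessCollector(registry)
    return generate_latest(registry)
```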
a
Which methods do you want to use? `bentoml.metrics` is mostly a passthrough to `prometheus_client`. Note that some of the functions won't be supported in multiprocess mode, per the prometheus_client documentation.
👍 1
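One concrete example of such a limitation from the prometheus_client documentation: in multiprocess mode, Gauges must declare how per-process values are merged, and Info/Enum metrics and custom collectors are not supported at all. A small sketch (metric name is illustrative):

```python
# Restriction that carries over to any passthrough of prometheus_client:
# a Gauge in multiprocess mode needs a multiprocess_mode to say how values
# from different worker processes are combined.
from prometheus_client import Gauge

IN_FLIGHT = Gauge(
    "app_requests_in_flight",        # illustrative metric name
    "Requests currently being handled",
    multiprocess_mode="livesum",     # sum across live worker processes
)
```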
r
Thanks for the reply, guys. My service is very simple and I was not really using any `bentoml.metrics`; all that was happening is that I had two other services on my machine using Prometheus. I understand that in production this most likely won't be the case, but during active development it is.
a
Can you send your service definition here if possible?
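For reference, a BentoML 1.x service definition that mounts a FastAPI app typically looks roughly like the sketch below; the model, runner, and routes are placeholders, not the user's actual service:

```python
# Hypothetical shape of the kind of service definition being asked about:
# a BentoML service with a FastAPI app (and its middleware) mounted on it.
import bentoml
from fastapi import FastAPI

app = FastAPI()

@app.get("/healthz")
async def healthz():
    return {"status": "ok"}

runner = bentoml.sklearn.get("my_model:latest").to_runner()  # placeholder model
svc = bentoml.Service("my_service", runners=[runner])
svc.mount_asgi_app(app)  # FastAPI routes/middleware served alongside the Bento API

@svc.api(input=bentoml.io.JSON(), output=bentoml.io.JSON())
async def predict(payload: dict) -> dict:
    result = await runner.predict.async_run([payload["features"]])
    return {"prediction": result.tolist()}
```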