# ask-for-help
@Jiang might be the best person to answer monitoring-related questions.
Jiang:
Hi @Krisztián Szabó.
1. Yes. The monitoring API is designed for this case and will be included in the next release.
2. Yes.
3. Already here: https://docs.bentoml.org/en/latest/reference/metrics.html#metrics-api
4. BentoML supports exporting Prometheus metrics (see the sketch below).
   a. If your team is using BentoML with a custom deployment solution: install the Prometheus stack (or another monitoring stack) in your cluster and collect metrics from the `/metrics` endpoint.
   b. If your team is using Yatai and wants one-click visualization: follow https://docs.bentoml.org/projects/yatai/en/latest/observability/metrics.html#setup-steps
5. Of course.
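To make point 4 concrete, here is a minimal sketch of a service that exposes a custom counter on the Prometheus endpoint. It assumes the `bentoml.metrics` constructors mirror `prometheus_client`, as the Metrics API reference linked above describes; the `iris_clf` model tag and metric names are made up for illustration.

```python
import bentoml
import numpy as np
from bentoml.io import NumpyNdarray

# Hypothetical saved model; swap in your own model tag.
iris_runner = bentoml.sklearn.get("iris_clf:latest").to_runner()
svc = bentoml.Service("iris_classifier", runners=[iris_runner])

# Custom counter; assumed to follow the prometheus_client-style
# constructor documented in the Metrics API reference linked above.
prediction_counter = bentoml.metrics.Counter(
    name="prediction_total",
    documentation="Total number of predictions served",
    labelnames=["predicted_class"],
)

@svc.api(input=NumpyNdarray(), output=NumpyNdarray())
def classify(features: np.ndarray) -> np.ndarray:
    result = iris_runner.predict.run(features)
    # Count each prediction by class so the label shows up in Prometheus.
    for label in np.asarray(result).ravel():
        prediction_counter.labels(predicted_class=str(label)).inc()
    return result
```

After `bentoml serve`, this counter appears alongside the built-in metrics on the service's `/metrics` endpoint, which is the path a Prometheus scrape job (4a) or the Yatai setup (4b) would collect from.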
Krisztián Szabó:
Hi @Jiang, thank you for the response. I just read in the docs that BentoML is geared towards deploying ML models and serving them in production, rather than being an experimentation platform. When I asked about custom metrics I meant statistical metrics, i.e. how well the algorithm is performing (MSE, F1 score, ...).
We are looking for something more like MLflow (which was mentioned in your docs, thanks!).
Jiang:
@Krisztián Szabó Yeah, the metrics I mentioned in 3 and 4 are mainly traditional operational metrics. But we are also developing the BentoML monitoring API, which is designed for ML purposes (including the statistical metrics you mentioned). For out-of-the-box analysis of statistical metrics, BentoML is cooperating with Arize.com to provide an end-to-end solution. It's very simple to set up.
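For a sense of what that monitoring API might look like inside a service, here is a rough sketch under the assumption that it ships as a `bentoml.monitor` context manager whose `log()` call tags each value with a name, a role (feature or prediction), and a data type. The thread above only says the API is coming in the next release, so the names and signatures here are illustrative, not confirmed.

```python
import bentoml
import numpy as np
from bentoml.io import NumpyNdarray

# Hypothetical regression service used only to illustrate monitoring calls.
svc = bentoml.Service("house_price_regressor")

@svc.api(input=NumpyNdarray(), output=NumpyNdarray())
def predict(features: np.ndarray) -> np.ndarray:
    prediction = features.sum(axis=-1)  # stand-in for a real model call

    # Log features and predictions so a downstream tool (e.g. Arize) can
    # join them with ground truth later and compute MSE, F1, drift, etc.
    with bentoml.monitor("house_price_monitoring") as mon:
        mon.log(float(features[0][0]), name="feature_0",
                role="feature", data_type="numerical")
        mon.log(float(prediction[0]), name="predicted_price",
                role="prediction", data_type="numerical")
    return prediction
```

The reason to log features and predictions rather than compute MSE or F1 inside the service is that those statistical metrics need ground-truth labels, which usually arrive later; the logged monitoring data can then be shipped to a platform like Arize that handles the joining and scoring.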