# ask-for-help
Brendon Kasi:
Thanks! I still can't access my model variable though (the one I would get after a simple `torch.load`), and therefore I can't access the norm_params embedded into it. If I extract the model from my runner (by doing `.models[0]` on the runner), I can't seem to find the "norm_params" that I embedded into it after training the network (see pics for details).
Is there anyone who's had a similar problem or has any idea how to solve this? :/
Chaoyu:
Hi @Brendon Kasi - runners are references to a remote procedure running in a different Python process, so accessing `.model` on a runner in the service code won't work.
What you could do is write that code in a custom runner, where you can do all the regular Python operations on a PyTorch model instance.
The links @larme (shenyang) provided above should help
Also, `runner.model` here is a reference to BentoML's saved model object, not the PyTorch model instance.
Note that in the service process, the PyTorch model is not supposed to be loaded at all; it should only be loaded in the runner process.
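A minimal sketch of that custom-runner pattern, following the BentoML 1.0 `Runnable` API (the class name, checkpoint path, and method signature here are illustrative, not from the thread):

```python
import bentoml
import torch


class NormParamsRunnable(bentoml.Runnable):
    SUPPORTED_RESOURCES = ("nvidia.com/gpu", "cpu")
    SUPPORTS_CPU_MULTI_THREADING = True

    def __init__(self):
        # __init__ runs inside the runner process, so regular
        # PyTorch operations (load, device selection, etc.) are fine here.
        self.device = "cuda" if torch.cuda.is_available() else "cpu"
        self.model = torch.load("model.pt", map_location=self.device)  # illustrative path
        self.model.eval()
        # Attributes embedded in the model object (e.g. norm_params)
        # are reachable on the real nn.Module instance here.
        self.norm_params = self.model.norm_params

    @bentoml.Runnable.method(batchable=False)
    def predict(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            return self.model(x.to(self.device))


runner = bentoml.Runner(NormParamsRunnable, name="norm_params_runner")
svc = bentoml.Service("my_service", runners=[runner])
```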
Brendon Kasi:
Thank you @Chaoyu! Think I'm missing something though: if I do all the typical Python operations (model load, device selection, inference function, etc.) on top of the Runner abstraction, then saving the model into BentoML's model store won't be needed anymore, right? In other words, if I manage to have the PyTorch model variable within my custom runner class (meaning I did a `torch.load` inside the runner class), then I won't need to do `bentoml.pytorch.save_model`, right? Is there a way to access the PyTorch model instance from the model store (instead of having to load the model inside the runner class)?
I eventually created a custom runner that loads the model and performs everything I need. By doing that I bypass Bento's model store, but I think that shouldn't be a problem. Thanks!
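For anyone landing on the same question later: bypassing the model store isn't strictly necessary. A sketch, assuming the BentoML 1.0 API and an illustrative model tag `my_model:latest`, of loading the stored PyTorch instance (and any extras saved via `custom_objects`) inside a custom runner:

```python
import bentoml


class StoreBackedRunnable(bentoml.Runnable):
    SUPPORTED_RESOURCES = ("cpu",)
    SUPPORTS_CPU_MULTI_THREADING = True

    def __init__(self):
        # Returns the actual nn.Module from BentoML's model store,
        # so the saved model keeps its versioning and packaging.
        self.model = bentoml.pytorch.load_model("my_model:latest")
        self.model.eval()
        # Extras saved alongside the model, e.g.
        # bentoml.pytorch.save_model("my_model", model,
        #                            custom_objects={"norm_params": norm_params}),
        # come back via the stored model reference:
        bento_model = bentoml.models.get("my_model:latest")
        self.norm_params = bento_model.custom_objects.get("norm_params")

    @bentoml.Runnable.method(batchable=False)
    def predict(self, x):
        return self.model(x)
```

With this variant, no `torch.load` is needed inside the runner, and `bentoml.pytorch.save_model` at training time remains useful.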