# ask-for-help
larme (shenyang):
Hi Elior, I wonder if our new batch inference function is helpful in this case: https://docs.bentoml.org/en/latest/reference/batch.html. Otherwise, can you just `init_local` a runner, then use `getattr(runner, "my_endpoint").run(...)`?
Chaoyu:
@Elior Cohen
```python
svc = bentoml.load("my_bento:latest")
result = svc.apis['classify'].func(input_df)
```
Elior:
@Chaoyu this is exactly what we do, but between the first and the second line we iterate over the runners and `init` them - is it okay if we never init them?
@larme (shenyang) I can't use that in my use case
Chaoyu:
@Elior Cohen it's best to call `init_local` for all required runners in this case
Elior:
@Chaoyu that's what we're doing right now. I was wondering if I should init them differently - whether there's a way to access the service in production mode through the SDK
larme (shenyang):
In production mode we separate the API process and the runner processes, but inside each runner process we still use `init_local` to init the runner. So in your use case I think there's little difference, because you are using `svc` directly like an ML model.
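(For anyone finding this thread later, here is a minimal sketch combining the advice above. It assumes the `bentoml.load` / `svc.apis` / `init_local` calls shown earlier in the thread; `my_bento:latest`, `classify`, and `input_df` are placeholder names from the thread, not real artifacts.)

```python
import bentoml

# Load the saved bento as a Service object for offline / SDK use.
svc = bentoml.load("my_bento:latest")

# In production BentoML runs each runner in its own process; for
# in-process SDK use, initialize every required runner locally first.
for runner in svc.runners:
    runner.init_local()

# Call the API function directly, bypassing the HTTP serving layer.
result = svc.apis["classify"].func(input_df)
```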
Elior:
Okay, thanks