# ask-for-help
I think you should be able to simply import your service.py file and do:

```python
runner.init_local()
result = classify(input_series)
```

since `classify` is essentially a regular function as well. If your `classify` function is async, you can do:

```python
runner.init_local()
result = await classify(input_series)
```
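The sync/async distinction above boils down to: a plain function can be called directly, while an async one must be awaited inside an event loop. A minimal self-contained sketch (the `classify` here is a dummy stand-in, not the real service function, which would come from importing service.py after `runner.init_local()`):

```python
import asyncio

# Dummy stand-in for the service's classify function.
def classify(xs):
    return ["class-a" for _ in xs]

# Async variant of the same function.
async def classify_async(xs):
    return classify(xs)

# A synchronous function can be called directly:
result = classify([[5.9, 3, 5.1, 1]])

# An async function must be driven by an event loop, e.g. asyncio.run
# (inside an already-async context you would use `await` instead):
result_async = asyncio.run(classify_async([[5.9, 3, 5.1, 1]]))
```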
Thank you for the response! Your suggestion is similar to the code shown in the documentation for debugging services:

```python
from service import svc

for runner in svc.runners:
    runner.init_local()

result = svc.apis["classify"].func([[5.9, 3, 5.1, 1]])
```

This way I take advantage of the service's code but not the input/output validation and casting defined in the API (in the code above, for example, the function takes a list as input instead of an ndarray). Is there a way around this problem? And if there is not, is it correct to use this code for batch predictions in production even though the documentation presents it as debugging code?
I’m not sure there’s an easy solution then, but you should be able to do something with the `NumpyNdarray` IO descriptor class in https://github.com/bentoml/BentoML/blob/main/src/bentoml/_internal/io_descriptors/numpy.py. Maybe write your own function for this conversion or validation, or check how the service API uses this class to validate the input?
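One way to approximate the missing casting/validation locally is a small helper that does what the IO descriptor would normally do for an HTTP request: cast the raw list to an ndarray and check its shape. A hedged sketch using only NumPy (the helper name, dtype, and expected column count are assumptions for illustration, not BentoML's API):

```python
import numpy as np

# Hypothetical helper mimicking the casting/validation an ndarray IO
# descriptor would apply before the API function is called.
def to_validated_array(data, dtype=np.float64, expected_cols=4):
    arr = np.asarray(data, dtype=dtype)  # cast list-of-lists to ndarray
    if arr.ndim != 2 or arr.shape[1] != expected_cols:
        raise ValueError(f"expected shape (n, {expected_cols}), got {arr.shape}")
    return arr

# Validate and cast the input before handing it to the service function,
# e.g. svc.apis["classify"].func(arr) in the snippet above.
arr = to_validated_array([[5.9, 3, 5.1, 1]])
```

This keeps the batch-prediction script honest about its inputs without going through the HTTP layer, though it duplicates logic the descriptor already defines.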