Slackbot
10/05/2022, 7:13 PM

Bo
10/05/2022, 7:14 PM

Bo
10/05/2022, 7:14 PM

Amar Ramesh Kamat
10/05/2022, 7:15 PM

Bo
10/05/2022, 7:15 PM

Amar Ramesh Kamat
10/05/2022, 7:16 PM
import tensorflow as tf
import bentoml
service = 'foo'
model_name = 'bar'
model_dir = "models/" + model_name
model = tf.saved_model.load(model_dir)
bentoml_model = bentoml.tensorflow.save_model(
    service,
    model,
    signatures={"__call__": {"batchable": False}},
)
Here, I have a TF-2.x model on my local disk. I am loading it and then saving it via the Bento wrapper.
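A quick sanity check after saving is to fetch the model back by tag before wiring it into a service; a minimal sketch, assuming the snippet above ran and stored the model under the name "foo" (bentoml.models.get and bentoml.tensorflow.load_model are the BentoML 1.x retrieval calls):

import bentoml
import numpy as np

# Look the saved model up in the local model store by tag.
bento_model = bentoml.models.get("foo:latest")
print(bento_model.tag)

# Load the underlying TF object back and call it directly with an ndarray
# (the input shape below is a placeholder; use whatever your model expects).
loaded = bentoml.tensorflow.load_model("foo:latest")
# sample = np.zeros((1, 224, 224, 3), dtype=np.float32)
# print(loaded(sample))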
Yakir Saadia
10/05/2022, 7:18 PM

Yakir Saadia
10/05/2022, 7:19 PM

Amar Ramesh Kamat
10/05/2022, 7:20 PM
import numpy as np
import bentoml
import tensorflow as tf
from PIL import Image
from bentoml.io import JSON
from bentoml._internal.types import JSONSerializable
model_name = 'bar'
# tried model = bentoml.model.get(model_name + ":latest") too
model = bentoml.tensorflow.get(model_name + ":latest")
runner = model.to_runner()
svc = bentoml.Service(model_name, runners=[runner])
@svc.api(input=JSON(), output=JSON())
async def predict(json_obj: JSONSerializable) -> JSONSerializable:
    return await runner.async_run([json_obj])
larme (shenyang)
10/05/2022, 7:20 PM

Amar Ramesh Kamat
10/05/2022, 7:21 PM
> Currently our tensorflow v2 implementation only accepts list/tensor/ndarray inputs.
That's what I expected.
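In other words, the JSON payload has to be turned into an ndarray (or tensor) before it reaches the runner. A minimal sketch of the predict endpoint above rewritten that way, reusing svc and runner from the earlier snippet and assuming the request body is a plain (nested) list of numbers:

import numpy as np

@svc.api(input=JSON(), output=JSON())
async def predict(json_obj: JSONSerializable) -> JSONSerializable:
    # The TF v2 runner only takes list/tensor/ndarray inputs, so convert the
    # decoded JSON (assumed to be a nested list of numbers) to an ndarray first.
    arr = np.asarray(json_obj, dtype=np.float32)
    result = await runner.async_run(arr)
    # Convert the model output back to plain Python types for JSON serialization.
    return np.asarray(result).tolist()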
Amar Ramesh Kamat
10/05/2022, 7:22 PM
> There are some nice ways to debug it, such as wrapping your inference function in another function to make sure there are no alterations by BentoML to the input
Are you referring to custom wrappers?
larme (shenyang)
10/05/2022, 7:24 PM

Amar Ramesh Kamat
10/05/2022, 7:26 PM

Jim Rohrer
10/05/2022, 7:32 PM

larme (shenyang)
10/05/2022, 7:32 PM

larme (shenyang)
10/05/2022, 7:33 PM

Yakir Saadia
10/05/2022, 7:40 PM
> Are you referring to custom wrappers?
Not exactly. You can take your already initialized model (when saving the models to BentoML), add a function to it that calls your actual predict or call functions. Then you can simply run code before and after the model inference. For example, I had a response from my inference that wasn't compatible with BentoML's expectations, so that's what I did:
import types

def pytorch_predict(self, d):
    # run the model's own inference, then convert each result to its pandas .xyxy form
    return [x.pandas().xyxy for x in self(d).tolist()]

model.custom_predict = types.MethodType(pytorch_predict, model)
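If the goal is for BentoML to serve that wrapper rather than the model's default __call__, the usual wiring is to name it in signatures when saving and then call that signature on the runner. A rough sketch, assuming a PyTorch model with the custom_predict wrapper above and a hypothetical tag "yolo" (whether a monkey-patched method survives the framework's serialization depends on how the model is saved, so treat this as illustrative):

import bentoml

# Register the attached wrapper as a callable signature on the saved model.
bentoml.pytorch.save_model(
    "yolo",
    model,
    signatures={"custom_predict": {"batchable": False}},
)

# A service would then invoke the wrapper by its signature name on the runner:
runner = bentoml.pytorch.get("yolo:latest").to_runner()
# result = await runner.custom_predict.async_run(batch)  # inside an async endpoint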
Amar Ramesh Kamat
10/06/2022, 12:19 AM