BentoML preserves the model input passed to the runner. I'd recommend investigating from the model input and working backward. Maybe the input encoding is causing the slight difference in text recognition.
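One way to work backward from the model input is to dump the array the runner sees and the array used locally, then compare them offline. A minimal sketch, assuming NumPy array inputs; the file names and the stand-in data are hypothetical:

```python
import numpy as np

# Hypothetical dumps: in practice, save the real input from inside the
# service (np.save("served_input.npy", arr)) and from your local script.
np.save("local_input.npy", np.ones((2, 3), dtype=np.float64))   # stand-in data
np.save("served_input.npy", np.ones((2, 3), dtype=np.float32))  # stand-in data

local = np.load("local_input.npy")
served = np.load("served_input.npy")

# A dtype or shape mismatch is a common source of small output differences.
print("dtypes:", local.dtype, served.dtype)
print("shapes:", local.shape, served.shape)
print("max abs diff:", np.max(np.abs(local - served.astype(local.dtype))))
```

If the dtypes or values differ here, the discrepancy is introduced before the model runs, not by the model itself.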
SYED ABDUL GAFFAR SHAKHADRI
09/21/2022, 1:36 PM
I am also facing the same issue. @Sean I investigated the model inputs; they are slightly different once BentoML picks them up. Any suggestions here?
Sean
09/21/2022, 11:19 PM
@SYED ABDUL GAFFAR SHAKHADRI I'd verify where the discrepancy is introduced. It was likely caused by input serialization over HTTP. I suggest looking into the IO descriptors used, or into conversions that could lose precision.
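To illustrate the kind of precision loss a conversion can introduce: if a float64 input is cast to float32 anywhere along the serialization path and back, it will no longer round-trip bit-exactly. A self-contained sketch (the float32 hop is a hypothetical stand-in for whatever conversion the IO descriptor performs):

```python
import numpy as np

# A float64 array that passes through float32 somewhere in the pipeline
# (e.g. a serialization or descriptor default) will pick up tiny errors.
original = np.random.default_rng(0).random(1000)               # float64
round_tripped = original.astype(np.float32).astype(np.float64)  # lossy hop

diff = np.max(np.abs(original - round_tripped))
print(f"max abs difference: {diff:.2e}")
print("bit-exact:", np.array_equal(original, round_tripped))
```

Differences on the order of 1e-8 are enough to nudge a text recognition model's output slightly, so this is worth ruling out.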
SYED ABDUL GAFFAR SHAKHADRI
09/22/2022, 2:52 AM
Thank you @Sean. I will take a look at that and get back to you.