# ask-for-help
l
Hi Jay, I simplified your code a little bit and made a complete `service.py` example below:
import bentoml
from bentoml.io import JSON

class MyRunnable(bentoml.Runnable):
    SUPPORTED_RESOURCES = ("cpu",)
    SUPPORTS_CPU_MULTI_THREADING = True

    def __init__(self):
        self.warmup()

    def warmup(self):
        print("warming up...")
        return 1

    @bentoml.Runnable.method(batchable=False)
    def echo(self, req: dict):
        return req


runner = bentoml.Runner(MyRunnable)

svc = bentoml.Service("my_demo", runners=[runner])

@svc.api(input=JSON(), output=JSON())
async def echo(dic):
    ret = await runner.echo.async_run(dic)
    return ret
This `service.py` runs the warm-up code without any problem. I wonder if `Predictor` has something to do with the error. Could you share more detail about this `Predictor`?
u
The predictor code is as follows, and the problematic part is getting the loop from asyncio.
class Predictor:
    def predict(self):
        ...
        loop = asyncio.new_event_loop() # or asyncio.get_event_loop()
        asyncio.set_event_loop(loop)

        tasks = [
            self.async_func1(),
            self.async_func2(),
        ]
        res1, res2 = loop.run_until_complete(
            asyncio.wait_for(asyncio.gather(*tasks), timeout=0.8)
        )
        ...
        return res
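The failure can be reproduced outside BentoML with a minimal sketch. Here `fetch_a`/`fetch_b` are hypothetical stand-ins for `async_func1`/`async_func2`, and `asyncio.run` is used instead of manual loop management; the point is that any run-a-loop call fails when the calling thread already has a loop running, as it does inside an async server:

```python
import asyncio

async def fetch_a():
    await asyncio.sleep(0.01)
    return "a"

async def fetch_b():
    await asyncio.sleep(0.01)
    return "b"

def predict_sync():
    async def _gather():
        return await asyncio.wait_for(
            asyncio.gather(fetch_a(), fetch_b()), timeout=0.8
        )
    # asyncio.run creates, runs, and closes its own private loop, but it
    # only works when NO loop is already running in the current thread.
    return asyncio.run(_gather())

# Called from plain synchronous code, this works fine:
print(predict_sync())  # ['a', 'b']

# Called from code that is already running on an event loop (as request
# handlers in an async server are), it raises RuntimeError:
async def caller():
    try:
        predict_sync()
        return "ok"
    except RuntimeError as err:
        return f"RuntimeError: {err}"

print(asyncio.run(caller()))
# RuntimeError: asyncio.run() cannot be called from a running event loop
```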
l
I wonder if you should use async inside predict at all. Async is mainly for I/O-bound (blocking) operations; predict tasks are usually CPU-heavy, so you may want to run them with multithreading instead.
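For CPU-heavy work, one common sketch is to offload calls to a thread pool with `run_in_executor` so the event loop stays responsive (here `heavy_predict` is a hypothetical stand-in for the real model call):

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

def heavy_predict(n: int) -> int:
    # Hypothetical stand-in for a CPU-bound model call.
    return sum(i * i for i in range(n))

async def handle(values):
    loop = asyncio.get_running_loop()
    with ThreadPoolExecutor(max_workers=4) as pool:
        # run_in_executor hands each call to a worker thread, so the
        # event loop stays free to serve other requests meanwhile.
        tasks = [loop.run_in_executor(pool, heavy_predict, v) for v in values]
        return await asyncio.gather(*tasks)

print(asyncio.run(handle([10, 100])))  # [285, 328350]
```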
u
During predict we need to fetch data from Redis, so it was implemented asynchronously. Is this the wrong way?
l
If the bottleneck of `predict` is Redis querying, then it may make sense to do it asynchronously. I need more time to investigate this problem.
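If Redis I/O is indeed the bottleneck, one option is to make `predict` itself a coroutine, so it reuses the server's existing event loop rather than creating a new one. A sketch with a simulated client (a real one would be e.g. `redis.asyncio` or `aioredis`; `redis_get` below is an assumption, not code from this thread):

```python
import asyncio

async def redis_get(key: str) -> str:
    # Simulated async Redis read standing in for a real client call.
    await asyncio.sleep(0.01)  # pretend network round-trip
    return f"value-for-{key}"

async def predict(keys):
    # predict is itself a coroutine, so it runs on the server's existing
    # event loop: no new_event_loop()/set_event_loop() is needed, and the
    # gather-with-timeout pattern from above carries over directly.
    return await asyncio.wait_for(
        asyncio.gather(*(redis_get(k) for k in keys)), timeout=0.8
    )

print(asyncio.run(predict(["a", "b"])))  # ['value-for-a', 'value-for-b']
```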
u
Thank you for the reply. If I find the cause, I will share it πŸ™‚