# ask-for-help
k
The flask app looks something like this:
```python
import os
import tempfile

import flask

app = flask.Flask(__name__, static_url_path="", static_folder="static")

@app.route("/v1/analyse", methods=["POST"])
def analyse():
    """
    Analyses an image.

    Expects POST parameter 'image'.
    """
    global model
    # get uploaded photos
    uploaded_files = flask.request.files.getlist("image")
    out_json = {
        # API response scheme
        # .......
        "predictions": []
    }
    with tempfile.TemporaryDirectory() as tmp:
        # store images temporarily, then
        # read each image and do classification / prediction
        out_json["predictions"].append(data)
    return flask.jsonify(out_json)

if __name__ == "__main__":
    load_model(os.getenv("MODEL_FOLDER_PATH", "../model"))
    app.run(
        host=os.getenv("HOST", "0.0.0.0"),
        port=int(os.getenv("PORT", "9000")),
        use_reloader=False,
    )
```
l
Hi Krisztian, a Flask app can be bundled with BentoML without changing the Flask app itself. The following docs and example may provide more information:
https://docs.bentoml.org/en/latest/guides/server.html#bundle-wsgi-app-e-g-flask
https://github.com/bentoml/BentoML/tree/main/examples/custom_web_serving/flask_example
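Based on the linked docs, a minimal sketch of how the unchanged Flask app could be mounted into a BentoML service might look like this. The service name `image_analyser` and the module name `my_flask_app` are assumptions for illustration, not names from your project:

```python
# Sketch: bundle an existing Flask WSGI app with BentoML 1.x.
# `my_flask_app` is a hypothetical module containing the Flask `app` above.
import bentoml
from my_flask_app import app as flask_app

# a BentoML service with no runners of its own
svc = bentoml.Service("image_analyser")

# mount the unchanged Flask app; its existing routes
# (e.g. POST /v1/analyse) are served by the BentoML server
svc.mount_wsgi_app(flask_app)
```

You would then serve it with something like `bentoml serve service:svc`, assuming the snippet lives in `service.py`.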
k
The important part would be to call the model that is already in that docker image. In these examples it needs a runner; can I call the model from the docker image?
l
If you want to reuse the existing docker image and load the model directly from it, then I guess this example will be helpful: https://github.com/bentoml/BentoML/blob/main/examples/custom_runner/torch_hub_yolov5/service.py#L8
There you define a custom Runnable and load the model from disk, then use that custom Runnable to create a Runner like:
```python
yolo_v5_runner = bentoml.Runner(Yolov5Runnable)
```
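Adapting that pattern to the Flask snippet above, a custom Runnable might be sketched as follows. Note that `load_model`, the `predict` call, and the `MODEL_FOLDER_PATH` env var are taken from (or assumed to match) your own code, not BentoML's API:

```python
# Sketch: a custom Runnable that loads the model from a path baked
# into the docker image, instead of from the BentoML model store.
import os

import bentoml

class MyModelRunnable(bentoml.Runnable):
    SUPPORTED_RESOURCES = ("cpu",)
    SUPPORTS_CPU_MULTI_THREADING = True

    def __init__(self):
        # load_model is your own helper from the Flask app (not shown here);
        # it reads the model files shipped inside the image
        self.model = load_model(os.getenv("MODEL_FOLDER_PATH", "../model"))

    @bentoml.Runnable.method(batchable=False)
    def predict(self, image):
        # assumed to match however your model is invoked today
        return self.model.predict(image)

my_model_runner = bentoml.Runner(MyModelRunnable)
```

The Runner can then be passed to a `bentoml.Service` and called from your API function via `my_model_runner.predict.run(...)`.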