# ask-for-help
p
I feel that having to install the dependencies by hand doubles the effort of specifying them in the bentofile.yaml, and it hurts the reproducibility of my env.
s
This is something we've been working on, but it's not currently possible, sadly.
e
Totally with you @Paulo Eduardo Neves. It's awful that you have to install your dependencies in CI first and then again during `bentoml containerize`! I think we're installing the same requirements 2-3 times in our setup (which I'm working on right at this moment 🤣)
😅 1
And we're using OpenCV so we have to `apt-get install libcurl` both in and out of the Docker image as well
p
My real problem is the OS dependencies that a non-admin user can't install in the host OS. Using the same requirements.txt file twice would be fine, since there would be no duplication. I think I'll skip `bentoml containerize` altogether and create a Dockerfile with everything. Inside I'd do a `bentoml build && bentoml serve`. What would I miss with this solution?
👀 1
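(A minimal sketch of that Dockerfile approach, assuming a `bentofile.yaml` and `requirements.txt` sit next to the service code; the base image, apt package, and service tag below are placeholders, not from this thread:)

```bash
# Hypothetical Dockerfile for the "build and serve inside one image" approach.
# Base image, packages, and the service tag are assumptions.
cat > Dockerfile <<'EOF'
FROM python:3.10-slim

# OS-level deps that a non-admin user can't install on the host
RUN apt-get update && apt-get install -y --no-install-recommends libcurl4 \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt bentoml

# Service code, bentofile.yaml, and anything bentoml build needs
COPY . .
RUN bentoml build

# Serve the freshly built bento at container start
CMD ["bentoml", "serve", "my_service:latest"]
EOF

docker build -t my-bento-service .
```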
j
that’s how I did it @Paulo Eduardo Neves actually, just have my own base image with all the dependencies, then run `bentoml build && serve` in the base image… never managed to get `containerize` working either
c
Does bentoml do anything custom to the docker container it builds when you use something like aws-sagemaker-deploy? Because I guess that's what you'd miss if you just did `bentoml build && bentoml serve` inside the container
@sauyon is there a proposed design for building a bento directly into a docker container? Happy to try to contribute some code if there is one
a
Hi all, I wrote a quick one-pager about the problem at hand and a potential solution for it: https://github.com/bentoml/BentoML/issues/3580 Feel free to comment and add more suggestions.
👍 1
j
I was thinking about having the model training pipeline do a `bentoml.build` with the bento store pointed at GCS, and then have the CI do a `bentoml import && bentoml containerize` 🤔 (maybe a `bentoml serve` to have the CI actually test the routes), a subsequent `docker push <container name>` to our registry (not using yatai), and finally deploy our kubernetes manifests etc.
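(A rough sketch of the training-side handoff, assuming the bento gets exported as a portable archive and copied up with gsutil; the bucket, paths, and `my_service` tag are made up:)

```bash
# After the pipeline's bentoml.build step: export the bento to a single
# archive file and push it to GCS. Bucket and tag names are hypothetical.
bentoml export my_service:latest ./my_service.bento
gsutil cp ./my_service.bento gs://my-ml-artifacts/bentos/my_service.bento
```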
Finally got our CI and the new bentoml build pattern I mentioned above to work:
• CI deploys a training pipeline
• The final step in the pipeline creates the bento service artifact and pushes it to GCS
• Afterwards another CI job pulls the built bento and runs `bentoml import <..> && bentoml containerize`
And we push the new image and redeploy our new service running in Kubernetes. Just need to add a CI unit test step for the newly built service and we should be good for our setup 💡
💡 1
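(The CI side of that flow might look roughly like this; the registry URL, health endpoint, and manifest path are placeholders, not Jean's actual setup:)

```bash
# Pull the archive down and register the bento in the local store
gsutil cp gs://my-ml-artifacts/bentos/my_service.bento ./my_service.bento
bentoml import ./my_service.bento

# Optional smoke test of the routes before shipping
bentoml serve my_service:latest --port 3000 &
SERVE_PID=$!
sleep 10
curl -f http://localhost:3000/healthz   # endpoint name is an assumption
kill "$SERVE_PID"

# Containerize, push, deploy
bentoml containerize my_service:latest -t registry.example.com/my_service:latest
docker push registry.example.com/my_service:latest
kubectl apply -f k8s/
```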
e
Hey @Jean Poudroux, when you say
> Afterwards another CI job pulls the built bento...
how is that orchestration happening? Are you somehow emitting an event that causes the next CI job to run? Also, can you say what you're doing for your model registry? Are you using the official yatai one on kubernetes?
I'm literally working on this exact workflow for us right now, but will probably need to start with MLflow and GitHub Actions
j
Sure! We do a handoff to another CI workflow from the step that builds the bento and puts it on GCS. The handoff is done by submitting a workflow dispatch event to the GitHub Actions workflow, where we include a reference to the newly built bento.
• ci-build-model-and-bento.yaml: CI is triggered, redeploying a pipeline which trains a new model, builds the bento, and copies it to GCS. After it's finished we do the dispatch to the next workflow
• ci-build-deploy-bento-service.yaml: pull the bento into the CI, register the bento, containerize it, push it to the docker registry, and then submit our kubernetes manifests for the service using the new bento
It's not pretty, but it works for us.
• We're not using a model registry (neither MLflow nor Yatai)
• We do native kubernetes deployments
We may look into using a model registry to improve A/B deployments and ease model management in the future.
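(For anyone wanting to replicate the handoff: the dispatch could look something like this against the GitHub REST API. The repo slug and the `bento_uri` input name are placeholders; only the workflow file name comes from the thread:)

```bash
# Fire a workflow_dispatch event at the downstream workflow, passing a
# reference to the newly built bento. OWNER/REPO and the input are hypothetical.
curl -X POST \
  -H "Authorization: Bearer $GITHUB_TOKEN" \
  -H "Accept: application/vnd.github+json" \
  "https://api.github.com/repos/OWNER/REPO/actions/workflows/ci-build-deploy-bento-service.yaml/dispatches" \
  -d '{"ref": "main", "inputs": {"bento_uri": "gs://my-ml-artifacts/bentos/my_service.bento"}}'
```

The receiving workflow would need to declare a matching `workflow_dispatch` input for the bento reference.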