# ask-for-help
a
Hi @Judah Rand, you can mount `bentoml_configuration.yaml` via the environment variable `BENTOML_CONFIG`. Please see more here: https://docs.bentoml.org/en/latest/guides/configuration.html#configuring-bentoml
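For illustration, a minimal `bentoml_configuration.yaml` might look something like this. This is a sketch based on the linked configuration guide; the exact keys should be verified against the docs for your BentoML version:

```yaml
# Sketch of a BentoML configuration file (verify keys against the configuration guide)
api_server:
  workers: 2      # number of API server worker processes
  http:
    port: 3000    # port the API server listens on
```

You would then point BentoML at it, e.g. `BENTOML_CONFIG=$PWD/bentoml_configuration.yaml bentoml serve <PLACEHOLDER>:latest`.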
j
@Aaron Pham That doesn’t help me when deploying with `bentoctl`, as far as I can tell.
Or do I have to pass `BENTOML_CONFIG` as an env var to `bentoctl`?
If so, that doc does not make that clear at all, given that `bentoml` and `bentoctl` are two separate CLI tools.
a
you can pass in the environment variable. bentoctl is just a tool to deploy BentoML, right? So I believe the BentoML environment variable should just work as expected.
j
@Aaron Pham This 100% does not work.
If I do `BENTOML_CONFIG=$PWD/bentoml_configuration.yaml bentoml containerize <PLACEHOLDER>:latest` and then `bentoml containerize <PLACEHOLDER>:latest`, then all layers of the resulting container image are already cached. Therefore, the config can’t be any different.
There seems to be no way to pass static config into a Bento container image…
This seems like a pretty significant oversight 🤔 It means that AutoBatching is basically unusable, as there is no way to override the `max_latency_ms=10000` default.
It seems to me the only place that `BENTOML_CONFIG` is used is in `bentoml serve`.
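If `BENTOML_CONFIG` is honored at serve time, the override would presumably be a config file along these lines. This is a sketch; the exact key path for adaptive batching should be confirmed against the configuration guide:

```yaml
# bentoml_configuration.yaml — sketch of an adaptive-batching override
runners:
  batching:
    enabled: true
    max_latency_ms: 500   # instead of the 10000 ms default mentioned above
```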
a
you have to set the environment variable and mount the config file into the container
```
docker run --env BENTOML_CONFIG=/home/bentoml/bentoml_config.yml -v /path/to/config:/home/bentoml/bentoml_config.yml ...
```
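Where mounting a file at run time is awkward (as on managed platforms), one workaround is to bake the config into a derived image. This is a hypothetical sketch, not an officially documented pattern; the base image tag and paths are assumptions:

```dockerfile
# Sketch: extend the generated Bento image with a baked-in config file
FROM <PLACEHOLDER>:latest
COPY bentoml_configuration.yaml /home/bentoml/bentoml_configuration.yaml
ENV BENTOML_CONFIG=/home/bentoml/bentoml_configuration.yaml
```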
j
@Aaron Pham And how do I do that in Google Cloud Run?
I do understand exactly what the issue is now (having spent a decent chunk of the afternoon experimenting), and realize that by editing the `main.tf` substantially I can probably get this to work. But my point now is that this is a gaping hole in the deployment story using `bentoctl`.
a
cc @jjmachan probably have a better idea about google cloud run
b
@Judah Rand you are right. This is a very big oversight. We will try to adjust this in the next release for Google Cloud Run, and for other deploy targets afterwards.
Are you using it in production right now?
j
> Are you using it in production right now?
Not atm but aiming to in the very near future assuming we can get it to work. The aim is to have this be the ‘template’ for how we’d like to deploy internally facing models going forwards and potentially customer facing models further out.
s
The `v1.0.6` release introduced the capability to configure BentoML through environment variables. Check out the doc to see if it could help you: https://docs.bentoml.org/en/latest/guides/configuration.html#overrding-configuration-with-environment-variables
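Per that doc, individual options can be overridden via an environment variable rather than a mounted file. A hedged example of what that might look like (the exact variable name, `BENTOML_CONFIG_OPTIONS`, and the dotted-key syntax should be verified against the linked page):

```shell
# Hypothetical illustration — confirm variable name and option syntax in the docs
BENTOML_CONFIG_OPTIONS='runners.batching.max_latency_ms=500' \
  bentoml serve <PLACEHOLDER>:latest
```

Since this is plain environment configuration, it could also be set on a managed platform (e.g. as a Cloud Run service env var) without mounting any file.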