# ask-for-help
Can you send your `bentofile.yaml` here? It would also help if you could post the stack trace.
```yaml
service: "service.py:svc"
include:
  - "service.py"
  - "configuration.yaml"
docker:
  distro: debian
  base_image: "gcr.io/my-ai-org/debian-py39-cuda116-conda:latest"
  setup_script: "./setup.sh"
  env:
    BENTOML_CONFIG: "src/configuration.yaml"
```
There's no stack trace available, aside from imports failing for packages installed in the `base_image`. I can install them via `setup.sh`, but then every `bentoml containerize` rebuilds those dependencies, which is incredibly slow for some CUDA-enabled libraries.
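For what it's worth, one common way to avoid re-downloading heavy wheels on every rebuild is a BuildKit cache mount in the base image's Dockerfile. A minimal sketch (not BentoML-specific; the base image and package names here are assumptions):

```dockerfile
# syntax=docker/dockerfile:1
# Sketch for a base image like gcr.io/my-ai-org/debian-py39-cuda116-conda.
FROM continuumio/miniconda3:latest

# BuildKit persists /root/.cache/pip across builds, so large CUDA wheels
# are downloaded once rather than on every rebuild.
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install torch --extra-index-url https://download.pytorch.org/whl/cu116
```

This only speeds up the pip download step; the install itself still runs on each build of that layer.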
Hmm, this might have to do with BuildKit caching. I assume you just run `bentoml containerize`?
Right now I believe conda is not cached correctly with Docker. Essentially, we create a new conda environment every time we run `containerize`. This means that if the base image already has a conda environment, the resulting container from `bentoml containerize` won't have access to the conda env from the `base_image`.
One improvement we could make: for base images that already ship conda, reuse that existing conda env instead of creating a new one? cc @Sean
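A rough sketch of that idea (the env name `base` and the requirements path are assumptions, not BentoML's actual generated output): the generated Dockerfile could install into the environment that already ships with the base image instead of running `conda create`:

```dockerfile
# Sketch: reuse the conda env already present in the base image
# (env name "base" is an assumption) rather than creating a fresh one.
FROM gcr.io/my-ai-org/debian-py39-cuda116-conda:latest
RUN conda run -n base pip install -r /tmp/requirements.txt
```

`conda run -n <env>` executes the command inside the named environment, so the packages land next to whatever was pre-installed there.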