# ask-for-help
b
Hello Alex, Happy Thanksgiving! Right now the deployment tool, bentoctl, does not work out of the box with Amazon Elastic Inference. Since bentoctl generates Terraform files, it should be straightforward to change.
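For a sense of what such a change might look like, here is a rough sketch of attaching an Elastic Inference accelerator in Terraform. This is not bentoctl's actual generated output; it assumes an EC2 launch template, and the resource name, `ami_id` variable, instance type, and `eia2.medium` accelerator type are all illustrative placeholders.

```hcl
# Placeholder for the AMI that runs the BentoML service (illustrative only)
variable "ami_id" {
  description = "AMI ID for the BentoML service instance"
  type        = string
}

resource "aws_launch_template" "bento_service" {
  name_prefix   = "bento-service-"
  image_id      = var.ami_id
  instance_type = "c5.xlarge" # CPU instance; the accelerator provides the inference capacity

  # Attach an Elastic Inference accelerator instead of renting a full GPU instance
  elastic_inference_accelerator {
    type = "eia2.medium"
  }
}
```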
a
Thanks! Do you know if anyone has used BentoML with this service? Is it a good use case?
I’m thinking we could save on renting GPU servers since our workload is not that large at the moment. Is there any other, better way to use BentoML for “on-demand” predictions?