# ask-for-help
**Bo:** Hello @David Griffiths, of course. Sorry about the delay. Let me get something for you soon to unblock you and get you moving forward. Also, I'd love to hop on a call to learn more about what you are working on. I will follow up in a DM.
**David Griffiths:** Many thanks, Bo. That would be great. I'll update my team here in London.
**Sean:** Hi @David Griffiths, could you please be more specific about encryption support? Do you mean adding an HTTPS endpoint?
**David Griffiths:** Hi Sean, thanks for looking into this. No, it is not an HTTPS endpoint that we are looking to implement. We are looking to protect the IP of the pre-trained model that will be served in our BentoML container. We want to encrypt the model, and so will need to decrypt the model before it can be served. Our use case is similar to the one put forward in https://github.com/bentoml/BentoML/issues/1824. In the response to that issue your colleague offered some links to BentoML documentation that still lead to 404s:
https://docs.bentoml.org/en/latest/api/core.html#model
https://docs.bentoml.org/en/latest/guides/custom_framework.html
What we are looking for is this documentation, or similar, to help us protect our IP when our model is being served on a client deployment. Kind regards, David
I was enquiring when this documentation might be forthcoming ...
@Sean @Bo for visibility
**Will Barnsley:** Has there been any progress on the documentation? I see the issue mentions the `Custom Artifact` class; is it thought this will be possible with a `Custom Runner`? @Bo @Sean
**Sean:** @David Griffiths @Will Barnsley Sorry for the delay. The documentation was unavailable because those pages were outdated. Model encryption is not currently supported out of the box. I think the easiest way to support model encryption is through the following steps (a rough sketch follows the list):
1. After `save_model`, manually encrypt the model files saved under `$BENTOML_HOME/models`. Encrypt the model files only; do not encrypt `model.yaml`.
2. Customize the `ENTRYPOINT` using a Dockerfile template to decrypt the model in the container before starting the service.
3. Build and containerize the bento.
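A minimal sketch of step 1, assuming a symmetric key and the `cryptography` package; the model name, version path, and key handling here are illustrative, not a BentoML API:

```python
import os
from pathlib import Path

from cryptography.fernet import Fernet

def encrypt_model_dir(model_dir: str, key: bytes) -> None:
    """Encrypt every file in a saved model directory in place,
    leaving model.yaml readable so BentoML can still parse the metadata."""
    fernet = Fernet(key)
    for path in Path(model_dir).rglob("*"):
        if path.is_file() and path.name != "model.yaml":
            path.write_bytes(fernet.encrypt(path.read_bytes()))

# Illustrative usage; the exact layout under $BENTOML_HOME/models/<name>/<version>
# may differ between BentoML versions.
key = Fernet.generate_key()  # store this securely; it is needed to decrypt later
home = os.environ.get("BENTOML_HOME", str(Path.home() / "bentoml"))
encrypt_model_dir(os.path.join(home, "models", "my_model", "v1"), key)
```

The decryption counterpart of `encrypt_model_dir` would then be invoked from the customized `ENTRYPOINT` in step 2, before the service starts.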
**Will Barnsley:** Thanks for that @Sean, but I believe that would still result in the files being unencrypted in the container. If somebody were able to bash into the Docker container, the files would be unencrypted.
**Sean:** Yeah, you are right. That wouldn't work if the goal is to not have unencrypted files in the container.
**Will Barnsley:** I had toyed with decrypting the model prior to making the runner and saving it to a temp dir. I then used this dir to init the runner and service. This worked up until the service reloaded and found the files were no longer present.
That is the goal
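For reference, a rough sketch of that temp-dir approach, assuming the files were encrypted with a Fernet key as in the earlier sketch; the helper name and paths are illustrative:

```python
import tempfile
from pathlib import Path

from cryptography.fernet import Fernet

def decrypt_to_tempdir(encrypted_dir: str, key: bytes) -> str:
    """Decrypt the model files into a fresh temp dir and return its path,
    to be handed to the runner and service at init time."""
    fernet = Fernet(key)
    out = Path(tempfile.mkdtemp(prefix="decrypted_model_"))
    for path in Path(encrypted_dir).rglob("*"):
        if not path.is_file():
            continue
        target = out / path.relative_to(encrypted_dir)
        target.parent.mkdir(parents=True, exist_ok=True)
        data = path.read_bytes()
        # model.yaml was left unencrypted, so copy it through unchanged
        target.write_bytes(data if path.name == "model.yaml" else fernet.decrypt(data))
    return str(out)
```

As described above, this breaks on a service reload, since the temp dir may no longer exist by the time the service comes back up.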
**Sean:** I see. That's very creative. 🙂 We will then have to decrypt the model files in the BentoML process, which will require updating the `save_model` and `load_model` APIs to encrypt and decrypt the model files in memory. I don't think it would be too hard to add a callback function to these APIs.
How do you plan to pass in the decryption key?
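To make the idea concrete, a hypothetical sketch of what such callbacks could look like; `encrypt_cb`, `decrypt_cb`, the callback parameters, and the `MODEL_ENCRYPTION_KEY` environment variable are all invented for illustration and are not part of BentoML's API:

```python
import os

from cryptography.fernet import Fernet

def encrypt_cb(model_bytes: bytes) -> bytes:
    """Hypothetical hook that save_model could call before writing to disk."""
    return Fernet(os.environ["MODEL_ENCRYPTION_KEY"]).encrypt(model_bytes)

def decrypt_cb(encrypted_bytes: bytes) -> bytes:
    """Hypothetical hook that load_model could call after reading from disk,
    so plaintext model bytes only ever exist in memory."""
    return Fernet(os.environ["MODEL_ENCRYPTION_KEY"]).decrypt(encrypted_bytes)

# e.g. save_model(..., encryption_callback=encrypt_cb) and
#      load_model(..., decryption_callback=decrypt_cb), if such parameters existed
```

An environment variable or secrets manager entry injected at deploy time would be one plausible way to pass the key in.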
**Will Barnsley:** OK, and could that be done using a `Custom Runner` built by inheriting from the `Transformer` runnable? That is TBD.
**Sean:** Using a `custom runner` to do this is possible, but you will have to implement all the model packaging and loading logic yourself, unfortunately.
What ML framework do you use?
**Will Barnsley:** Hmm, OK, that is useful to know exactly what would be involved. We use `transformers` currently but may also go down the route of a custom framework.
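For what it's worth, a rough sketch of the custom-runner route with `transformers`, assuming BentoML 1.x's `Runnable` API and reusing the hypothetical `decrypt_to_tempdir` helper from the earlier sketch; the path, pipeline task, and key handling are illustrative:

```python
import os

import bentoml

class EncryptedTransformersRunnable(bentoml.Runnable):
    SUPPORTED_RESOURCES = ("cpu",)
    SUPPORTS_CPU_MULTI_THREADING = True

    def __init__(self):
        from transformers import pipeline

        # All packaging/loading logic is yours here: decrypt the weights,
        # then build the pipeline from the decrypted directory.
        key = os.environ["MODEL_ENCRYPTION_KEY"].encode()
        model_dir = decrypt_to_tempdir("/encrypted_model", key)  # helper from earlier sketch
        self.pipe = pipeline("text-classification", model=model_dir)

    @bentoml.Runnable.method(batchable=False)
    def predict(self, text: str):
        return self.pipe(text)

runner = bentoml.Runner(EncryptedTransformersRunnable, name="encrypted_transformers")
```

Because the decryption runs in the runnable's `__init__`, a service reload should re-create the temp dir rather than hit the missing-file problem described earlier.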
**Sean:** Would you be open to contributing the encryption functionality to the transformers framework? We can meet over Zoom and discuss the design. 🙂
**Will Barnsley:** I would be interested, yeah!