Slackbot — 02/04/2023, 2:27 PM

Chaoyu — 02/04/2023, 7:43 PM

Aaron Pham — 02/05/2023, 12:28 AM

larme (shenyang) — 02/05/2023, 2:36 AM
pip install -U bentoml ? The latest BentoML shouldn't have this limitation

Suhas — 02/05/2023, 12:52 PM

Suhas — 02/05/2023, 1:07 PM

larme (shenyang) — 02/05/2023, 7:07 PM

Suhas — 02/05/2023, 9:27 PM
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
tokenizer_data = tokenizer.tokenize("I have a new GPU!")
Initialise model locally
For inference use onnx_runner
onnx_runner.run.run(tokenizer_data)

larme (shenyang) — 02/07/2023, 7:18 AM
tokens = list(tokenizer("this is a sample", return_tensors="np").values())
runner.run.run(*tokens)
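The fix larme suggests swaps `tokenizer.tokenize(...)`, which returns a list of subword strings an ONNX graph cannot consume, for calling the tokenizer itself with `return_tensors="np"`, which returns a dict of numeric arrays; `list(...values())` plus `*tokens` then passes each array (input_ids, token_type_ids, attention_mask) as one positional runner input. A minimal stand-in sketch of that unpacking pattern, with a hypothetical toy tokenizer and runner so it needs neither transformers nor bentoml:

```python
def fake_tokenizer(text):
    # Toy stand-in for calling a BERT tokenizer: real output is a dict of
    # arrays keyed input_ids / token_type_ids / attention_mask (values here
    # are hypothetical; 101/102 mimic BERT's [CLS]/[SEP] ids).
    ids = [101] + [hash(w) % 1000 for w in text.split()] + [102]
    return {
        "input_ids": [ids],
        "token_type_ids": [[0] * len(ids)],
        "attention_mask": [[1] * len(ids)],
    }

def run(input_ids, token_type_ids, attention_mask):
    # Stand-in for runner.run.run(*tokens): the ONNX model expects one
    # positional argument per named graph input, in dict-insertion order.
    return len(input_ids[0])

# Each dict value becomes one positional input via * unpacking.
tokens = list(fake_tokenizer("this is a sample").values())
print(run(*tokens))  # prints 6: [CLS] + 4 words + [SEP]
```

This relies on Python dicts preserving insertion order (guaranteed since 3.7), so the unpacked arrays line up with the model's declared input order.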
Suhas — 02/07/2023, 8:53 AM

larme (shenyang) — 02/07/2023, 9:13 AM

Suhas — 02/07/2023, 9:14 AM