# ask-for-help
Hi, I am running 3 different runners on GPU. All of these runners detect bounding boxes from images sequentially. I am curious whether it is possible to load the image tensor into a shared memory space on the K8s node and share it among the runners. I deploy my bento with Yatai, and it seems that intermediate tensors are always transferred through the K8s service network to reach each runner's pod. It would be great to reduce tensor transfer between the bento service and the runner pods. Is there any advice, or a future plan for this?
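For context, here is a minimal sketch of the shared-memory idea being asked about, using only the Python standard library and NumPy (this is an illustration of the general technique, not a BentoML/Yatai API — whether runners can actually attach to the same node-local segment depends on pod co-location and volume configuration):

```python
from multiprocessing import shared_memory
import numpy as np

# Hypothetical example: place an image tensor in a named shared-memory
# segment so a co-located process can read it without the bytes being
# copied over the network.
img = np.zeros((3, 224, 224), dtype=np.float32)
img[0, 0, 0] = 1.0

# Writer side: allocate a segment and copy the tensor in once.
shm = shared_memory.SharedMemory(create=True, size=img.nbytes)
src = np.ndarray(img.shape, dtype=img.dtype, buffer=shm.buf)
src[:] = img

# Reader side: a second process would attach by name (shm.name)
# instead of receiving the tensor over a service call.
reader = shared_memory.SharedMemory(name=shm.name)
view = np.ndarray(img.shape, dtype=img.dtype, buffer=reader.buf)
result = float(view[0, 0, 0])  # zero-copy read of the writer's data

# Clean up: readers close their handle; the creator unlinks the segment.
reader.close()
shm.close()
shm.unlink()
```

In a Kubernetes setting this only works when the processes share the same node's `/dev/shm` (e.g. containers in one pod with a shared `emptyDir` backed by memory), which is exactly why cross-pod runners fall back to the service network.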