# ask-for-help
Hello @Louis Latreille, welcome!
I think Stable Diffusion is not optimized for batching at the moment.
I would love to see your benchmark if you are working on one.
And feel free to make a PR to add batching capability.
We disabled batching for these reasons:
1. In our offline testing, batching did not give us much of a performance gain.
2. With the fp32 model, a batch size above 2 blows up the memory of a 3080 Ti during inference, so the batch size cannot be large.
3. Inputs inside the same batch can only differ in their prompts; all other parameters must be identical. This is especially inconvenient for img2img, because it requires `init_image` to be the same across the whole batch.
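For what it's worth, a request batcher that respects constraint 3 could group incoming requests by every parameter except the prompt, so each batch shares one `init_image` and one set of sampler settings. This is just a minimal sketch: the `Request` fields and names here are illustrative stand-ins, not the project's actual API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Request:
    # Illustrative fields, not the real service schema.
    prompt: str
    steps: int = 50
    guidance_scale: float = 7.5
    init_image: Optional[str] = None  # e.g. a path or content hash

def group_into_batches(requests, max_batch_size=2):
    """Group requests so that every batch shares all parameters
    except the prompt, then split each group by max_batch_size
    (kept small here per reason 2 above)."""
    groups = {}
    for r in requests:
        key = (r.steps, r.guidance_scale, r.init_image)
        groups.setdefault(key, []).append(r)
    batches = []
    for group in groups.values():
        for i in range(0, len(group), max_batch_size):
            batches.append(group[i:i + max_batch_size])
    return batches

requests = [
    Request("a cat"),
    Request("a dog"),
    Request("a fox", steps=30),  # different steps -> cannot share a batch
]
batches = group_into_batches(requests)
# The two prompts with identical settings batch together; the
# request with different steps lands in its own batch.
```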
Great! Thanks for the answers! That makes sense.