# general
s
This message was deleted.
a
Pulsar with ZK currently supports up to 1M topics.
30M topics is too much 🙂
h
One way we could break it apart would create 30 or 40 million topics
It may be hard to support that many topics with ZooKeeper; the new metadata implementation, Oxia, may support it. But we'd better not use it in this way, because it will cost a lot of resources
Another way may create a few thousand to a few hundred thousand topics, some of which may be very large, and others may be very small.
Some topics may be very large, and
Does "large" mean high throughput or high data storage? I think it will be OK
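A minimal sketch of the second approach, assuming many logical streams are hashed onto a smaller, fixed set of shared topics; the 100,000-bucket count, the `events-<bucket>` topic naming, and the broker address are illustrative assumptions, not anything Pulsar prescribes:
```java
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.Schema;

public class BucketedProducerSketch {
    // Hypothetical bucket count: a few hundred thousand topics instead of 30M.
    private static final int TOPIC_BUCKETS = 100_000;

    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650") // assumed broker address
                .build();

        String streamId = "device-12345678";           // one of ~30M logical streams
        Producer<byte[]> producer = client.newProducer(Schema.BYTES)
                .topic(topicFor(streamId))
                .create();

        // Keying by stream id keeps per-stream ordering within the shared topic.
        producer.newMessage()
                .key(streamId)
                .value("payload".getBytes())
                .send();

        producer.close();
        client.close();
    }

    // Deterministically map a logical stream onto one of the shared topics.
    static String topicFor(String streamId) {
        int bucket = Math.floorMod(streamId.hashCode(), TOPIC_BUCKETS);
        return "persistent://public/default/events-" + bucket; // hypothetical naming scheme
    }
}
```
Because the mapping is deterministic, every producer and consumer of a given logical stream lands on the same topic, while the total topic count stays well under the ZooKeeper limit mentioned above.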
j
@Hang Chen Sorry for the delayed response. Large in terms of storage size. Throughput is relatively modest overall.
h
Another way may create a few thousand to a few hundred thousand topics, some of which may be very large, and others may be very small.
The offloader can support large storage sizes, so this approach will work for you
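A minimal sketch of namespace-level offload settings via the Java admin client, assuming the GCS offloader driver itself is already configured on the brokers; the namespace name, threshold, and delete lag are illustrative values:
```java
import java.util.concurrent.TimeUnit;
import org.apache.pulsar.client.admin.PulsarAdmin;

public class OffloadConfigSketch {
    public static void main(String[] args) throws Exception {
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080") // assumed admin endpoint
                .build();

        String namespace = "public/default"; // hypothetical namespace

        // Offload ledgers to tiered storage (e.g. GCS, configured in broker.conf)
        // once a topic's BookKeeper footprint exceeds ~10 GiB.
        admin.namespaces().setOffloadThreshold(namespace, 10L * 1024 * 1024 * 1024);

        // Keep offloaded ledgers in BookKeeper for another 4 hours so recent data
        // is still served from local disks.
        admin.namespaces().setOffloadDeleteLag(namespace, 4, TimeUnit.HOURS);

        admin.close();
    }
}
```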
j
The highest throughput demands in our system are on recent data, which should generally be cached on NVMe local disks before eventually being moved to GCS. Am I overthinking this? Do we need to worry about the overall size of the topic in terms of number of messages?
h
You are right. We can keep the recent data in both BookKeeper and GCS, and Pulsar can control which storage system data is read from when it is stored in both; you can set reads to go to BookKeeper to get high throughput and low latency.
Do we need to worry about the overall size of the topic in terms of number of messages?
No need to worry about it.
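A minimal sketch of manually triggering an offload for one topic and checking its progress, assuming tiered storage is already configured; the topic name and admin endpoint are placeholders:
```java
import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.client.api.MessageId;

public class OffloadStatusSketch {
    public static void main(String[] args) throws Exception {
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080") // assumed admin endpoint
                .build();

        String topic = "persistent://public/default/events-42"; // hypothetical topic

        // Ask the broker to offload data up to the current latest message into
        // tiered storage; newer data keeps being served from BookKeeper.
        admin.topics().triggerOffload(topic, MessageId.latest);

        // Check how far the offload has progressed.
        System.out.println("Offload status: " + admin.topics().offloadStatus(topic));

        admin.close();
    }
}
```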
j
Thank you! I really appreciate the help Hang.