Hey all! I was going through the "Real-Time data flow" documented
here, and I have two questions:
1. If there are n replicas of a segment, is my understanding correct that all n servers consume the events from Kafka, and only one of them "wins" and commits the segment to deep storage?
2. In the rare event that all n servers for a segment go down before the segment is completed (and committed from memory to deep storage), is the in-memory data lost forever, or do the replacement servers manage to re-consume these "lost" events from Kafka?