Matt  02/23/2021, 6:51 PM

Subbu Subramaniam  02/23/2021, 7:03 PM

Matt  02/23/2021, 8:45 PM

Subbu Subramaniam  02/23/2021, 9:43 PM
numPartitions * numReplicas? If not, then it is likely that the threads that consume some of the partitions are delayed because other threads are not giving up the cores.
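Subbu's point is that each replica of each consuming partition gets its own consumer thread, so the cluster runs numPartitions * numReplicas consuming threads in total; if a box hosts more of these threads than it has cores, some partitions wait for CPU and lag. A minimal sketch of that arithmetic (the partition and replica counts below are made-up example values, not taken from this conversation):

```java
public class ConsumerThreadCheck {
    // Cluster-wide consuming threads: one per partition per replica.
    static int consumingThreads(int numPartitions, int numReplicas) {
        return numPartitions * numReplicas;
    }

    public static void main(String[] args) {
        int threads = consumingThreads(8, 2); // assumed: 8 partitions, replication factor 2
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("consuming threads = " + threads + ", cores on this box = " + cores);
        if (threads > cores) {
            System.out.println("threads exceed cores; consumers will contend for CPU");
        }
    }
}
```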
Matt  02/23/2021, 10:22 PM

Subbu Subramaniam  02/23/2021, 10:27 PM
while (true) {
  pullMsgsFromKafka();
  if (there are no msgs) {
    sleep a little
  }
}
Matt  02/23/2021, 11:06 PM

Subbu Subramaniam  02/25/2021, 7:44 PM
mmap in offheap. Since this piece of memory is always being written to, pages are dirty all the time, and so the OS may aggressively start flushing to disk. I would imagine this is what you are experiencing. Running vmstat on the box may give you some idea (or, if you have operating-system metrics, you can look at the page-in/page-out metrics) to reconfirm. Alternatively, if you are making segments very frequently, then this can happen. You may want to look at https://docs.pinot.apache.org/operators/operating-pinot/tuning/realtime#tuning-realtime-performance to set an optimal segment size.

Matt  02/25/2021, 8:16 PM
Old instance:
Maximum bandwidth (Mbps) = 850
Maximum throughput (MB/s, 128 KiB I/O) = 106.25
Maximum IOPS (16 KiB I/O) = 6,000

New instance:
Maximum bandwidth (Mbps) = 4,750
Maximum throughput (MB/s, 128 KiB I/O) = 593.75
Maximum IOPS (16 KiB I/O) = 18,750
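The throughput figures in these specs are just the bandwidth figures converted from megabits to megabytes (divide by 8), which is worth confirming when comparing instance types:

```java
public class ThroughputCheck {
    // Convert link bandwidth in Mbps to throughput in MB/s (8 bits per byte).
    static double mbpsToMBps(double mbps) {
        return mbps / 8.0;
    }

    public static void main(String[] args) {
        System.out.println("old instance: " + mbpsToMBps(850) + " MB/s");   // 106.25
        System.out.println("new instance: " + mbpsToMBps(4750) + " MB/s");  // 593.75
    }
}
```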
# vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  0      0 66201480   3740 56669552    0    0    22   399   32   19  1  0 99  0  0
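For Subbu's suggested check, the relevant vmstat columns are si/so (swap in/out) and bi/bo (blocks in/out): sustained high bo with zero swap would point at the dirty-page flushing he describes. A small sketch that pulls those fields out of a captured vmstat data line (the sample string is the line above; column positions follow the vmstat header):

```java
public class VmstatFields {
    // Extract si, so, bi, bo from a vmstat data line.
    // Per the header, they are fields 7-10 (1-based): r b swpd free buff cache si so bi bo ...
    static long[] swapAndIo(String line) {
        String[] f = line.trim().split("\\s+");
        return new long[] {
            Long.parseLong(f[6]),  // si: memory swapped in per second
            Long.parseLong(f[7]),  // so: memory swapped out per second
            Long.parseLong(f[8]),  // bi: blocks received from block devices
            Long.parseLong(f[9])   // bo: blocks sent to block devices
        };
    }

    public static void main(String[] args) {
        String sample = "0 0 0 66201480 3740 56669552 0 0 22 399 32 19 1 0 99 0 0";
        long[] v = swapAndIo(sample);
        System.out.println("si=" + v[0] + " so=" + v[1] + " bi=" + v[2] + " bo=" + v[3]);
    }
}
```

Note that a single vmstat invocation reports averages since boot; `vmstat 1` gives per-second samples, which are what you would watch for flushing bursts.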
# free -m
              total        used        free      shared  buff/cache   available
Mem:         127462        7466       64649           1       55346      118860
Swap:             0           0           0