# troubleshooting
Do you see anything in the historical logs? It could also be that Kubernetes is killing them. Can you post your historical config?
@Sergio Ferragut The pod is restarting with exit code 137, and the logs have this:
```
1546.900: [GC (Allocation Failure) 1546.900: [ParNew: 791531K->46504K(843456K), 0.0636735 secs] 14092523K->13347496K(52335104K), 0.0639037 secs] [Times: user=0.70 sys=0.00, real=0.07 secs]
```
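Worth noting: exit code 137 is 128 + SIGKILL (9), which on Kubernetes usually means the kubelet OOMKilled the container for exceeding its memory limit, not that the JVM itself threw an OutOfMemoryError. A quick way to confirm, sketched here with a hypothetical pod name:

```
# Pod name is a placeholder; substitute your actual historical pod.
# "OOMKilled" in the last terminated state confirms Kubernetes
# killed the container rather than the JVM exiting on its own.
kubectl get pod druid-historical-0 \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
```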
I'm sorry, I should've translated lakhs before I answered. I think you have way too many segments for that volume of data, meaning your segments are around 300 KB each, which is tiny; normal segments are roughly 1000 times bigger. The ideal segment size is about 500 MB, which means you should have about 200 segments for 100 GB of data. The large number of segments is probably part of the problem. Also, the recommendation is to use at most 24 GB of heap per historical. If you share your historical config, we might be able to figure out why it is consuming so much memory. You can use compaction to merge lots of small segments into bigger, more efficient segments.
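As a rough sketch of that last suggestion, submitting a compaction task looks something like the following. The datasource name, interval, and Router address are placeholders, and `maxRowsPerSegment` of 5,000,000 is the usual target for segments in the ~500 MB range:

```
# Assumes the Druid Router is reachable at localhost:8888;
# adjust dataSource and interval for your cluster.
curl -X POST http://localhost:8888/druid/indexer/v1/task \
  -H 'Content-Type: application/json' \
  -d '{
    "type": "compact",
    "dataSource": "my_datasource",
    "ioConfig": {
      "type": "compact",
      "inputSpec": { "type": "interval", "interval": "2024-01-01/2024-02-01" }
    },
    "tuningConfig": {
      "type": "index_parallel",
      "partitionsSpec": { "type": "dynamic", "maxRowsPerSegment": 5000000 }
    }
  }'
```

You can also enable auto-compaction per datasource from the Coordinator so you don't have to submit these tasks by hand.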