# troubleshooting
y
Hello folks, I have some questions about the memory consumption of server instances.
• Roughly how much RAM will a segment take if I ingest a 687MB CSV, which generates a 109MB gzipped segment tar file?
• I have 5 server nodes with 4272/3884/4438/3493/3661 segments each, and they use 27/8/6/3/18 GiB of RAM respectively. What makes them so different from each other? Some kind of raw data cache?
Thanks in advance.
r
Hi, the first question is impossible to answer in general; it depends on the data types, the cardinality of the columns, the compression configuration, what indexes you have, and so on.
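One rough rule of thumb (an assumption on my part, not from this thread): if the server memory-maps its segments, the uncompressed size of the files inside the segment tarball is a better proxy for the in-memory footprint than the 109MB gzipped size. A minimal sketch for measuring that, assuming you have one segment tarball downloaded locally (the filename is hypothetical):

```python
import tarfile

def estimate_segment_footprint(tarball_path: str) -> int:
    """Rough proxy for a segment's memory footprint: the total
    uncompressed size of the files inside the gzipped tar.
    Actual usage also depends on dictionaries, indexes, and whether
    columns are loaded on-heap or memory-mapped."""
    total = 0
    with tarfile.open(tarball_path, "r:gz") as tar:
        for member in tar.getmembers():
            if member.isfile():
                total += member.size  # uncompressed size of each file
    return total

# Hypothetical path to one downloaded segment tarball.
size_bytes = estimate_segment_footprint("mytable_0.tar.gz")
print(f"uncompressed segment size: {size_bytes / 1024**2:.1f} MiB")
```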
As for the second question, that usually comes down to partitioning. Which column are you partitioning on, and is there skew in the distribution of the number of records or of the record sizes?
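As a quick sanity check on skew, the figures already posted can be compared directly: the segment counts per server are similar, so if RAM per segment varies a lot, the imbalance is in segment size or access pattern rather than placement. A small sketch using the numbers from the question (server names are made up):

```python
# Per-server segment counts and observed RAM (GiB) from the question above.
segments = {"server-1": 4272, "server-2": 3884, "server-3": 4438,
            "server-4": 3493, "server-5": 3661}
ram_gib  = {"server-1": 27, "server-2": 8, "server-3": 6,
            "server-4": 3, "server-5": 18}

for name in segments:
    per_segment_mib = ram_gib[name] * 1024 / segments[name]
    print(f"{name}: {segments[name]} segments, {ram_gib[name]} GiB RAM "
          f"(~{per_segment_mib:.1f} MiB/segment)")

# If MiB/segment varies widely while segment counts are similar,
# the skew is in segment size or access pattern, not in placement.
```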
y
@User Thanks for the reply. I didn't specify a partition option for the tables. As for the distribution, the records seem to be distributed fairly evenly.
Regarding the memory consumption, could you point me to a case you've seen or a public example?
Additionally, I'm running the servers on a k8s cluster. After I deleted the pod that was using 27Gi of RAM, the replacement pod only used 13Gi. Could that be another clue?
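One possible explanation worth checking (an assumption, since the storage engine isn't named here): if segments are memory-mapped, the pod's reported memory includes file-backed pages that are only repopulated lazily after a restart, so a freshly restarted pod can report much less RAM until queries touch the data again. A sketch for splitting resident memory into anonymous (heap) vs. file-backed pages on Linux, assuming PID 1 is the server process in the container:

```python
def rss_breakdown(pid: int = 1) -> dict:
    """Read resident-memory counters from /proc/<pid>/smaps_rollup (Linux).
    Values in the file are reported in kB."""
    fields = {}
    with open(f"/proc/{pid}/smaps_rollup") as f:
        for line in f:
            key, _, rest = line.partition(":")
            if key in ("Rss", "Anonymous"):
                fields[key] = int(rest.split()[0])
    return fields

stats = rss_breakdown(1)          # PID 1 is an assumption for this container
rss = stats.get("Rss", 0)
anon = stats.get("Anonymous", 0)
print(f"RSS: {rss/1024:.0f} MiB, anonymous (heap): {anon/1024:.0f} MiB, "
      f"file-backed (shared libs + mmapped files): {(rss - anon)/1024:.0f} MiB")
```

If most of the 27Gi turns out to be file-backed rather than anonymous memory, the difference after the restart would simply be page cache for mapped segment files rather than a leak.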