# pinot-perf-tuning
m
Hello, I deployed Pinot in a k8s cluster and the memory is being reported incorrectly, actually very high. The top command inside the node and docker stats both show correct usage (~2GB), but k8s and the Prometheus metrics are all reporting high usage (25G).
x
Is it the memory-mapped size?
Can you give a snapshot of the metrics screen
and the metric names?
m
Thanks @Xiang Fu, it is the memory usage. I can get the actual metric name.
@Xiang Fu
top (mem usage is 2.7%):
30302 root 20 0 60.4g 884620 155640 S 0.7 2.7 1:31.34 java

docker stats:
949a16209c01 k8s_server_pinot-server-0_log_65bf2d4c-59ce-464e-9533-2121d4e78036_0 0.40% 735.6MiB / 26GiB 2.76% 0B / 0B 0B / 0B 88

kubectl metrics:
{"metadata":{"name":"pinot-server-0","namespace":"log","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/log/pods/pinot-server-0","creationTimestamp":"2020-12-08T23:49:02Z"},"timestamp":"2020-12-08T23:47:53Z","window":"30s","containers":[{"name":"server","usage":{"cpu":"8345387n","memory":"1677124Ki"}}]}
x
I see.
I think k8s is reporting the container memory usage.
Pinot memory-maps its data files, and those mapped files can show up as container memory in the form of page cache.
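One way to verify this is to compare the cgroup's cache and RSS counters inside the container. A minimal sketch, assuming cgroup v1 paths and the pod/namespace/container names from the kubectl output above:

```sh
# Break down the container's memory accounting: "cache" is page cache
# (including Pinot's memory-mapped segment files), "rss" is the JVM's
# resident memory, and "mapped_file" is the file-backed mapped portion.
kubectl exec -n log pinot-server-0 -c server -- \
  sh -c 'grep -E "^(cache|rss|mapped_file) " /sys/fs/cgroup/memory/memory.stat'

# Total cgroup usage (page cache included); this is roughly the ~25G figure
# that cAdvisor/Prometheus report as container_memory_usage_bytes.
kubectl exec -n log pinot-server-0 -c server -- \
  cat /sys/fs/cgroup/memory/memory.usage_in_bytes
```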
m
Ohh ok, so the mapped data files, which are on disk, are also reported as memory usage in the container, right?
If it crosses the resource limit the container will get killed. But if I raise the memory request too high, the pod won't be scheduled, as I would have to allocate a very high-spec node.
Potentially a Java memory reporting issue on k8s rather than a Pinot issue, I guess.
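For what it's worth, the 25G figure is most likely container_memory_usage_bytes, which includes reclaimable page cache; the working-set metric (container_memory_working_set_bytes, which excludes inactive file cache) should sit much closer to the ~2G that top reports. A quick sanity check, using the same pod and namespace names as above:

```sh
# `kubectl top` reports the working set, not the raw cgroup usage, so it
# should stay far below the 26G limit even when the page cache is large.
kubectl top pod pinot-server-0 -n log
```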
x
Yes.
Did you observe the pod getting killed?
It shouldn't cross the container limit.
Cached data will be cleaned.
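A quick way to check whether the container has actually been OOM-killed, as a sketch with the same pod/namespace names:

```sh
# Restart count and the reason for the last termination (e.g. OOMKilled)
kubectl get pod pinot-server-0 -n log -o \
  jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}{.status.containerStatuses[0].lastState.terminated.reason}{"\n"}'
```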
m
It almost got killed; 26G is the limit and the memory was at 25G.
Ohh ok, it will get cleaned up by itself?
x
Yes, that's how memory mapping works.
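A rough way to see those mappings from inside the container is to look at the JVM's /proc maps. This is a sketch that assumes the Java process is PID 1 in the container; adjust the PID if a wrapper script is PID 1:

```sh
# File-backed mappings of the Pinot server JVM. These account for most of the
# large virtual size (60.4g in the earlier `top` output); their pages live in
# the page cache and are reclaimed by the kernel under memory pressure rather
# than triggering an OOM kill.
kubectl exec -n log pinot-server-0 -c server -- \
  sh -c 'awk "{print \$6}" /proc/1/maps | sort | uniq -c | sort -rn | head'
```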
m
Thanks @Xiang Fu