# troubleshooting
c
If you are in k8s I recommend using MM-less ingestion: each task gets its own k8s pod, and you let cgroups handle resource isolation.
j
We have not used that yet; I thought it was still experimental.
c
we use it in production for a lot of our clusters…it works just fine. Don’t be scared to try it 🙂
j
Might try it first in the dev cluster. Is there a big shift in configuration with regard to the Helm charts and middleManagers?
c
Put the config values in the Overlords and don't deploy the middleManagers.
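(A minimal sketch of what those Overlord-side values can look like for MM-less ingestion, based on the druid-kubernetes-overlord-extensions extension; the namespace is a placeholder and the exact property set may differ between Druid versions.)

```properties
# Load the Kubernetes task-runner extension on the Overlord
# (append it to whatever extensions you already load)
druid.extensions.loadList=["druid-kubernetes-overlord-extensions"]

# Launch each task as its own k8s pod instead of forwarding it to a middleManager
druid.indexer.runner.type=k8s

# Namespace the task pods are created in (placeholder, match your deployment)
druid.indexer.runner.namespace=druid

# Tasks run "encapsulated", pushing their own logs/status since no middleManager manages them
druid.indexer.task.encapsulatedTask=true
```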
j
In my case that would be in the Coordinator pods, then.
And I guess the Coordinator nodes would need to be bigger, and the replica count higher? I wonder if that setup actually reduces resource costs.
If you are using helm templates or k8s yamls, would you mind sharing an example?
c
I use the operator, so I can give you the configs on Monday. I would run independent Overlord / Coordinator nodes and keep your Coordinator about the same in terms of RAM / CPU. The tasks get launched as k8s pods, so you don't need to statically allocate resources up front; tasks use whatever capacity your EKS cluster has. If you need more resources, add nodes; if you are not using enough, remove them. We use it with autoscaling on EKS here, and it has worked out quite nicely in terms of decent cost savings.
I'll share the Overlord config. Basically: don't deploy the MM in your Helm chart, add a few config values to the Overlord node, and load the extension (roughly as sketched below).
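(A rough sketch of how that can look with the druid-operator, trimmed to the relevant parts: keep the Overlord and Coordinator nodes and simply omit the middleManager node. Field names follow the operator's example CRs; cluster name, image tag, ports, and replica counts are placeholders.)

```yaml
apiVersion: druid.apache.org/v1alpha1
kind: Druid
metadata:
  name: druid-cluster            # placeholder name
spec:
  image: apache/druid:27.0.0     # placeholder image tag
  nodes:
    overlords:
      nodeType: overlord
      druid.port: 8090
      replicas: 1
      runtime.properties: |
        druid.service=druid/overlord
        # MM-less: run tasks as k8s pods (see the properties sketch above)
        druid.extensions.loadList=["druid-kubernetes-overlord-extensions"]
        druid.indexer.runner.type=k8s
    coordinators:
      nodeType: coordinator
      druid.port: 8081
      replicas: 1
      runtime.properties: |
        druid.service=druid/coordinator
    # No middleManagers node here: the Overlord launches task pods directly,
    # so capacity comes from your node group / autoscaler instead.
```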
j
What kind of adapter do you use? I posted this in the k8s channel to try to get some advice: https://apachedruidworkspace.slack.com/archives/C04Q0047B4M/p1688144035521209