# troubleshooting
d
Hi everyone, I have an issue with a k8s deployment. Basically, controllers are discovered twice: once via the headless service and again via the regular service. The one discovered through the regular service is always reported as "failed", as there is no ZK entry with the FQDN of the service. Is there any way to fix this?
x
headless svc is used for internal pod discovery, e.g. pinot-controller-0, pinot-server-2 …
zk svc/headless-svc are there but not exposed externally
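you can see the split from inside the cluster with a quick dns check, sth like this (svc names here assume the default helm chart in a `pinot` namespace, adjust to yours):
```python
import socket

for svc in ("pinot-controller-headless.pinot.svc.cluster.local",
            "pinot-controller.pinot.svc.cluster.local"):
    # a headless svc (clusterIP: None) resolves to the individual pod IPs,
    # giving each pod a stable fqdn like
    # pinot-controller-0.pinot-controller-headless.pinot.svc.cluster.local;
    # a regular svc resolves to a single virtual ClusterIP instead
    name, aliases, addrs = socket.gethostbyname_ex(svc)
    print(svc, "->", addrs)
```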
d
I understand but I don't know why Helix is discovering one controller per service. This doesn't happen with the brokers or any other node type, just the controllers
and I don't know whether this will have side effects
I'd like to have controllers reported correctly
x
from helix side, each controller will register itself
the svc side might be the deep store or vip config?
d
I'm not sure if I follow you
I have deployed the helm chart with the defaults, so the FS is the node's HD
but I don't know how this could have anything to do with the controllers showing twice
x
hmm, where did you find the controller showing twice
in k8s, the logs, or helix?
d
in the /instances API and therefore in the UI as well
Helix manager returns 2 records for the controller as per my first message
there is a single controller pod
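fwiw this is roughly how I'm listing them (localhost:9000 is just my port-forward to the controller):
```python
import requests

# GET /instances on the controller lists everything Helix knows about
resp = requests.get("http://localhost:9000/instances", timeout=10)
resp.raise_for_status()
for name in resp.json().get("instances", []):
    print(name)  # I see two Controller_... entries here, one per svc FQDN
```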
x
can you paste a screenshot?
we will check on that
d
let me see if I can pull a screenshot; I may need to blur some details though… bear with me
x
sure. in general, each pinot pod registers itself, and that's true in the k8s world too: the fqdn is just its pod name. svc names are external, so they shouldn't be counted here
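if you want to double-check what's actually registered, you can list the helix instances straight from ZK, e.g. with kazoo (the cluster name `PinotCluster` and localhost:2181 are assumptions, match your deployment):
```python
from kazoo.client import KazooClient

# helix keeps one znode per registered instance under /<cluster>/INSTANCES
zk = KazooClient(hosts="localhost:2181")
zk.start()
try:
    for inst in zk.get_children("/PinotCluster/INSTANCES"):
        print(inst)  # expect pod fqdns only, no svc names
finally:
    zk.stop()
```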
d
(attachments: Screenshot 2021-09-07 at 19.09.25.png, Screenshot 2021-09-07 at 19.10.40.png, Screenshot 2021-09-07 at 19.11.07.png)
I agree, the fact of using kubernetes should not change anything here
Unfortunately I'm still too new to the codebase to find a solution myself
let me know if that helps
x
hmm, seems something goes wrong, I’ll check
d
cheers!
Hey Xiang, I think we can ignore this. It seems someone else was messing with my ZK. I've just done a fresh deployment and the cluster looks good! Apologies for the inconvenience
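for future reference, I guess a stale entry like that could also be dropped through the controller API instead of redeploying, sth like this (the instance name below is made up, use the failed one from /instances; assumes a port-forward to the controller on :9000):
```python
import requests

# drop a dead/stale instance via the controller API; Pinot should refuse
# if the instance is still live or referenced, so this is fairly safe
stale = "Controller_pinot-controller.pinot.svc.cluster.local_9000"
resp = requests.delete(f"http://localhost:9000/instances/{stale}", timeout=10)
print(resp.status_code, resp.text)
```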
x
cool, thanks for confirming 😛