# getting-started
d
Is there a way to check that the Pinot controller is up and running before starting the broker, and that the broker is up and running before starting the server? And finally before running "AddTable"? I am setting up a Kafka/Pinot/Streamlit kube cluster, and the "AddTable" script is inside the yaml. Without this dependency setup, I currently end up with dead servers and brokers, and the "AddTable" script leaves tables in a bad state because they are associated with these dead servers and brokers
l
maybe check its health endpoint?
d
@Luis Fernandez can you elaborate? is there a command that can be executed via a bash script?
I added a wait-for-it.sh call to ensure the controller is reachable, but the AddTable calls still fail if the broker has not started yet
Even after adding wait-for-it.sh for the controller, broker and server, I end up with a bunch of dead controllers, brokers and servers, and a failing AddTable. It all works beautifully with docker-compose, but not with kubectl. Has anybody been able to run AddTable as part of kube yaml successfully so that it works perfectly every time?
l
so you are checking for the health endpoint?
d
anything that blocks until Pinot is up and running and ready to accept AddTable calls
l
yeah, so do it on the server: hit /health until it's okay, and then you should be able to execute the AddSchema and AddTable calls
d
does the health endpoint block until the controller/broker/server are all running, or just the controller?
l
each component has a health endpoint
so you bring each component up sequentially, check its health endpoint, and then go to the next one
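The sequential bring-up described above could be sketched in shell like this. Hostnames, ports, and the final AddTable arguments are assumptions drawn from elsewhere in this thread, so adjust them to your cluster:

```shell
# Poll a Pinot component's /health endpoint until it returns OK, then move on.
# Hostnames and ports below are assumptions -- adjust to your deployment.
wait_for_health() {
  name=$1; url=$2; tries=${3:-60}; i=0
  while [ "$i" -lt "$tries" ]; do
    if [ "$(curl -s --max-time 2 "$url")" = "OK" ]; then
      echo "$name is healthy"
      return 0
    fi
    i=$((i + 1))
    sleep 2
  done
  echo "$name did not become healthy after $tries attempts" >&2
  return 1
}

# Bring each component up, then block on its health endpoint before the next:
#   wait_for_health controller http://pinot-controller:9000/health
#   wait_for_health broker     http://pinot-broker:8099/health
#   wait_for_health server     http://pinot-server:8098/health
#   bin/pinot-admin.sh AddTable -schemaFile ... -tableConfigFile ... -exec
```

Running this as an init step (or as the first lines of the AddTable script) should keep AddTable from firing before the cluster is actually ready.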
d
ok. let me try that
@Luis Fernandez controller and broker return OK, but GET /health on the server just times out without returning anything, even though the server is running fine. Here's the call: `curl -s 'http://<MY_PINOT_SERVER_IP>:8098/health'`
l
really? that’s odd
that’s what we use in kubernetes in order for us to mark servers as alive
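For reference, here is a pod-spec sketch of wiring /health into Kubernetes probes, which is one way to have Kubernetes itself gate readiness. The container name, image, port, and timing values are assumptions (the server port follows the 8098 used earlier in this thread), not a verified Pinot chart:

```yaml
# Sketch: mark the Pinot server Ready/alive based on its /health endpoint.
# Container name, image, port and timings are assumptions -- adjust as needed.
containers:
  - name: pinot-server
    image: apachepinot/pinot:latest
    ports:
      - containerPort: 8098
    readinessProbe:
      httpGet:
        path: /health
        port: 8098
      initialDelaySeconds: 60
      periodSeconds: 10
      failureThreshold: 10
    livenessProbe:
      httpGet:
        path: /health
        port: 8098
      initialDelaySeconds: 60
      periodSeconds: 10
```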
d
Even controller/broker health checks do not help: I still get dead servers/brokers and tables in a bad state. See the log excerpt below:

```
pinot-controller:9000/health returned: OK after 120 seconds
pinot-broker:8099/health returned: OK after 8 seconds
WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
2022/11/15 20:31:46.395 INFO [AddTableCommand] [main] Executing command: AddTable -tableConfigFile /config/VdsServerAccessLogs-AggregatedEntriesPerSec_table.json -offlineTableConfigFile null -realtimeTableConfigFile null -schemaFile /config/VdsServerAccessLogs-AggregatedEntriesPerSec_schema.json -controllerProtocol http -controllerHost pinot-controller -controllerPort 9000 -user null -password [hidden] -exec
2022/11/15 20:31:46.658 INFO [AddTableCommand] [main] {"code":400,"error":"TableConfigs: VdsServerAccessLogs_AggregatedEntriesPerSec already exists. Use PUT to update existing config"}
```
l
`VdsServerAccessLogs_AggregatedEntriesPerSec already exists`
d
the error message is misleading, btw. The tables are in a bad state and associated with the dead broker/server
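Given the 400 above, one workaround is to make the add step idempotent: check whether the config already exists and PUT instead of POST, as the error message itself suggests. This is a sketch against the controller's /tableConfigs REST endpoints; the host, table name, and config path echo the log above but should be treated as assumptions and verified against your Pinot version:

```shell
# Idempotent table-config creation: PUT if the config exists, POST otherwise.
# Host, table name, and config path are assumptions -- adjust to your setup.
CONTROLLER=http://pinot-controller:9000
TABLE=VdsServerAccessLogs_AggregatedEntriesPerSec

add_or_update_table() {
  status=$(curl -s -o /dev/null -w '%{http_code}' \
    "$CONTROLLER/tableConfigs/$TABLE")
  if [ "$status" = "200" ]; then
    # Config already exists (possibly pointing at dead instances): update it.
    curl -s -X PUT "$CONTROLLER/tableConfigs/$TABLE" \
      -H 'Content-Type: application/json' \
      -d @/config/tableConfigs.json
  else
    # No existing config: create it.
    curl -s -X POST "$CONTROLLER/tableConfigs" \
      -H 'Content-Type: application/json' \
      -d @/config/tableConfigs.json
  fi
}
```

Note this only avoids the 400 on re-runs; it does not by itself fix a config that is bound to dead instances.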
l
where do you see the bad status of the tables?
maybe there’s something else happening preventing the servers from coming back online?
d
There are new servers/brokers, but the tables are added/associated with the dead ones. I have a bug filed on that issue - https://github.com/apache/pinot/issues/9793