# general
hey folks, does anyone have experience with “zombie” tables? I attempted to add a tableSpec but was missing a field in the config, which caused a null pointer exception when Pinot processed it. As a result, the table doesn’t exist in the UI or via the list API, but I can’t create a table with the same name because I get a “table already exists” error. Presumably the spec reached ZK but nowhere else? What can be done to remove the zombie table?
m
Hmm this seems like a bug to fix. Could you please file a GH issue?
p
i ended up in a similar situation earlier this week. the pinot cluster couldn't route to the kafka cluster, so when i submitted the table config, the table was never created externally or visible via the UI. in fact, the swagger API to get all tables didn't return it in the result. however, if i tried creating the table again, the cluster would respond with an error saying the table already exists.
n
I'm assuming you already tried to invoke the DELETE /tables/tablename API but that didn't work? I've had this happen once before, and invoking a delete cleared out the zombie state.
p
Yep delete didn't work.
☝️ 1
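Since DELETE /tables didn't clear the state, the leftover spec presumably lives in Zookeeper. Below is a hedged sketch of building requests against the controller's ZK browser REST endpoints (/zk/get, /zk/delete) to inspect and, carefully, remove the stale table-config znode. The controller URL, the cluster name "pinot" in the znode path, and the table name are all assumptions for illustration, not details confirmed in this thread; double-check the exact path in your controller's Zookeeper browser before deleting anything.

```python
# Sketch: locate a zombie table's leftover config znode via the Pinot
# controller's ZK REST endpoints. All names below are assumptions.
import urllib.parse

CONTROLLER = "http://localhost:9000"  # assumed controller address
TABLE = "myTable_REALTIME"            # hypothetical table name

def zk_url(endpoint: str, zk_path: str) -> str:
    """Build a controller ZK API URL (/zk/get, /zk/ls, /zk/delete) for a znode path."""
    return f"{CONTROLLER}/zk/{endpoint}?path={urllib.parse.quote(zk_path, safe='')}"

# Helix keeps table configs under the cluster's property store;
# the leading "/pinot" cluster-name segment is an assumption.
config_znode = f"/pinot/PROPERTYSTORE/CONFIGS/TABLE/{TABLE}"

print(zk_url("get", config_znode))     # inspect the stale config first
print(zk_url("delete", config_znode))  # then remove it (use with care)
```

After deleting the stale znode, creating a table with the same name should no longer hit “table already exists” — though filing the GH issue is still worthwhile, since the controller shouldn't leave half-written state behind.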
m
Let's please file a GH issue; it helps track and fix these.
cc: @User
t
Hi @User and @User, do you have an example table config with the missing field that created the zombie table? Trying to re-create this bug locally.
n
I had the same issue multiple times last week:
• Try to addTable when a config error or internal error happens (for example, specify an SSL path that does not exist)
• It returns an error in the logs; sometimes you also get a "cannot build kafka consumer" error
• Using the REST API / UI or pinot tools does not show any tables
• Deleting with the REST API doesn't work (as GET returns an empty array)
• Trying to create the table again gives an error; I already talked about that with @User last week
In my case I'm on k8s, so I deleted all PVCs (data) and redeployed to get rid of the issue.
example config that would raise the issue (note: the original paste was missing a comma after the ssl.keystore.location line and its closing braces):
```json
"tableIndexConfig": {
    "loadMode": "MMAP",
    "nullHandlingEnabled": true,
    "streamConfigs": {
        "streamType": "kafka",
        "security.protocol": "SSL",
        "ssl.truststore.type": "PKCS12",
        "ssl.keystore.type": "PKCS12",
        "ssl.truststore.location": "/I/DONT_EXIST.p12",
        "ssl.keystore.location": "/I/DONT_EXIST.p12",
        "ssl.truststore.password": "${SSL_TRUSTSTORE_PASSWORD}",
        "ssl.keystore.password": "${SSL_KEYSTORE_PASSWORD}"
    }
}
```
👍 1
(I removed the broker list, etc., but you can take the ssl. block)
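The two failure triggers in this thread (malformed JSON and SSL store paths that don't exist on the servers) can both be caught locally before the controller ever sees the config. A minimal pre-submit check, sketched below; the required-key list is an assumption for illustration, not Pinot's actual server-side validation:

```python
# Sketch: lint a table-config JSON string locally before POSTing it to the
# controller. The set of checked keys is an assumption, not Pinot's validator.
import json
import os

def check_table_config(raw: str) -> list[str]:
    """Return a list of problems found in a raw table-config JSON string."""
    try:
        cfg = json.loads(raw)
    except json.JSONDecodeError as e:
        # Catches e.g. a missing comma between entries
        return [f"invalid JSON at line {e.lineno}, col {e.colno}: {e.msg}"]
    problems = []
    stream = cfg.get("tableIndexConfig", {}).get("streamConfigs", {})
    if "streamType" not in stream:
        problems.append("missing streamConfigs key: streamType")
    # SSL stores must exist on disk, or Kafka consumer creation fails
    for key in ("ssl.truststore.location", "ssl.keystore.location"):
        path = stream.get(key)
        if path and not os.path.exists(path):
            problems.append(f"{key} does not exist: {path}")
    return problems

broken = '{"tableIndexConfig": {"streamConfigs": {"streamType": "kafka" "x": 1}}}'
print(check_table_config(broken))
```

If your controller version exposes a POST /tables/validate endpoint, running the config through that as well may catch further server-side problems before the actual create call.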
I put that in the GitHub issue.
m
Thanks @User , link to GH issue?
n