# troubleshooting
s
So we found ourselves in a position where we had to decrease kafka partitions for realtime and ended up deleting our consuming segments. We are looking for how we tell Pinot to start creating consuming segments again.
m
@saurabh dubey @Neha Pawar ^^
Could you try the reset api @Stuart Millholland
s
ok trying that
We run this
And still get this
```json
{
  "_segmentToConsumingInfoMap": {}
}
```
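(The empty `_segmentToConsumingInfoMap` above matches the response shape of the controller's consumingSegmentsInfo endpoint; a sketch of that check, assuming the same controller URL and table name used in the reset call later in this thread:)

```shell
# Assumption: controller at localhost:9000 and table immutable_events_REALTIME,
# as used elsewhere in this thread. This endpoint reports which segments are
# currently in CONSUMING state; an empty map means none are consuming.
curl -X GET \
  "http://localhost:9000/tables/immutable_events_REALTIME/consumingSegmentsInfo" \
  -H "accept: application/json"
```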
m
What command did you run to reset?
s
```shell
curl -X POST "http://localhost:9000/segments/immutable_events_REALTIME/reset" -H "accept: application/json"
```
We've tried changing things in the table config, like replication and the Kafka topic name, and tried increasing the number of Kafka partitions. We can't seem to get the table to create consuming segments again after we manually nuked them all.
n
reset won’t help if you’ve deleted the CONSUMING segments. try this one https://github.com/apache/pinot/pull/8663
s
We will try
n
though i think this is not part of 0.10.0. are you using latest docker image or 0.10.0?
s
oh yikes yeah we are 0.10
so we are still in development and can just nuke the realtime table and restart; we're trying to pretend this is prod though, so we're seeing if we can get out of this situation
n
resumeConsumption would’ve worked, from future releases. it was designed for exactly such situations
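(For reference, a hedged sketch of the resumeConsumption call from the PR linked above, which lands in releases after 0.10.0; the controller URL and table name are assumptions carried over from earlier in the thread:)

```shell
# Assumption: controller at localhost:9000, table immutable_events_REALTIME,
# and a Pinot release newer than 0.10.0 that includes the pause/resume APIs.
# resumeConsumption asks the controller to start creating CONSUMING segments
# again for a realtime table.
curl -X POST \
  "http://localhost:9000/tables/immutable_events_REALTIME/resumeConsumption" \
  -H "accept: application/json"
```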
s
ok, good to know that's coming!