# troubleshooting
Hi team, we're running Pinot (0.10.0) consuming from Kafka, and yesterday noticed that it marked the table as BAD with the message "Ideal segment count:8 does not match external segment count: 454". Looking at the logs I found the following messages:
```
2022/10/12 02:01:04.242 INFO [PeriodicTaskScheduler] [pool-8-thread-5] Starting RetentionManager with running frequency of 21600 seconds.
2022/10/12 02:01:04.242 INFO [BasePeriodicTask] [pool-8-thread-5] [TaskRequestId: auto] Start running task: RetentionManager
2022/10/12 02:01:04.244 INFO [ControllerPeriodicTask] [pool-8-thread-5] Processing 1 tables in task: RetentionManager
2022/10/12 02:01:04.251 INFO [RetentionManager] [pool-8-thread-5] Start managing retention for table: events_REALTIME
2022/10/12 02:01:05.369 WARN [TimeRetentionStrategy] [pool-8-thread-5] Segment: events__1__105__20220929T1204Z of table: events_REALTIME has invalid end time in millis: 9011824788000
2022/10/12 02:01:05.370 INFO [RetentionManager] [pool-8-thread-5] Deleting 449 segments from table: events_REALTIME
```
Could that invalid end time have something to do with the state mismatch?
The segment in question is still listed as good, with its metadata being:
```json
{
  "segment.crc": "3216322958",
  "segment.creation.time": "1664453041719",
  "segment.download.url": "s3://deep-store/segments/events/events__1__105__20220929T1204Z",
  "segment.end.time": "9011824788000",
  "segment.flush.threshold.size": "186190",
  "segment.index.version": "v3",
  "segment.realtime.endOffset": "786277459",
  "segment.realtime.numReplicas": "1",
  "segment.realtime.startOffset": "786091269",
  "segment.realtime.status": "DONE",
  "segment.start.time": "1556399149000",
  "segment.time.unit": "MILLISECONDS",
  "segment.total.docs": "186190"
}
```
Note the start and end times.
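As a quick sanity check on those values, here's a small Python sketch (the millisecond timestamps are copied from the metadata above) that converts them to UTC dates; the start time lands in April 2019, while the end time resolves to the year 2255, so some event in the segment appears to carry a far-future timestamp:

```python
from datetime import datetime, timezone

# Timestamps from segment.start.time / segment.end.time above,
# in epoch milliseconds (segment.time.unit is MILLISECONDS).
start_ms = 1556399149000
end_ms = 9011824788000

for label, ms in [("start", start_ms), ("end", end_ms)]:
    # Convert epoch millis to an aware UTC datetime.
    dt = datetime.fromtimestamp(ms / 1000, tz=timezone.utc)
    print(f"segment.{label}.time -> {dt.isoformat()}")
```

If I'm reading the warning right, an end time that far outside any sane window is why TimeRetentionStrategy logs "invalid end time in millis" for this segment.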