# general
a
Hi team, I’m a little confused. My table has stopped consuming data from a Kafka topic, but a new table with the same schema and table config can ingest from the same topic. New data is consistently being written to this Kafka topic. Any idea why this is happening? I’ve tried restarting the controller, broker, and server nodes, but it didn’t help.
m
Check the debug endpoint and the server logs
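For reference, a minimal sketch of that check, assuming a controller on localhost:9000 and a hypothetical table named metrics_star_REALTIME (both the port and the table name are assumptions, not from the thread; check the Swagger UI for the exact path in your Pinot version):
```bash
# Assumption: controller at localhost:9000; the table name is hypothetical.
curl "http://localhost:9000/debug/tables/metrics_star_REALTIME?type=REALTIME"

# And tail the server log for the topic's consumer activity:
grep "metrics-star-topic" logs/pinot-server.log | tail -n 20
```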
a
Server logs for one table look like the following:
Consumed 30443 events from (rate:493.94794/s), currentOffset=391746, numRowsConsumedSoFar=141746, numRowsIndexedSoFar=141746
[Consumer clientId=consumer-null-37, groupId=null] Seeking to offset 391746 for partition metrics-star-topic-0
But for the other table, the output is:
Consumed 0 events from (rate:0.0/s), currentOffset=3266056, numRowsConsumedSoFar=0, numRowsIndexedSoFar=0
Metrics scheduler closed
Closing reporter org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
Every time I add new data to the Kafka topic, the currentOffset changes, but no new records show up in the Pinot table.
m
When you say stopped consuming:
1. Was it consuming before?
2. Were there any errors in the server log when it stopped consuming?
3. Were there any other changes on the Kafka side? For example, did you delete and recreate the topic, or do anything that would have made the offset saved in Pinot incompatible with the offsets available in Kafka?
a
1. Yes, it was consuming before. 2. I didn’t notice any errors. 😅 3. I think I just restarted the Kafka server once, but didn’t change the topic.
m
My guess is that the offset saved in Pinot no longer exists on the Kafka side (what’s the retention on the Kafka side?).
a
7 days
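With 7-day retention, any offset older than that is gone from the broker. A quick way to confirm what Kafka still has (standard Kafka CLI; exact flag spellings vary a bit by Kafka version; broker and topic mirror the thread):
```bash
# Topic-level retention override, if any was set:
./bin/kafka-configs.sh --bootstrap-server ysuo-macOS:19092 \
  --entity-type topics --entity-name metrics-star-topic --describe

# Earliest (--time -2) and latest (--time -1) offsets still on the broker;
# if Pinot's saved offset is below the earliest, it cannot resume from it:
./bin/kafka-run-class.sh kafka.tools.GetOffsetShell \
  --broker-list ysuo-macOS:19092 --topic metrics-star-topic --time -2
./bin/kafka-run-class.sh kafka.tools.GetOffsetShell \
  --broker-list ysuo-macOS:19092 --topic metrics-star-topic --time -1
```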
Can I modify the table config by setting `stream.kafka.consumer.prop.auto.offset.reset` to a timestamp to make consumption resume?
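For reference, a minimal sketch of where that property sits in a realtime table config; the topic and broker values mirror the thread, everything else is illustrative. Per the Pinot docs this property accepts smallest, largest, or a timestamp, but it generally only applies when a consuming segment starts without a usable saved offset, so it may not rescue an already-stuck segment:
```json
{
  "tableIndexConfig": {
    "streamConfigs": {
      "streamType": "kafka",
      "stream.kafka.topic.name": "metrics-star-topic",
      "stream.kafka.broker.list": "ysuo-macOS:19092",
      "stream.kafka.consumer.type": "lowlevel",
      "stream.kafka.consumer.prop.auto.offset.reset": "smallest"
    }
  }
}
```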
The table debug info is as below.
@User, I think you’re right. When I run this command: `./bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list ysuo-macOS:19092 --topic metrics-star-topic`, the offset is much smaller than the one listed in the server log.
When I added more records to the Kafka topic until the offset was larger than the one listed in the server log, the table started consuming data again.
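For anyone reproducing that workaround: the console producer is enough to push the latest offset past the one Pinot saved (broker and topic mirror the thread):
```bash
# Type or pipe records until the topic's latest offset exceeds the
# offset Pinot has saved for the consuming segment:
./bin/kafka-console-producer.sh --broker-list ysuo-macOS:19092 \
  --topic metrics-star-topic
```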
m
I think we are adding a REST API to trigger a restart of ingestion from the largest available offset. It is not automatic because there is no way for Pinot to auto-derive the right offset once such a discrepancy is created upstream. cc: @User
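For readers finding this thread later: this later shipped in Pinot as pause/resume consumption controller endpoints. The sketch below assumes the resumeConsumption endpoint with a consumeFrom parameter and the hypothetical table name from above, so verify the exact path and parameter against your version's Swagger UI:
```bash
# Assumption: Pinot's later resumeConsumption controller API; the endpoint
# path, parameter name, port, and table name may differ in your deployment.
curl -X POST \
  "http://localhost:9000/tables/metrics_star_REALTIME/resumeConsumption?consumeFrom=largest"
```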