# troubleshooting
m
https://apache-pinot.slack.com/archives/CDRCA57FC/p1602711095152100 I'm following up on my thread about real-time ingestion here.
n
can you share your table config and schema?
m
revenue-schema.json, revenue-table-rt.json
n
the issue might be the schema name. The table config has `"schemaName": "revenue"`, whereas the schema has `"schemaName": "revenue_test_murat"`.
Does your pinot-server log show absolutely no warning/error/exception message?
and which version of Pinot are you on? Newer versions should have blocked creating a table config with a missing schema
m
I'll re-verify your point about the schema mismatch
we're running 0.5.0
OK, I verified the schema @Neha Pawar: the table's schema is revenue_test_murat
I'm tailing the server log
and it doesn't print anything
n
did you delete and recreate the table config after fixing the schema name?
m
I did, and I can retry. But if that was the issue, why would it ingest only the first 50,000 rows? `"segment.flush.threshold.size": "50000"`
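For reference, that threshold sits in the realtime table config's stream settings; a minimal sketch assuming the standard streamConfigs layout (the flush key and value are quoted from the message above; the topic name and other fields are assumptions):

```json
{
  "tableName": "revenue",
  "tableType": "REALTIME",
  "tableIndexConfig": {
    "streamConfigs": {
      "streamType": "kafka",
      "stream.kafka.topic.name": "revenue-events",
      "segment.flush.threshold.size": "50000"
    }
  }
}
```

Reaching this threshold is what triggers sealing the in-memory segment, which is why the row count where ingestion stalls matches it exactly.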
n
because it's not able to complete the segment. The consumer consumed 50k rows based on this config and is not able to move forward because segment creation is failing
m
batch processing works, by the way
but it's another table (offline) AFAIK
n
can you share the whole controller and server logs?
m
```
Could not build segment
java.lang.NullPointerException: null
        at org.apache.pinot.core.segment.creator.impl.SegmentColumnarIndexCreator.addColumnMetadataInfo(SegmentColumnarIndexCreator.java:535) ~[pinot-all-0.5.0-jar-with-dependencies.jar:0.5.0-d87bbc9032c6efe626eb5f9ef1db4de7aa067179]
        at org.apache.pinot.core.segment.creator.impl.SegmentColumnarIndexCreator.writeMetadata(SegmentColumnarIndexCreator.java:489) ~[pinot-all-0.5.0-jar-with-dependencies.jar:0.5.0-d87bbc9032c6efe626eb5f9ef1db4de7aa067179]
        at org.apache.pinot.core.segment.creator.impl.SegmentColumnarIndexCreator.seal(SegmentColumnarIndexCreator.java:399) ~[pinot-all-0.5.0-jar-with-dependencies.jar:0.5.0-d87bbc9032c6efe626eb5f9ef1db4de7aa067179]
        at org.apache.pinot.core.segment.creator.impl.SegmentIndexCreationDriverImpl.handlePostCreation(SegmentIndexCreationDriverImpl.java:240) ~[pinot-all-0.5.0-jar-with-dependencies.jar:0.5.0-d87bbc9032c6efe626eb5f9ef1db4de7aa067179]
        at org.apache.pinot.core.segment.creator.impl.SegmentIndexCreationDriverImpl.build(SegmentIndexCreationDriverImpl.java:223) ~[pinot-all-0.5.0-jar-with-dependencies.jar:0.5.0-d87bbc9032c6efe626eb5f9ef1db4de7aa067179]
        at org.apache.pinot.core.realtime.converter.RealtimeSegmentConverter.build(RealtimeSegmentConverter.java:127) ~[pinot-all-0.5.0-jar-with-dependencies.jar:0.5.0-d87bbc9032c6efe626eb5f9ef1db4de7aa067179]
        at org.apache.pinot.core.data.manager.realtime.LLRealtimeSegmentDataManager.buildSegmentInternal(LLRealtimeSegmentDataManager.java:742) [pinot-all-0.5.0-jar-with-dependencies.jar:0.5.0-d87bbc9032c6efe626eb5f9ef1db4de7aa067179]
        at org.apache.pinot.core.data.manager.realtime.LLRealtimeSegmentDataManager.buildSegmentForCommit(LLRealtimeSegmentDataManager.java:693) [pinot-all-0.5.0-jar-with-dependencies.jar:0.5.0-d87bbc9032c6efe626eb5f9ef1db4de7aa067179]
        at org.apache.pinot.core.data.manager.realtime.LLRealtimeSegmentDataManager$PartitionConsumer.run(LLRealtimeSegmentDataManager.java:604) [pinot-all-0.5.0-jar-with-dependencies.jar:0.5.0-d87bbc9032c6efe626eb5f9ef1db4de7aa067179]
        at java.lang.Thread.run(Thread.java:832) [?:?]
```
Pr
I've found an error
but after that, no matter how much data comes into Kafka, it does not log any further errors
n
Like I said before, consumption is going to stop if it cannot create the segment.
I don't know if the fix for this was included in 0.5.0. Checking
To unblock, you could try without `aggregateMetrics`, or build from source
yup, that fix is not part of 0.5.0. Could you build from source?
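A sketch of the suggested workaround, assuming aggregateMetrics sits under tableIndexConfig as in a standard realtime table config (// comments are annotations):

```json
{
  "tableIndexConfig": {
    // Drop or disable the flag so segment completion is no longer blocked:
    "aggregateMetrics": false,
    "streamConfigs": {
      "segment.flush.threshold.size": "50000"
    }
  }
}
```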
m
You're right, it worked without aggregation.
For the sake of the PoC I'll stop here. But I'm surprised to hit a bug in such a fundamental feature :(
Thx a lot for your help
n
I'm also surprised 🙂 this feature is being used in some places, I believe. So my hunch is that it is the combination of `aggregateMetrics: true` + `columnMinMaxValueGeneratorMode: ALL`. I have a feeling it may work fine if you remove columnMinMaxValueGeneratorMode. And FWIW, it has been fixed on master and will be available in the next release
@Mayank this flag is used at LinkedIn, right? How does it work in spite of this: https://github.com/apache/incubator-pinot/pull/5862 ? Is there something specific in this table config that might be triggering this?
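For reference, the two suspect settings side by side (both values are quoted from the messages above; aggregateMetrics is a tableIndexConfig flag, but the thread does not show where columnMinMaxValueGeneratorMode was set, so its placement below is purely an assumption):

```json
{
  "tableIndexConfig": {
    "aggregateMetrics": true
  },
  // Placement assumed for illustration only; not shown in the thread:
  "columnMinMaxValueGeneratorMode": "ALL"
}
```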
m
IIRC, this was a bug that got introduced and we hit the same problem at LinkedIn. I believe that #5862 fixed the issue
👍 1
@Neha Pawar ^^