
Tanmay Movva

12/02/2020, 4:31 AM
Hello, I am getting this error on one of the realtime tables
```
ERROR [LLRealtimeSegmentDataManager_spanEventView__0__15__20201201T0448Z] [spanEventView__0__15__20201201T0448Z] Could not build segment
java.lang.IllegalStateException: Cannot create output dir: /var/pinot/server/data/index/spanEventView_REALTIME/_tmp/tmp-spanEventView__0__15__20201201T0448Z-160688315943
```
Because of this, Pinot is not able to build segments / ingest data. How do I debug this?
Even when Pinot is not able to ingest and the lag is increasing, the table and segment status on the UI is GOOD. By the way, the state of this segment is CONSUMING.

Kishore G

12/02/2020, 4:53 AM
Anything else in the log?

Tanmay Movva

12/02/2020, 4:54 AM
No, info logs were not enabled; the only error was the one above. But I checked the path where it is trying to create the directory, and a directory already exists under that _tmp path.
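For context, java.io.File.mkdirs() returns false both when creation fails and when the directory already exists, so a stale directory left behind by an earlier failed build could plausibly trip a check like the one sketched below. This is a minimal illustration of that failure mode, not Pinot's actual segment-builder code, and the path is a shortened placeholder:

```java
import java.io.File;

public class TmpDirCheck {
  public static void main(String[] args) {
    // Placeholder path; the real one appears in the error message above.
    File tmpDir = new File("/var/pinot/server/data/index/spanEventView_REALTIME/_tmp/tmp-example");

    // mkdirs() returns true only if it actually created the directory.
    // A leftover directory from a previous run makes it return false,
    // which a naive check turns into "Cannot create output dir".
    if (!tmpDir.mkdirs()) {
      throw new IllegalStateException("Cannot create output dir: " + tmpDir);
    }
    System.out.println("Created " + tmpDir);
  }
}
```

If that is what is happening, clearing the stale entries under _tmp would let the next build succeed.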

Kishore G

12/02/2020, 4:55 AM
Are any other segments getting created?

Tanmay Movva

12/02/2020, 4:55 AM
Yes. Only this table was affected.

Kishore G

12/02/2020, 4:56 AM
I am guessing you have enough disk space?

Tanmay Movva

12/02/2020, 4:57 AM
Yes.

Xiang Fu

12/02/2020, 4:58 AM
Is this directory local or remote?

Tanmay Movva

12/02/2020, 4:59 AM
This directory is on the volume attached to the pod. So local.

Xiang Fu

12/02/2020, 4:59 AM
Is your volume a local disk, or remote like EBS?

Tanmay Movva

12/02/2020, 4:59 AM
An EBS volume.

Xiang Fu

12/02/2020, 5:00 AM
Are all segment persists failing on that volume? Can you try to create a file on that volume and access it through the Pinot server container?
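One way to act on that suggestion from inside the server container: a small probe that writes, reads back, and deletes a file on the mounted volume. The data directory below is taken from the error message earlier and may need adjusting:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class VolumeWriteCheck {
  public static void main(String[] args) throws IOException {
    // Directory on the EBS-backed volume; adjust to your server's dataDir.
    Path dataDir = Paths.get("/var/pinot/server/data/index");

    // Create, write, read back, and delete a probe file to confirm the
    // mount is writable and readable from inside the container.
    Path probe = Files.createTempFile(dataDir, "write-check-", ".tmp");
    Files.write(probe, "ok".getBytes());
    System.out.println("Read back: " + new String(Files.readAllBytes(probe)));
    Files.delete(probe);
  }
}
```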

Tanmay Movva

12/02/2020, 5:02 AM
I was able to do that. The remaining realtime segments are still able to ingest and serve.

Xiang Fu

12/02/2020, 5:04 AM
Hmm. Is there any log inside the Pinot server container? There should be a file pinotServer.log.

Tanmay Movva

12/02/2020, 6:04 AM
I deleted and recreated the table, and now everything is running fine. Not sure what the issue was. @Xiang Fu I will check pinotServer.log and share if I find anything.
One more thought: when the freshness metric for a table is increasing for any reason (server unresponsive, error connecting to Kafka, etc.), shouldn't the status of the table be "not GOOD"? When I checked the table and segment status on the UI while facing this issue, all statuses were GOOD.

Xiang Fu

12/02/2020, 6:10 AM
Agreed. If you can provide more info, we can fix this. We also need to distinguish this from the scenario where the Kafka upstream has no data coming in. In certain cases, we need to define what "good" status means.

Tanmay Movva

12/02/2020, 6:13 AM
"We also need to distinguish this from the scenario where the Kafka upstream has no data coming in."
In that scenario, comparing the last consumed offset in Pinot with the latest available offset in Kafka should help, yes? Also, I am assuming the current freshness metric is measured from the current time, so if no data is available in Kafka, it would keep increasing.
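A rough sketch of that comparison using the plain Kafka consumer API. The broker address, topic, partition, and lastConsumedOffset are all placeholders; in practice the last consumed offset would come from Pinot's segment metadata rather than be hardcoded:

```java
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class ConsumerLagCheck {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
    props.put("key.deserializer", ByteArrayDeserializer.class.getName());
    props.put("value.deserializer", ByteArrayDeserializer.class.getName());

    // Placeholder topic/partition and the last offset reported as consumed.
    TopicPartition tp = new TopicPartition("spanEventView", 0);
    long lastConsumedOffset = 123_456L;

    try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
      // endOffsets() returns the next offset to be written per partition,
      // i.e. the latest available position in Kafka.
      Map<TopicPartition, Long> end = consumer.endOffsets(Collections.singleton(tp));
      long lag = end.get(tp) - lastConsumedOffset;

      // Zero lag with a growing freshness age suggests no new upstream data;
      // a positive, growing lag means ingestion is genuinely behind.
      System.out.println("Consumer lag for " + tp + ": " + lag);
    }
  }
}
```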