# troubleshooting
  • v

    Varun Srivastava

    01/20/2021, 7:53 AM
    Hi @Yupeng Fu
  • v

    Varun Srivastava

    01/20/2021, 7:53 AM
    I was going through doc https://docs.pinot.apache.org/basics/data-import/upsert. I have 2 query
  • m

    Mayank

    01/23/2021, 7:06 PM
    Since your inverted index size is huge, it's better to build it offline
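    For reference, a minimal sketch of the table-config fragment that moves inverted-index building to segment generation time rather than server load time; the column name is a placeholder and the exact flag should be verified against the docs for your Pinot version:

        "tableIndexConfig": {
          "invertedIndexColumns": ["myHighCardinalityColumn"],
          "createInvertedIndexDuringSegmentGeneration": true
        }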
  • s

    Suraj

    01/28/2021, 6:55 PM
    2021/01/22 00:15:36.706 ERROR [DataTableHandler] [nioEventLoopGroup-2-3] Caught exception while handling response from server: pinot-server-3_R
    java.lang.OutOfMemoryError: Direct buffer memory
    at java.nio.Bits.reserveMemory(Bits.java:175) ~[?:?]
    at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:118) ~[?:?]
    at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:317) ~[?:?]
    at io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:758) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at io.netty.buffer.PoolArena$DirectArena.newUnpooledChunk(PoolArena.java:748) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at io.netty.buffer.PoolArena.allocateHuge(PoolArena.java:260) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at io.netty.buffer.PoolArena.allocate(PoolArena.java:232) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at io.netty.buffer.PoolArena.reallocate(PoolArena.java:397) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at io.netty.buffer.PooledByteBuf.capacity(PooledByteBuf.java:119) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at io.netty.buffer.AbstractByteBuf.ensureWritable0(AbstractByteBuf.java:310) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at io.netty.buffer.AbstractByteBuf.ensureWritable(AbstractByteBuf.java:281) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1118) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1111) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1102) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at io.netty.handler.codec.ByteToMessageDecoder$1.cumulate(ByteToMessageDecoder.java:96) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:281) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1422) [pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:931) [pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:700) [pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:635) [pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:552) [pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:514) [pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1044) [pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at java.lang.Thread.run(Thread.java:834) [?:?]
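    For context, this OutOfMemoryError means the JVM's direct (off-heap) buffer pool was exhausted while Netty was buffering a large data-table response from a server; the usual mitigations are to raise the direct-memory cap or to shrink the responses (fewer columns, lower LIMIT, more selective filters). A minimal sketch, assuming the process is launched through pinot-admin.sh and that the start script picks up JAVA_OPTS; the sizes and cluster/host names are placeholders to tune for your deployment:

        # hypothetical sizing; tune heap and off-heap for your workload
        export JAVA_OPTS="-Xms4G -Xmx4G -XX:MaxDirectMemorySize=8G"
        bin/pinot-admin.sh StartBroker -zkAddress pinot-zookeeper:2181 -clusterName PinotCluster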
  • k

    Kishore G

    01/29/2021, 7:33 PM
    This is amazing!
  • d

    Daniel Lavoie

    01/29/2021, 7:35 PM
    I don’t recall seeing this flag in the default args of the helm chart.
  • a

    Alexander Vivas

    02/04/2021, 4:01 PM
    Guys, is there any way we can manually ingest some data from BigQuery into an already created table? For some reason Zookeeper stopped working and entered a reboot loop, and that messed up everything: the segments were no longer accessible and our broker is still restarting. We tried several things and none of them seemed to work. It was working yesterday, we didn't make any changes to the infrastructure configuration, and we can't find the source of it all yet.
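    On the manual-ingestion part of the question: one hedged option is to export the BigQuery data to files and run a Pinot batch ingestion job against them to backfill the existing table. A rough sketch, with hypothetical project, bucket and spec names; the export format and the job spec contents depend on your setup:

        # export the BigQuery table to CSV files in GCS (placeholder names)
        bq extract --destination_format=CSV 'my_project:my_dataset.my_table' 'gs://my-bucket/export/part-*.csv'

        # generate and push segments into the existing Pinot table using a batch job spec
        bin/pinot-admin.sh LaunchDataIngestionJob -jobSpecFile /path/to/ingestion-job-spec.yaml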
  • a

    Alexander Vivas

    02/04/2021, 4:19 PM
    It just keeps writing this to the logs non-stop: ERROR [MessageGenerationPhase] [HelixController-pipeline-default-mls-(eb0a8635_DEFAULT)] Event eb0a8635_DEFAULT : Unable to find a next state for resource: dpt_video_event_captured_v2_REALTIME partition: dpt_video_event_captured_v2__0__24203__20210124T1614Z from stateModelDefinitionclass org.apache.helix.model.StateModelDefinition from:ERROR to:ONLINE
  • a

    Alexander Vivas

    02/04/2021, 4:19 PM
    It's no longer consuming data from kafka 😞
  • k

    Kishore G

    02/04/2021, 4:19 PM
    what happened?
  • k

    Kishore G

    02/04/2021, 4:20 PM
    why did zk go down?
  • a

    Alexander Vivas

    02/04/2021, 4:21 PM
    We still don't know. Today we saw our analytics dashboards in bad shape, and when we had a look at the infra we saw this
  • k

    Kishore G

    02/04/2021, 4:21 PM
    btw, the segments will be in segment store
  • k

    Kishore G

    02/04/2021, 4:21 PM
    you can always bring everything back up
  • a

    Alexander Vivas

    02/04/2021, 4:22 PM
    Screenshot 2021-02-04 at 17.21.42.png
  • m

    Matt

    02/04/2021, 4:34 PM
    @Alexander Vivas I had issues with zookeeper, and running 5 instances of it seems to help a lot; that allows 2 instances to go down safely. Also I had to bump the JVM Xmx to match the load.
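    For reference, a hedged sketch of what that could look like if ZooKeeper is deployed through the Pinot Helm chart's bundled ZooKeeper dependency; the key names (replicaCount, heap setting) vary by chart version, so treat them as assumptions and check your chart's values:

        # values.yaml override (hypothetical keys; verify against your chart version)
        zookeeper:
          replicaCount: 5          # a quorum of 5 tolerates 2 failed members
          env:
            ZK_HEAP_SIZE: "2G"     # bump heap to match the load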
  • d

    Daniel Lavoie

    02/04/2021, 4:35 PM
    You should start with investigating the errors from Zookeeper.
  • d

    Daniel Lavoie

    02/04/2021, 4:35 PM
    Get to the root cause of the pod crashing.
  • a

    Alexander Vivas

    02/04/2021, 4:41 PM
    Yeah, it was only one pod that started rebooting like crazy; we used to work with 3 instances
  • d

    Daniel Lavoie

    02/04/2021, 4:44 PM
    What is the root cause of the restart?
  • m

    Matt

    02/04/2021, 4:51 PM
    Maybe try replacing the zookeeper snapshot folder on instance 1 with the working one from 0 or 2 and restart. It may work
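    A very rough sketch of that idea in a Kubernetes setup, assuming a StatefulSet named zookeeper and a data directory of /data; the pod names, paths, and whether this is safe for your data are all assumptions, so take backups first:

        # back up the broken member's data, then copy the snapshot/log dir from a healthy member
        kubectl exec zookeeper-1 -- mv /data/version-2 /data/version-2.bak
        kubectl exec zookeeper-0 -- tar -C /data -cf - version-2 | kubectl exec -i zookeeper-1 -- tar -C /data -xf -
        kubectl delete pod zookeeper-1   # let the StatefulSet restart it with the copied data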
  • a

    Ashish

    02/07/2021, 12:26 AM
    Is this problematic or something that can be ignored?
  • p

    Pradeep

    02/09/2021, 7:07 PM
    Wondering if anyone knows what’s happening?
  • d

    Devashish Gupta

    02/10/2021, 11:10 AM
    Can it be done with a similar job, passing UpdateTable as the args along with the updated schema?
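    If a job-based approach doesn't pan out, another hedged option is to call the controller REST API directly; the endpoint paths below follow the documented schema and table-config update routes, but verify them against your Pinot version, and the host and file names are placeholders:

        # update the schema, then the table config, on the controller (placeholder host/files)
        curl -X PUT -H "Content-Type: application/json" \
          -d @my_schema.json http://pinot-controller:9000/schemas/my_schema
        curl -X PUT -H "Content-Type: application/json" \
          -d @my_table_config.json http://pinot-controller:9000/tables/my_table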
  • s

    sagar

    02/11/2021, 7:14 AM
    Is there any config in the jobSpec where we can ignore some prefix or suffix of files? Like, the folder has CSV files and other formats, but we want it to ignore the others.
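    For what it's worth, the batch ingestion job spec does have include/exclude file-name pattern fields; a minimal sketch with placeholder paths (check the exact field names against the docs for your version):

        # ingestion-job-spec.yaml (fragment, placeholder paths)
        inputDirURI: 's3://my-bucket/raw/'
        includeFileNamePattern: 'glob:**/*.csv'   # only pick up CSV files
        excludeFileNamePattern: 'glob:**/*.tmp'   # skip temp/other files
        outputDirURI: 's3://my-bucket/segments/'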
  • j

    jose farfan

    02/13/2021, 3:07 AM
    Hi, I have a basic Pinot deploy with a realtime table. Everything was working OK for the first 2 days, but now I am getting an error with this query: "SELECT player_nr, processTime, id FROM transaction_line_REALTIME LIMIT 214748364"