# troubleshooting

    Mayank

    07/22/2020, 5:15 AM
It is possible that those queries are waiting on the server side long enough that they hit the timeout by the time they start executing.

    Yash Agarwal

    07/22/2020, 12:48 PM
    Hey, I am getting the following error.
    Caused by: java.lang.ClassCastException: cannot assign instance of java.lang.invoke.SerializedLambda to field org.apache.spark.api.java.JavaRDDLike$$anonfun$foreach$1.f$14 of type org.apache.spark.api.java.function.VoidFunction in instance of org.apache.spark.api.java.JavaRDDLike$$anonfun$foreach$1
    I am using
    PINOT_VERSION=0.4.0
With the following version configs overridden to match the environment:
    <scala.version>2.11.8</scala.version>
    <spark.version>2.3.1.tgt.17</spark.version> (which is specific to target)
    Env:
    Spark version 2.3.1.tgt.17
    Using Scala version 2.11.8, Java HotSpot(TM) 64-Bit Server VM, 1.8.0_73
    Run Command:
    spark-submit \
      --class org.apache.pinot.tools.admin.command.LaunchDataIngestionJobCommand \
      --master yarn \
      --deploy-mode client \
      --conf "spark.driver.extraJavaOptions=-Dplugins.dir=${PINOT_DISTRIBUTION_DIR}/plugins -Dlog4j2.configurationFile=${PINOT_DISTRIBUTION_DIR}/conf/pinot-ingestion-job-log4j2.xml" \
      --conf "spark.driver.extraClassPath=${PINOT_DISTRIBUTION_DIR}/lib/pinot-all-${PINOT_VERSION}-jar-with-dependencies.jar" \
      local://${PINOT_DISTRIBUTION_DIR}/lib/pinot-all-${PINOT_VERSION}-jar-with-dependencies.jar \
      -jobSpecFile /home_dir/z00290g/guestSdrGstDataSgl_sparkIngestionJobSpec.yaml
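A SerializedLambda ClassCastException like the one above often means the application jar is on the driver classpath but was never shipped to the executors, so Spark cannot deserialize the lambda there. A variant worth trying (untested here; it only adds the jar to the executors via --jars and spark.executor.extraClassPath, everything else as above):

```shell
spark-submit \
  --class org.apache.pinot.tools.admin.command.LaunchDataIngestionJobCommand \
  --master yarn \
  --deploy-mode client \
  --conf "spark.driver.extraJavaOptions=-Dplugins.dir=${PINOT_DISTRIBUTION_DIR}/plugins -Dlog4j2.configurationFile=${PINOT_DISTRIBUTION_DIR}/conf/pinot-ingestion-job-log4j2.xml" \
  --conf "spark.driver.extraClassPath=${PINOT_DISTRIBUTION_DIR}/lib/pinot-all-${PINOT_VERSION}-jar-with-dependencies.jar" \
  --conf "spark.executor.extraClassPath=pinot-all-${PINOT_VERSION}-jar-with-dependencies.jar" \
  --jars "${PINOT_DISTRIBUTION_DIR}/lib/pinot-all-${PINOT_VERSION}-jar-with-dependencies.jar" \
  local://${PINOT_DISTRIBUTION_DIR}/lib/pinot-all-${PINOT_VERSION}-jar-with-dependencies.jar \
  -jobSpecFile /home_dir/z00290g/guestSdrGstDataSgl_sparkIngestionJobSpec.yaml
```

Note that with --jars the jar lands in each executor's working directory, so spark.executor.extraClassPath references the bare filename.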

    Yash Agarwal

    07/23/2020, 4:48 PM
What is the correct schema for a date column? I am using the following:
    {
      "name": "sls_d",
      "dataType": "STRING",
      "format": "1:DAYS:SIMPLE_DATE_FORMAT:yyyy-MM-dd",
      "granularity": "1:DAYS"
    }
but I am getting:
    Caused by: java.lang.IllegalArgumentException: Invalid format: "null"
    	at org.joda.time.format.DateTimeParserBucket.doParseMillis(DateTimeParserBucket.java:187) ~[pinot-all.jar:0.4.0-8355d2e0e489a8d127f2e32793671fba505628a8]
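For reference, in the Pinot schema a date column of this shape is usually declared under dateTimeFieldSpecs rather than as a bare field; the "Invalid format: null" error is consistent with the format string not being picked up from wherever the field was placed. A minimal sketch (the enclosing dateTimeFieldSpecs key is the assumption here; the field values are copied from above):

```json
{
  "dateTimeFieldSpecs": [
    {
      "name": "sls_d",
      "dataType": "STRING",
      "format": "1:DAYS:SIMPLE_DATE_FORMAT:yyyy-MM-dd",
      "granularity": "1:DAYS"
    }
  ]
}
```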

    Xiang Fu

    07/24/2020, 5:14 AM
Do you know the input path?

    Pradeep

    07/25/2020, 2:06 AM
@Kartik Khare or anyone else, could you take a look at this PR when you get a chance? https://github.com/apache/incubator-pinot/pull/5755 (Adding default credentials provider, uses common credential provider chain and a bug fix)

    Ravi Singal

    07/25/2020, 12:05 PM
We are running Pinot on a Kubernetes cluster. The Pinot server pods take a lot of time to come up during a rolling restart; each pod takes around 4-5 minutes. Is this time proportional to the number of segments on the server? How can I reduce the startup time of the servers?

    Pradeep

    07/25/2020, 11:04 PM
    Hi, I am trying to use
    KafkaConfluentSchemaRegistryAvroMessageDecoder
    We have a schema registry set up with SSL authentication. I am getting
    SSLHandshakeException
Wondering what is the proper way to pass the SSL cert configs for the schema registry client? I dug a bit into the code, and it seems like Pinot needs to update the schema-registry-client to include this (https://github.com/confluentinc/schema-registry/pull/957) along with some code changes; it may be doable without that as well. Wanted to check first whether there is an alternative way to accomplish this?

    Pradeep

    07/26/2020, 6:38 PM
Sorry, I am facing another issue while querying the table after fixing the above issue with this code change (https://github.com/apache/incubator-pinot/pull/5758). The KafkaConsumer seems to be working fine now based on the logs on the server node.
    Caught exception while processing instance request
    java.lang.NoSuchMethodError: java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;
            at org.apache.pinot.core.common.datatable.DataTableBuilder.setColumn(DataTableBuilder.java:157) ~[pinot-all-0.5.0-SNAPSHOT-jar-with-dependencies.jar:0.5.0-SNAPSHOT-39a78f3df43ac613663975844e598f07b18bf623]
            at org.apache.pinot.core.query.selection.SelectionOperatorUtils.getDataTableFromRows(SelectionOperatorUtils.java:261) ~[pinot-all-0.5.0-SNAPSHOT-jar-with-dependencies.jar:0.5.0-SNAPSHOT-39a78f3df43ac613663975844e598f07b18bf623]
            at org.apache.pinot.core.operator.blocks.IntermediateResultsBlock.getSelectionResultDataTable(IntermediateResultsBlock.java:348) ~[pinot-all-0.5.0-SNAPSHOT-jar-with-dependencies.jar:0.5.0-SNAPSHOT-39a78f3df43ac613663975844e598f07b18bf623]
            at org.apache.pinot.core.operator.blocks.IntermediateResultsBlock.getDataTable(IntermediateResultsBlock.java:262) ~[pinot-all-0.5.0-SNAPSHOT-jar-with-dependencies.jar:0.5.0-SNAPSHOT-39a78f3df43ac613663975844e598f07b18bf623]
            at org.apache.pinot.core.operator.blocks.InstanceResponseBlock.<init>(InstanceResponseBlock.java:43) ~[pinot-all-0.5.0-SNAPSHOT-jar-with-dependencies.jar:0.5.0-SNAPSHOT-39a78f3df43ac613663975844e598f07b18bf623]
            at org.apache.pinot.core.operator.InstanceResponseOperator.getNextBlock(InstanceResponseOperator.java:37) ~[pinot-all-0.5.0-SNAPSHOT-jar-with-dependencies.jar:0.5.0-SNAPSHOT-39a78f3df43ac613663975844e598f07b18bf623]
            at org.apache.pinot.core.operator.InstanceResponseOperator.getNextBlock(InstanceResponseOperator.java:26) ~[pinot-all-0.5.0-SNAPSHOT-jar-with-dependencies.jar:0.5.0-SNAPSHOT-39a78f3df43ac613663975844e598f07b18bf623]
            at org.apache.pinot.core.operator.BaseOperator.nextBlock(BaseOperator.java:49) ~[pinot-all-0.5.0-SNAPSHOT-jar-with-dependencies.jar:0.5.0-SNAPSHOT-39a78f3df43ac613663975844e598f07b18bf623]
            at org.apache.pinot.core.plan.GlobalPlanImplV0.execute(GlobalPlanImplV0.java:48) ~[pinot-all-0.5.0-SNAPSHOT-jar-with-dependencies.jar:0.5.0-SNAPSHOT-39a78f3df43ac613663975844e598f07b18bf623]
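The NoSuchMethodError on java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer; is the classic symptom of a jar compiled on JDK 9+ (where ByteBuffer overrides position(int) with a covariant ByteBuffer return type) running on a JDK 8 JVM, where only Buffer.position(int) exists. This is a common cause, not confirmed from the thread itself; rebuilding on JDK 8 (or with javac --release 8) avoids it. In code, the portable pattern is to call through Buffer, sketched below (BufferCompat is a hypothetical helper, not Pinot code):

```java
import java.nio.Buffer;
import java.nio.ByteBuffer;

public class BufferCompat {
    // Casting to Buffer makes javac emit Buffer.position(I)Ljava/nio/Buffer;
    // which resolves on both Java 8 and Java 9+ runtimes.
    static void seek(ByteBuffer buf, int pos) {
        ((Buffer) buf).position(pos);
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(16);
        seek(buf, 4);
        System.out.println(buf.position()); // prints 4
    }
}
```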

    Elon

    07/27/2020, 5:00 PM
FYI, on pinot-0.4.0, after deleting a segment which was in ERROR state, the realtime table is no longer ingesting. Is there an API to restart ingestion?

    Elon

    07/27/2020, 8:35 PM
(two untitled attachments)

    Elon

    07/27/2020, 9:35 PM
    Ok, everything's working now, thanks for all the help, much appreciated!

    Elon

    07/27/2020, 9:36 PM
We wanted to create some Grafana dashboards; do you have a generic one that we can use to monitor Pinot (i.e. server disk space for segments, JMX counters for QPS, etc.)?

    Yash Agarwal

    07/30/2020, 6:38 AM
Hey, how can I pass auth during segment pull? It is over HTTP, so I am adding it as user info in the segment URI prefix, but on redirect it does not retain the user info and fails with a 401. Any suggestions?

    Oguzhan Mangir

    07/30/2020, 7:36 PM
Hi, I'm trying to receive data using the Pinot scatter-gather API (pinot.core.transport.{AsyncQueryResponse, QueryRouter, ServerInstance}) in Pinot 0.5.0-SNAPSHOT. I'm running Pinot 0.4.0 locally for now (when I try to bring Pinot up from the master branch, it cannot load the data; there may be a problem there). Could this be a backward-compatibility issue? Error message:
    ERROR org.apache.pinot.core.transport.DataTableHandler - Caught exception while handling response from server: 192.168.2.154_O
    java.lang.NoSuchMethodError: java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;
    	at org.apache.pinot.core.common.datatable.DataTableImplV2.<init>(DataTableImplV2.java:122) ~[classes/:?]
    	at org.apache.pinot.core.common.datatable.DataTableFactory.getDataTable(DataTableFactory.java:35) ~[classes/:?]
    	at org.apache.pinot.core.transport.DataTableHandler.channelRead0(DataTableHandler.java:67) ~[classes/:?]
    	at org.apache.pinot.core.transport.DataTableHandler.channelRead0(DataTableHandler.java:36) ~[classes/:?]
    	at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) ~[netty-all-4.1.42.Final.jar:4.1.42.Final]
    	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-all-4.1.42.Final.jar:4.1.42.Final]
    	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-all-4.1.42.Final.jar:4.1.42.Final]
    	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-all-4.1.42.Final.jar:4.1.42.Final]
    	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:328) [netty-all-4.1.42.Final.jar:4.1.42.Final]
    	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:302) [netty-all-4.1.42.Final.jar:4.1.42.Final]
    	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-all-4.1.42.Final.jar:4.1.42.Final]
    	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-all-4.1.42.Final.jar:4.1.42.Final]
    	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-all-4.1.42.Final.jar:4.1.42.Final]
    	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1422) [netty-all-4.1.42.Final.jar:4.1.42.Final]
    	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-all-4.1.42.Final.jar:4.1.42.Final]
    	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-all-4.1.42.Final.jar:4.1.42.Final]
    	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:931) [netty-all-4.1.42.Final.jar:4.1.42.Final]
    	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-all-4.1.42.Final.jar:4.1.42.Final]
    	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:700) [netty-all-4.1.42.Final.jar:4.1.42.Final]
    	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:635) [netty-all-4.1.42.Final.jar:4.1.42.Final]
    	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:552) [netty-all-4.1.42.Final.jar:4.1.42.Final]
    	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:514) [netty-all-4.1.42.Final.jar:4.1.42.Final]
    	at io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1044) [netty-all-4.1.42.Final.jar:4.1.42.Final]
    	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-all-4.1.42.Final.jar:4.1.42.Final]
    	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.42.Final.jar:4.1.42.Final]
    	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_221]

    Mayank

    07/31/2020, 1:44 AM
<rant> Is it just me, or has IntelliJ become a lot more confused loading Pinot (especially when moving between commits)? It eats up a lot of my time. </rant>

    Mayank

    08/03/2020, 3:55 PM
The controller initializes the PinotFS implementation based on the URI, IIRC. So using a local dir will instantiate the local FS. I am hoping http:// might pick up HDFS; if not, we should fix it.
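For context, which PinotFS the controller instantiates for a given scheme is driven by the storage-factory configs. A sketch of the controller config wiring HDFS, per the pinot-hdfs plugin docs (paths are illustrative and exact keys may differ by version):

```properties
# Map the hdfs:// scheme to the Hadoop PinotFS implementation
pinot.controller.storage.factory.class.hdfs=org.apache.pinot.plugin.filesystem.HadoopPinotFS
pinot.controller.storage.factory.hdfs.hadoop.conf.path=/path/to/hadoop/conf

# Allow the controller to fetch segments over these protocols
pinot.controller.segment.fetcher.protocols=file,http,hdfs
pinot.controller.segment.fetcher.hdfs.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher
```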

    Dan Hill

    08/04/2020, 6:44 PM
I'm hitting an issue where the offline data I loaded into Pinot does not match the query results. I run
    select * from metrics limit 20000
    and I see
    numDocsScanned = 15400
    totalDocs = 17118
    totalDocs matches my raw data size (what I'd expect). The query results match numDocsScanned (which is wrong). I'm happy to share data and schema in a private message.

    Mayank

    08/04/2020, 8:13 PM
Yes, multi-valued BYTES columns are not supported yet, IIRC.

    Elon

    08/05/2020, 12:30 AM
    We are using pinot to store log data and noticed that string columns are truncated at 512 characters. Is there another datatype or setting we should use to increase the length?

    Dan Hill

    08/05/2020, 5:27 AM
I'm about to start writing a daily MapReduce job to prepare segments for offline ingestion. I'm using Flink for the streaming ingestion. Any design tips for using Flink?
• Should I have Flink write the files to S3 and then run LaunchDataIngestionJob using a workflow tool?
• What's the status of the batch plugins? Do they make it easy to encapsulate the client-side parts of LaunchDataIngestionJob? https://docs.pinot.apache.org/plugins/pinot-batch-ingestion
I'm also fine with writing this in Spark if it makes things a lot easier, but I'd prefer Flink to keep the implementation consistent.
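On the first bullet: the workflow-tool approach amounts to invoking LaunchDataIngestionJobCommand with a job spec pointing at the S3 output. A sketch of such a spec, following the shape in the batch-ingestion docs linked above (bucket names, URIs, table name, and input format are all hypothetical):

```yaml
executionFrameworkSpec:
  name: 'standalone'
  segmentGenerationJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentGenerationJobRunner'
  segmentTarPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentTarPushJobRunner'
jobType: SegmentCreationAndTarPush
inputDirURI: 's3://my-bucket/flink-output/2020-08-05/'
includeFileNamePattern: 'glob:**/*.avro'
outputDirURI: 's3://my-bucket/pinot-segments/2020-08-05/'
overwriteOutput: true
pinotFSSpecs:
  - scheme: s3
    className: org.apache.pinot.plugin.filesystem.S3PinotFS
recordReaderSpec:
  dataFormat: 'avro'
  className: 'org.apache.pinot.plugin.inputformat.avro.AvroRecordReader'
tableSpec:
  tableName: 'metrics'
pinotClusterSpecs:
  - controllerURI: 'http://pinot-controller:9000'
```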

    Apoorva Moghey

    08/05/2020, 11:17 AM
    I am getting this error while running
    RealtimeProvisioningHelperCommand

    Elon

    08/05/2020, 11:01 PM
Also, I noticed that we can't do text_match until the text index is created; it looks like the latency could be up to 30 mins.

    Elon

    08/05/2020, 11:41 PM
    For now the workaround is fine for us

    Sidd

    08/05/2020, 11:43 PM
    I can try to make a change sometime

    Pradeep

    08/10/2020, 7:57 PM
Hi, when we run a query individually we see latencies on the order of 3 seconds, but when we run, for example, the same query in parallel, we see latencies on the order of 7-10 seconds for the third query. Do the queries run serially? I have one broker and 2 servers in my setup; also, would adding more servers help parallelize better to optimize query latencies?

    Elon

    08/11/2020, 10:56 PM
    In the docs it just says metric columns "typically" appear in aggregations but in this case dimension columns appear in the aggregations - and we don't see any issue with that. Are there any caveats to aggregating on dimension columns or grouping by metric columns?

    Dan Hill

    08/13/2020, 3:38 AM
    Does Pinot have a way to create an alias to a specific table? I'm thinking about the situation where I want to make a large change to a table and I'll need to recreate it. Can I use an alias and do the swap inside Pinot? Or would I want a layer outside of Pinot to convert this alias table name to a specific Pinot table?

    Andrew First

    08/14/2020, 7:17 PM
Where can I find logs for each query? Trying to troubleshoot slow queries, I tried this but no luck for any of the brokers:
    $ kubectl -n pinot logs pinot-broker-6 broker -f

    Xiang Fu

    08/14/2020, 7:26 PM
Table is required for fetching the needed information, e.g. schema and table configs.

    Pradeep

    08/15/2020, 12:51 AM
Some more questions on the star-tree index: does reload create star-trees for old segments? And how do I verify that the star-tree index has been generated for the segments? Basically, I am not seeing an improvement in query times after reload, so I'm wondering if I am missing something.