# troubleshooting

    Yash Agarwal

    01/08/2021, 7:00 AM
If for a table we have set nullHandlingEnabled to true, and we do a distinct count on a column that has nulls, does it filter out the null values and only show the count of non-null distinct values?
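A hedged sketch of making the intent explicit, assuming your build supports IS NOT NULL predicates once null handling is enabled (table and column names are placeholders):
```
SELECT DISTINCTCOUNT(myColumn)
FROM myTable
WHERE myColumn IS NOT NULL
```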

    Yash Agarwal

    01/09/2021, 1:26 PM
How do I set up deletion of data older than 2 years, in a way that can be triggered from outside?
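For reference, a minimal sketch of time-based retention in the table config (730 days ≈ 2 years); the controller's retention manager purges expired segments on its periodic run, though whether that run can be triggered on demand depends on the build:
```
"segmentsConfig": {
  "retentionTimeUnit": "DAYS",
  "retentionTimeValue": "730"
}
```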

    Rohit Gampa

    01/10/2021, 1:17 PM
Data ingestion stops in a realtime table after the segment flush threshold time. When I check, no new segments are created and the status of the one and only segment is shown as CONSUMING.
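For context, the flush thresholds live in the table's streamConfigs; a hedged sketch with illustrative values:
```
"streamConfigs": {
  "realtime.segment.flush.threshold.time": "6h",
  "realtime.segment.flush.threshold.size": "500000"
}
```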

    vmarchaud

    01/12/2021, 4:29 PM
If that helps, we have open-sourced the plugin here: https://github.com/reelevant-tech/pinot-pubsub-plugin

    Kishore G

    01/12/2021, 4:30 PM
That’s expected with the high-level stream consumer.

    James.Zhao

    01/13/2021, 8:31 AM
@Kishore G Hi Kishore, in recent days I have been trying to install a Pinot cluster according to the online documentation, but I have some questions about the installation process. First, I don't understand the role of the Kafka component in the whole Pinot cluster. Second, when I use the StartServer parameter to start a Pinot server, how do I know whether the server is realtime or offline? Could you give me some guidance? Thank you.

    Yash Agarwal

    01/13/2021, 1:25 PM
When uploading segments to the controller using segment URI push, it puts the segments at the path
```
fake_job_for_testing_time_based_OFFLINE/fake_job_for_testing_time_based_OFFLINE_18597_18597_0
```
whereas when trying to delete the segments, it calls exists on the path
```
fake_job_for_testing_time_based/fake_job_for_testing_time_based_OFFLINE_18597_18597_0
```
which logs
```
Failed to find local segment file for segment
```

    eywek

    01/14/2021, 1:34 PM
Hello, I’m having a weird query issue. When I try to query my cluster (via the Pinot UI) with:
```
SELECT "tmpId" from datasource_5ffdbf421eb80003001818fe
WHERE "name" = "identify" AND "clientId" = "ef8e0112fbac1450776931712bdaad3bb0deb121"
GROUP BY "tmpId"
LIMIT 1
```
The query executes. But with:
```
SELECT "tmpId" from datasource_5ffdbf421eb80003001818fe
WHERE "name" = "identify" AND "clientId" = "3f8e0112fbac1450776931712bdaad3bb0deb121" -- 3f8e0112fbac1450776931712bdaad3bb0deb121
GROUP BY "tmpId"
LIMIT 1
```
    I get the following error:
```
    [
      {
        "errorCode": 200,
        "message": "QueryExecutionError:\norg.antlr.v4.runtime.misc.ParseCancellationException\n\tat org.antlr.v4.runtime.BailErrorStrategy.recoverInline(BailErrorStrategy.java:66)\n\tat org.antlr.v4.runtime.Parser.match(Parser.java:203)\n\tat org.apache.pinot.pql.parsers.PQL2Parser.expression(PQL2Parser.java:828)\n\tat org.apache.pinot.pql.parsers.PQL2Parser.expression(PQL2Parser.java:745)\n\tat org.apache.pinot.pql.parsers.Pql2Compiler.parseToAstNode(Pql2Compiler.java:148)\n\tat org.apache.pinot.pql.parsers.Pql2Compiler.compileToExpressionTree(Pql2Compiler.java:153)\n\tat org.apache.pinot.common.request.transform.TransformExpressionTree.compileToExpressionTree(TransformExpressionTree.java:46)\n\tat org.apache.pinot.broker.requesthandler.BaseBrokerRequestHandler.handleSubquery(BaseBrokerRequestHandler.java:471)\n\tat org.apache.pinot.broker.requesthandler.BaseBrokerRequestHandler.handleRequest(BaseBrokerRequestHandler.java:215)\n\tat org.apache.pinot.broker.api.resources.PinotClientRequest.processSqlQueryPost(PinotClientRequest.java:155)\n\tat sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source)\n\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\tat java.lang.reflect.Method.invoke(Method.java:498)\n\tat org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:52)"
  }
]
```
I don’t really understand the error or why it’s happening; the only thing that changes between the two queries is the clientId value, which starts with ef in the first query and with 3f in the second.
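One hedged observation: in Pinot's SQL dialect double quotes denote identifiers and single quotes denote string literals, so a double-quoted literal that starts with a digit can trip the parser. A sketch of the second query with literal-style quoting:
```
SELECT "tmpId" FROM datasource_5ffdbf421eb80003001818fe
WHERE "name" = 'identify' AND "clientId" = '3f8e0112fbac1450776931712bdaad3bb0deb121'
GROUP BY "tmpId"
LIMIT 1
```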

    Amit Chopra

    01/14/2021, 9:08 PM
Hi, I’m trying to troubleshoot an issue I’m facing. I have a K8s cluster set up with 4 server instances. For the server, I changed replicas to 2 and did a helm upgrade. Even though the servers in K8s have been reduced from 4 to 2, I still see the deleted ones in a bad state in the Pinot UI. Shouldn’t the servers deleted from K8s be deleted from Pinot as well? Secondly, the problem I’m facing is that 2 of the segments are mapped to the deleted servers, and now it is not allowing me to drop the server instances manually either. Those 2 segments are also in a bad state. Ideas?
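A hedged sketch of cleaning up the removed servers via the controller REST API (controller address and instance names are placeholders); segments have to be moved off an instance, e.g. via rebalance, before the drop will succeed:
```
# Untag the removed server, then drop it (names hypothetical)
curl -X PUT "http://controller:9000/instances/Server_pinot-server-3_8098/updateTags?tags="
curl -X DELETE "http://controller:9000/instances/Server_pinot-server-3_8098"
```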

    troywinter

    01/19/2021, 6:39 AM
```
    java.lang.RuntimeException: Caught exception while initializing ControllerFilePathProvider
    	at org.apache.pinot.controller.ControllerStarter.initControllerFilePathProvider(ControllerStarter.java:489) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-8085fb76f8a70eb5bc6326850fcc735fe955d98b]
    	at org.apache.pinot.controller.ControllerStarter.setUpPinotController(ControllerStarter.java:330) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-8085fb76f8a70eb5bc6326850fcc735fe955d98b]
    	at org.apache.pinot.controller.ControllerStarter.start(ControllerStarter.java:287) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-8085fb76f8a70eb5bc6326850fcc735fe955d98b]
    	at org.apache.pinot.tools.service.PinotServiceManager.startController(PinotServiceManager.java:116) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-8085fb76f8a70eb5bc6326850fcc735fe955d98b]
    	at org.apache.pinot.tools.service.PinotServiceManager.startRole(PinotServiceManager.java:91) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-8085fb76f8a70eb5bc6326850fcc735fe955d98b]
    	at org.apache.pinot.tools.admin.command.StartServiceManagerCommand.lambda$startBootstrapServices$0(StartServiceManagerCommand.java:234) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-8085fb76f8a70eb5bc6326850fcc735fe955d98b]
    	at org.apache.pinot.tools.admin.command.StartServiceManagerCommand.startPinotService(StartServiceManagerCommand.java:286) [pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-8085fb76f8a70eb5bc6326850fcc735fe955d98b]
    	at org.apache.pinot.tools.admin.command.StartServiceManagerCommand.startBootstrapServices(StartServiceManagerCommand.java:233) [pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-8085fb76f8a70eb5bc6326850fcc735fe955d98b]
    	at org.apache.pinot.tools.admin.command.StartServiceManagerCommand.execute(StartServiceManagerCommand.java:183) [pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-8085fb76f8a70eb5bc6326850fcc735fe955d98b]
    	at org.apache.pinot.tools.admin.command.StartControllerCommand.execute(StartControllerCommand.java:130) [pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-8085fb76f8a70eb5bc6326850fcc735fe955d98b]
    	at org.apache.pinot.tools.admin.PinotAdministrator.execute(PinotAdministrator.java:164) [pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-8085fb76f8a70eb5bc6326850fcc735fe955d98b]
    	at org.apache.pinot.tools.admin.PinotAdministrator.main(PinotAdministrator.java:184) [pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-8085fb76f8a70eb5bc6326850fcc735fe955d98b]
    Caused by: org.apache.pinot.controller.api.resources.InvalidControllerConfigException: Caught exception while initializing file upload path provider
    	at org.apache.pinot.controller.api.resources.ControllerFilePathProvider.<init>(ControllerFilePathProvider.java:107) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-8085fb76f8a70eb5bc6326850fcc735fe955d98b]
    	at org.apache.pinot.controller.api.resources.ControllerFilePathProvider.init(ControllerFilePathProvider.java:49) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-8085fb76f8a70eb5bc6326850fcc735fe955d98b]
    	at org.apache.pinot.controller.ControllerStarter.initControllerFilePathProvider(ControllerStarter.java:487) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-8085fb76f8a70eb5bc6326850fcc735fe955d98b]
    	... 11 more
    Caused by: java.lang.IllegalStateException: Data directory: <hdfs://xxx:8020/pinot/controller> must be a directory
    	at shaded.com.google.common.base.Preconditions.checkState(Preconditions.java:518) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-8085fb76f8a70eb5bc6326850fcc735fe955d98b]
    	at org.apache.pinot.controller.api.resources.ControllerFilePathProvider.<init>(ControllerFilePathProvider.java:73) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-8085fb76f8a70eb5bc6326850fcc735fe955d98b]
    	at org.apache.pinot.controller.api.resources.ControllerFilePathProvider.init(ControllerFilePathProvider.java:49) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-8085fb76f8a70eb5bc6326850fcc735fe955d98b]
    	at org.apache.pinot.controller.ControllerStarter.initControllerFilePathProvider(ControllerStarter.java:487) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-8085fb76f8a70eb5bc6326850fcc735fe955d98b]
	... 11 more
```
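For comparison, a hedged sketch of the controller settings usually involved when the data directory lives on HDFS (host and paths are placeholders); the error above suggests the configured path either already exists as a non-directory or the hdfs scheme isn't wired to a PinotFS:
```
controller.data.dir=hdfs://namenode:8020/pinot/controller
pinot.controller.storage.factory.class.hdfs=org.apache.pinot.plugin.filesystem.HadoopPinotFS
pinot.controller.storage.factory.hdfs.hadoop.conf.path=/opt/hdfs
pinot.controller.segment.fetcher.protocols=file,http,hdfs
pinot.controller.segment.fetcher.hdfs.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher
```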

    Neer Shay

    01/19/2021, 8:08 PM
Thanks for the feedback @Kishore G & @Xiang Fu. The file is accessible via HTTP, but I guess I'll find another method to get it ingested.

    troywinter

    01/20/2021, 3:49 AM
Does anyone know the reason for this warning: "HK2 service reification failed for [javax.servlet.ServletConfig] with an exception"? Is this just a warning we can safely ignore?

    Varun Srivastava

    01/20/2021, 7:53 AM
1. Can a normal (non-upsert) table have a primary key, i.e. is defining "primaryKeyColumns": ["event_id"] in the schema enough? 2. For an upsert table with a composite primary key like "primaryKeyColumns": ["event_id", "eventName"], where the Kafka partition key is just event_id (only one of the primary key fields), should that be fine?
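For reference, a minimal schema sketch for the composite-key case (field names from the question; the schemaName is a placeholder). Since the Kafka partition key is a component of the primary key, all records sharing a full primary key still land on the same partition, which is what upsert requires:
```
{
  "schemaName": "events",
  "primaryKeyColumns": ["event_id", "eventName"],
  "dimensionFieldSpecs": [
    { "name": "event_id", "dataType": "STRING" },
    { "name": "eventName", "dataType": "STRING" }
  ]
}
```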

    Neer Shay

    01/20/2021, 9:16 AM
Hi, is there a way to configure the S3 endpoint in the plugin (https://docs.pinot.apache.org/basics/data-import/pinot-file-system/amazon-s3)? The doc only describes configuration for region, access keys, and ACL.
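In case it helps, some S3PinotFS builds accept an endpoint override alongside the documented keys; a hedged controller-side sketch (the endpoint key and value are assumptions to verify against your Pinot version):
```
pinot.controller.storage.factory.class.s3=org.apache.pinot.plugin.filesystem.S3PinotFS
pinot.controller.storage.factory.s3.region=us-east-1
pinot.controller.storage.factory.s3.endpoint=http://my-s3-compatible-host:9000
```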

    Matt

    01/20/2021, 2:52 PM
Hello, what are the basic steps to troubleshoot a cluster? My cluster status sometimes shows Bad in the UI and then recovers quickly. However, search and ingestion are working, there are no issues with CPU or memory, and all logs look OK other than a few errors due to bad queries. So how do I check whether everything is all right?
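A hedged starting point: poll the REST health endpoints and compare Helix's intended vs. actual state for a table (addresses and table name are placeholders):
```
curl "http://controller:9000/health"
curl "http://broker:8099/health"
# A segment is healthy when these two agree:
curl "http://controller:9000/tables/myTable/idealstate"
curl "http://controller:9000/tables/myTable/externalview"
```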

    vmarchaud

    01/22/2021, 1:30 PM
Hey, quick question: we have a realtime segment marked as completed and we would like to move it to an offline table. However, the endpoint to download the segment (GET /segments/{tableName}/{segmentName}) is trying to fetch it from the deep store. I was thinking of downloading it and uploading it to the offline table directly; how could I achieve this? Thanks
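A hedged sketch of the manual route (controller address, table, and segment names are placeholders); the download endpoint serves the segment tar, which can then be posted to the offline table:
```
curl -o mySegment.tar.gz "http://controller:9000/segments/myTable_REALTIME/mySegment"
curl -F segment=@mySegment.tar.gz "http://controller:9000/v2/segments?tableName=myTable_OFFLINE"
```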

    Will Briggs

    01/22/2021, 4:56 PM
    I’m running into a situation where Pinot is using a star-tree index to satisfy a query in one case, but not in another, and the queries are almost identical. This one does not use the star tree:
```
SELECT dimension, SUM(metric) AS totalMetrics FROM myTable WHERE otherDimension='filterValue' AND eventTimestamp >= cast(now() - 172800000 as long) GROUP BY 1 ORDER BY 2 DESC LIMIT 10
```
    This one uses the star tree:
```
SELECT dimension, SUM(metric) AS totalMetrics FROM myTable WHERE otherDimension='filterValue' AND eventTimestamp >= 1611161288000 GROUP BY 1 ORDER BY 2 DESC LIMIT 10
```
It looks like the use of a dynamically computed timestamp value is confusing the optimizer somehow? The eventTimestamp column is not part of my star-tree index in either case.

    Matt

    01/22/2021, 5:39 PM
Hello, is there any way to delete the table config without deleting segments, and then create the same table with the same segments from disk, for realtime? I happened to execute a wrong clusterconfig REST call and broke the broker UI. I tried updating it again but no luck, so I'm planning to recreate the table config without losing data.
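A heavily hedged sketch of one way to snapshot and re-create the config without touching segment data (note that the GET wraps the config under a REALTIME key, so it needs unwrapping before the POST; addresses and names are placeholders):
```
curl "http://controller:9000/tables/myTable" -o tableconfig.json
# extract the inner REALTIME config from tableconfig.json, then:
curl -X POST -H "Content-Type: application/json" -d @tableconfig.json "http://controller:9000/tables"
```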

    Elon

    01/23/2021, 1:28 AM
Does anyone here impose a limit via pinot.broker.query.response.limit? We are thinking of limiting it to 1k and were wondering what other Pinot installations use.
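For reference, that cap is a broker configuration property; a minimal sketch of the proposed 1k setting in the broker conf:
```
pinot.broker.query.response.limit=1000
```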

    Ken Krugler

    01/23/2021, 6:38 PM
    We’ve run into an issue with loading segments on our server, where we need more direct memory when building inverted indexes. The stack trace looks like:
```
    Caused by: java.lang.OutOfMemoryError: Direct buffer memory
    	at java.nio.Bits.reserveMemory(Bits.java:694) ~[?:1.8.0_275]
    	at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123) ~[?:1.8.0_275]
    	at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311) ~[?:1.8.0_275]
    	at org.apache.pinot.core.segment.memory.PinotByteBuffer.allocateDirect(PinotByteBuffer.java:38) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    	at org.apache.pinot.core.segment.memory.PinotDataBuffer.allocateDirect(PinotDataBuffer.java:116) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    	at org.apache.pinot.core.segment.creator.impl.inv.OffHeapBitmapInvertedIndexCreator.createTempBuffer(OffHeapBitmapInvertedIndexCreator.java:254) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    	at org.apache.pinot.core.segment.creator.impl.inv.OffHeapBitmapInvertedIndexCreator.seal(OffHeapBitmapInvertedIndexCreator.java:152) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    	at org.apache.pinot.core.segment.index.loader.invertedindex.InvertedIndexHandler.createInvertedIndexForColumn(InvertedIndexHandler.java:125) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    	at org.apache.pinot.core.segment.index.loader.invertedindex.InvertedIndexHandler.createInvertedIndices(InvertedIndexHandler.java:73) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    	at org.apache.pinot.core.segment.index.loader.SegmentPreProcessor.process(SegmentPreProcessor.java:109) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    	at org.apache.pinot.core.indexsegment.immutable.ImmutableSegmentLoader.load(ImmutableSegmentLoader.java:99) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    	at org.apache.pinot.core.data.manager.offline.OfflineTableDataManager.addSegment(OfflineTableDataManager.java:52) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    	at org.apache.pinot.server.starter.helix.HelixInstanceDataManager.addOfflineSegment(HelixInstanceDataManager.java:122) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    	at org.apache.pinot.server.starter.helix.SegmentFetcherAndLoader.addOrReplaceOfflineSegment(SegmentFetcherAndLoader.java:116) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
```
We’ve bumped direct memory to 5 GB but are still hitting these exceptions. We can keep increasing it (or create the inverted indexes while building the segments), but wanted to confirm whether it’s expected to need this much direct memory. Thanks.
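For reference, the knob involved here is the JVM's direct-memory cap on the server process; a sketch with illustrative values:
```
# Server JVM options (values illustrative)
-Xms4G -Xmx8G -XX:MaxDirectMemorySize=10G
```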

    Buchi Reddy

    01/24/2021, 12:53 AM
Hey folks, a bunch of segments are in BAD state in Pinot, and reloading all the segments from the UI didn’t help. I'm trying to debug it further but didn’t see any warnings or errors in the logs. Any hints on where to check? Any usual suspects?
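A hedged first step: diff the ideal state against the external view to see which replicas disagree, then reset the affected segment if your build exposes a reset endpoint (names are placeholders):
```
curl "http://controller:9000/tables/myTable/idealstate"
curl "http://controller:9000/tables/myTable/externalview"
# if available in your version:
curl -X POST "http://controller:9000/segments/myTable_OFFLINE/mySegment/reset"
```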

    Elon

    01/26/2021, 2:36 AM
I can't seem to select a virtual column from the Pinot query console; is it supported? i.e.
```
select $segmentName from <table> limit 10
```

    Ken Krugler

    01/26/2021, 3:16 PM
    I’m trying to use the map-reduce job to build segments. In HadoopSegmentGenerationJobRunner.packPluginsToDistributedCache, there’s this code:
```
File pluginsTarGzFile = new File(PINOT_PLUGINS_TAR_GZ);
try {
  TarGzCompressionUtils.createTarGzFile(pluginsRootDir, pluginsTarGzFile);
} catch (IOException e) {
  LOGGER.error("Failed to tar plugins directory", e);
  throw new RuntimeException(e);
}
job.addCacheArchive(pluginsTarGzFile.toURI());
```
This creates a pinot-plugins.tar.gz file in the Flink distribution directory, which is on my server. But as the Hadoop DistributedCache documentation states, “The DistributedCache assumes that the files specified via urls are already present on the FileSystem at the path specified by the url and are accessible by every machine in the cluster.”
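A hedged sketch of the workaround that follows from that doc: stage the tarball on a cluster-visible FileSystem before registering it (the staging path is a placeholder):
```
// Copy the locally created plugins tarball to HDFS so every node
// can fetch it from the DistributedCache.
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

Path localTarGz = new Path(pluginsTarGzFile.toURI());
Path stagedTarGz = new Path("hdfs:///tmp/pinot-staging/pinot-plugins.tar.gz"); // placeholder
FileSystem fs = stagedTarGz.getFileSystem(job.getConfiguration());
fs.copyFromLocalFile(localTarGz, stagedTarGz);
job.addCacheArchive(stagedTarGz.toUri());
```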

    Matt

    01/26/2021, 10:11 PM
All Pinot Server pods keep crashing with the following error. Has anyone come across this before?
```
    [Times: user=0.02 sys=0.00, real=0.00 secs]
    #
    # A fatal error has been detected by the Java Runtime Environment:
    #
    #  SIGBUS (0x7) at pc=0x00007f104649b6ff, pid=1, tid=0x00007ee665d06700
    #
    # JRE version: OpenJDK Runtime Environment (8.0_282-b08) (build 1.8.0_282-b08)
    # Java VM: OpenJDK 64-Bit Server VM (25.282-b08 mixed mode linux-amd64 compressed oops)
    # Problematic frame:
    # C  [libc.so.6+0x15c6ff]
    #
    # Core dump written. Default location: /opt/pinot/core or core.1
    #
    # An error report file with more information is saved as:
# /opt/pinot/hs_err_pid1.log
```

    troywinter

    01/27/2021, 3:59 AM
Hi team, I’m getting a ClassNotFoundException when running a SegmentCreationAndUriPush job; the org.apache.pinot.plugin.ingestion.batch.standalone.SegmentGenerationJobRunner class cannot be found. Below is my job config:
```
    executionFrameworkSpec:
      name: 'standalone'
      segmentGenerationJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentGenerationJobRunner'
      segmentTarPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentTarPushJobRunner'
      segmentUriPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentUriPushJobRunner'
    jobType: SegmentCreationAndUriPush
    inputDirURI: '/root/fetrace_biz/data/'
    includeFileNamePattern: 'glob:**/*'
    outputDirURI: '<hdfs://pinot/controller/fetrace_biz/>'
    overwriteOutput: true
    pinotFSSpecs:
      - scheme: hdfs
        className: org.apache.pinot.plugin.filesystem.HadoopPinotFS
        configs:
          hadoop.conf.path: '/opt/hdfs/'
      - scheme: file
        className: org.apache.pinot.spi.filesystem.LocalPinotFS
    recordReaderSpec:
      dataFormat: 'csv'
      className: 'org.apache.pinot.plugin.inputformat.json.JSONRecordReader'
    tableSpec:
      tableName: 'fetrace_biz'
      schemaURI: '<http://10.168.0.88:31645/tables/fetrace_biz/schema>'
      tableConfigURI: '<http://10.168.0.88:31645/tables/fetrace_biz>'
    pinotClusterSpecs:
      - controllerURI: '<http://10.168.0.88:31645>'
```
The exception stack is:
```
    2021/01/27 03:53:03.942 ERROR [PinotAdministrator] [main] Exception caught:
    java.lang.RuntimeException: Failed to create IngestionJobRunner instance for class - org.apache.pinot.plugin.ingestion.batch.standalone.SegmentGenerationJobRunner
    	at org.apache.pinot.spi.ingestion.batch.IngestionJobLauncher.kickoffIngestionJob(IngestionJobLauncher.java:137) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-255202ec4fc7df2283f7c275d8e9025a26cf3274]
    	at org.apache.pinot.spi.ingestion.batch.IngestionJobLauncher.runIngestionJob(IngestionJobLauncher.java:117) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-255202ec4fc7df2283f7c275d8e9025a26cf3274]
    	at org.apache.pinot.tools.admin.command.LaunchDataIngestionJobCommand.execute(LaunchDataIngestionJobCommand.java:123) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-255202ec4fc7df2283f7c275d8e9025a26cf3274]
    	at org.apache.pinot.tools.admin.PinotAdministrator.execute(PinotAdministrator.java:164) [pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-255202ec4fc7df2283f7c275d8e9025a26cf3274]
    	at org.apache.pinot.tools.admin.PinotAdministrator.main(PinotAdministrator.java:184) [pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-255202ec4fc7df2283f7c275d8e9025a26cf3274]
    Caused by: java.lang.ClassNotFoundException: org.apache.pinot.plugin.ingestion.batch.standalone.SegmentGenerationJobRunner
    	at java.net.URLClassLoader.findClass(URLClassLoader.java:382) ~[?:1.8.0_275]
    	at java.lang.ClassLoader.loadClass(ClassLoader.java:418) ~[?:1.8.0_275]
    	at org.apache.pinot.spi.plugin.PluginClassLoader.loadClass(PluginClassLoader.java:80) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-255202ec4fc7df2283f7c275d8e9025a26cf3274]
    	at org.apache.pinot.spi.plugin.PluginManager.createInstance(PluginManager.java:293) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-255202ec4fc7df2283f7c275d8e9025a26cf3274]
    	at org.apache.pinot.spi.plugin.PluginManager.createInstance(PluginManager.java:264) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-255202ec4fc7df2283f7c275d8e9025a26cf3274]
    	at org.apache.pinot.spi.plugin.PluginManager.createInstance(PluginManager.java:245) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-255202ec4fc7df2283f7c275d8e9025a26cf3274]
    	at org.apache.pinot.spi.ingestion.batch.IngestionJobLauncher.kickoffIngestionJob(IngestionJobLauncher.java:135) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-255202ec4fc7df2283f7c275d8e9025a26cf3274]
	... 4 more
```
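Two hedged things to check: a ClassNotFoundException for the standalone runner usually means the batch-ingestion plugins aren't on the plugin path (e.g. plugins.dir when invoking pinot-admin), and separately the recordReaderSpec above mixes formats. A sketch of a consistent reader spec for JSON input:
```
recordReaderSpec:
  dataFormat: 'json'
  className: 'org.apache.pinot.plugin.inputformat.json.JSONRecordReader'
```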

    Harold Lim

    01/27/2021, 10:23 PM
Hi. I'm trying to follow the steps here: https://docs.pinot.apache.org/basics/data-import/pinot-stream-ingestion/import-from-apache-kafka. Does the Pinot schema need to exactly match the corresponding Kafka topic? Does Pinot support flattening the data? Currently we have JSON-format messages in Kafka, and I'm looking at setting up Pinot to ingest data from this topic. The dimensions are currently nested inside a "labels" dictionary of the Kafka message.
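On flattening: the schema doesn't have to mirror the topic one-to-one; ingestion transforms can lift nested fields into columns. A hedged sketch using a JSON-path transform (the column name and path are placeholders, and jsonPathString availability depends on the Pinot version):
```
"ingestionConfig": {
  "transformConfigs": [
    {
      "columnName": "region",
      "transformFunction": "jsonPathString(labels, '$.region')"
    }
  ]
}
```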

    Suraj

    01/28/2021, 6:55 PM
Our brokers have been running into direct-memory allocation OOM errors. We have allocated 128M. We noticed that the brokers don't crash but catch the exception and log it; the only symptom we see is query timeouts. We would like to understand: a) what is the direct memory used for? b) are there any guidelines for sizing it?
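For what it's worth, broker direct memory largely backs the Netty buffers that hold responses from the servers, so it scales with response size and query concurrency; 128M is a small cap. A sketch of raising it (value illustrative):
```
-XX:MaxDirectMemorySize=1G
```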

    Kishore G

    01/29/2021, 7:32 PM
    Can you please post this in an issue?

    Kishore G

    01/29/2021, 7:34 PM
Are we setting that JVM flag in our helm chart, or is it something you had on your side?

    Ravi Teja Kanumula

    01/29/2021, 8:44 PM
Hi folks, I’m a newbie on Pinot, trying to add a new dependency to support service-principal-based access to ADLS Gen2:
```
<dependency>
  <groupId>com.azure</groupId>
  <artifactId>azure-identity</artifactId>
  <version>1.2.2</version>
</dependency>
```
But it’s giving convergence errors:
```
    [WARNING] 
    Dependency convergence error for org.codehaus.woodstox:stax2-api:3.1.4 paths to dependency are:
    +-org.apache.pinot:pinot-adls:0.7.0-SNAPSHOT
      +-com.azure:azure-identity:1.2.2
        +-com.azure:azure-core:1.12.0
          +-com.fasterxml.jackson.dataformat:jackson-dataformat-xml:2.9.8
            +-org.codehaus.woodstox:stax2-api:3.1.4
    and
    +-org.apache.pinot:pinot-adls:0.7.0-SNAPSHOT
      +-com.azure:azure-identity:1.2.2
        +-com.azure:azure-core:1.12.0
          +-com.fasterxml.jackson.dataformat:jackson-dataformat-xml:2.9.8
            +-com.fasterxml.woodstox:woodstox-core:5.0.3
              +-org.codehaus.woodstox:stax2-api:3.1.4
    and
    +-org.apache.pinot:pinot-adls:0.7.0-SNAPSHOT
      +-com.azure:azure-identity:1.2.2
        +-org.linguafranca.pwdb:KeePassJava2:2.1.4
          +-org.linguafranca.pwdb:KeePassJava2-simple:2.1.4
            +-com.fasterxml:aalto-xml:1.0.0
              +-org.codehaus.woodstox:stax2-api:4.0.0
    
    [WARNING] 
    Dependency convergence error for com.nimbusds:oauth2-oidc-sdk:7.4 paths to dependency are:
    +-org.apache.pinot:pinot-adls:0.7.0-SNAPSHOT
      +-com.azure:azure-identity:1.2.2
        +-com.microsoft.azure:msal4j:1.8.0
          +-com.nimbusds:oauth2-oidc-sdk:7.4
    and
    +-org.apache.pinot:pinot-adls:0.7.0-SNAPSHOT
      +-com.azure:azure-identity:1.2.2
        +-com.nimbusds:oauth2-oidc-sdk:7.1.1
    
    [WARNING] 
    Dependency convergence error for com.microsoft.azure:msal4j:1.8.0 paths to dependency are:
    +-org.apache.pinot:pinot-adls:0.7.0-SNAPSHOT
      +-com.azure:azure-identity:1.2.2
        +-com.microsoft.azure:msal4j:1.8.0
    and
    +-org.apache.pinot:pinot-adls:0.7.0-SNAPSHOT
      +-com.azure:azure-identity:1.2.2
        +-com.microsoft.azure:msal4j-persistence-extension:1.0.0
          +-com.microsoft.azure:msal4j:1.4.0
    
    [WARNING] 
    Dependency convergence error for net.java.dev.jna:jna-platform:5.5.0 paths to dependency are:
    +-org.apache.pinot:pinot-adls:0.7.0-SNAPSHOT
      +-com.azure:azure-identity:1.2.2
        +-com.microsoft.azure:msal4j-persistence-extension:1.0.0
          +-net.java.dev.jna:jna-platform:5.5.0
    and
    +-org.apache.pinot:pinot-adls:0.7.0-SNAPSHOT
      +-com.azure:azure-identity:1.2.2
        +-net.java.dev.jna:jna-platform:5.6.0
    
    [WARNING] Rule 1: org.apache.maven.plugins.enforcer.DependencyConvergence failed with message:
Failed while enforcing releasability. See above detailed error message.
```
I tried a few different things but none worked. What do we do in this case? Thank you.
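A hedged sketch of the usual fix: satisfy the enforcer by pinning each converging artifact to a single version (here the newer of each reported pair) in dependencyManagement:
```
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.codehaus.woodstox</groupId>
      <artifactId>stax2-api</artifactId>
      <version>4.0.0</version>
    </dependency>
    <dependency>
      <groupId>com.nimbusds</groupId>
      <artifactId>oauth2-oidc-sdk</artifactId>
      <version>7.4</version>
    </dependency>
    <dependency>
      <groupId>com.microsoft.azure</groupId>
      <artifactId>msal4j</artifactId>
      <version>1.8.0</version>
    </dependency>
    <dependency>
      <groupId>net.java.dev.jna</groupId>
      <artifactId>jna-platform</artifactId>
      <version>5.6.0</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```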