# troubleshooting
  • Danko Andruszkiw (06/12/2023, 1:55 PM)
    hi all, I'm trying to set up HDFS as the deep store. I already have a Hadoop cluster up and running and I want to use it as the store for Pinot, but I'm having problems with the config. This is what I currently have in my server config; I don't see any notes on how to configure Pinot to talk to a separate Hadoop cluster, as the example config seems to point to a local Hadoop install ???
    Copy code
    # Deep Store = WIP
    realtime.segment.serverUploadToDeepStore = true
    pinot.server.instance.segment.store.uri=<URI of segment store> <<hdfs://hdfs-svr01/data/hdfs/nn>> <=== or does this need to be local???
    pinot.server.instance.enable.split.commit=true
    pinot.server.storage.factory.class.hdfs=org.apache.pinot.plugin.filesystem.HadoopPinotFS
    
    pinot.server.storage.factory.hdfs.hadoop.conf.path=/etc/hadoop/conf <--- does this mean need hadoop or hadoop client installed on each server node???
    pinot.server.segment.fetcher.protocols=file,http,hdfs
    pinot.server.segment.fetcher.hdfs.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher
    
    pinot.server.segment.fetcher.hdfs.hadoop.kerberos.principle=<your kerberos principal> <=== hadoop security needed ???
    pinot.server.segment.fetcher.hdfs.hadoop.kerberos.keytab=<your kerberos keytab> <=== hadoop security needed ???
    pinot.server.grpc.enable=true
    pinot.server.grpc.port=8090
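    For what it's worth, a hedged sketch of how those properties could look when pointing at a remote Hadoop cluster; the namenode address hdfs-svr01:8020 and the paths below are illustrative assumptions, not values confirmed in the thread. As far as I understand, a full Hadoop install on each server node shouldn't be needed: hadoop.conf.path only has to point at a directory containing the Hadoop client config files (core-site.xml / hdfs-site.xml) for the remote cluster, and the kerberos principal/keytab entries only matter if that cluster is Kerberized.
    ```
    # server config (sketch, remote HDFS cluster; values are placeholders)
    pinot.server.instance.segment.store.uri=hdfs://hdfs-svr01:8020/pinot/segments
    pinot.server.storage.factory.class.hdfs=org.apache.pinot.plugin.filesystem.HadoopPinotFS
    pinot.server.storage.factory.hdfs.hadoop.conf.path=/opt/pinot/hadoop-conf
    pinot.server.segment.fetcher.protocols=file,http,hdfs
    pinot.server.segment.fetcher.hdfs.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher

    # /opt/pinot/hadoop-conf/core-site.xml (sketch)
    # <configuration>
    #   <property>
    #     <name>fs.defaultFS</name>
    #     <value>hdfs://hdfs-svr01:8020</value>
    #   </property>
    # </configuration>
    ```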
  • abhinav wagle (06/12/2023, 7:22 PM)
    What's the right way to remove a Pinot server? The underlying pod is already removed, but when I try Drop Instance in the UI, I see the error below:
    Copy code
    Failed to drop instance. Failed to drop instance Server_pinot-dev-server-19 - Instance Server_pinot-dev-server-19.pinot-dev-server-headless.de-nrt-pinot.svc.cluster.local_8098 exists in ideal state for table1_OFFLINE
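    The error means the instance is still referenced in the ideal state of table1_OFFLINE, so it has to be removed from the table's segment assignment before the drop can succeed. A hedged sketch of the usual sequence against the controller REST API (host name and exact parameters are assumptions; verify them in your controller's Swagger UI):
    ```bash
    # sketch; verify endpoints/params against your Pinot version's Swagger UI
    CONTROLLER="http://pinot-controller:9000"   # hypothetical host
    INSTANCE="Server_pinot-dev-server-19.pinot-dev-server-headless.de-nrt-pinot.svc.cluster.local_8098"

    # 1. if the dead server is still tagged for the tenant, untag it first
    #    (PUT /instances/{instanceName}/updateTags) so it is no longer an assignment candidate
    # 2. rebalance the table so its segments are reassigned to the remaining servers
    curl -X POST "$CONTROLLER/tables/table1/rebalance?type=OFFLINE&reassignInstances=true&dryRun=false"
    # 3. once the instance no longer appears in any ideal state, drop it
    curl -X DELETE "$CONTROLLER/instances/$INSTANCE"
    ```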
  • parth (06/12/2023, 9:04 PM)
    hi all! I feel our pinot cluster is in an inconsistent state. we changed the number of servers and brokers and some instances now appear as dead on the dashboard. the controllers continuously throw "Failed to update the segment lineage" logs. if we have reached an inconsistent state where the cluster cannot find some segments, how do we fix the cluster? Thanks!
  • abhinav wagle (06/13/2023, 2:01 AM)
    Hello, for one of my aggregation queries I see a spike in pinot_server_memory_directBufferUsage_Value and a pod restart after that. What's the right way to add more direct buffer memory, is it by bumping -XX:MaxDirectMemorySize in jvmOpts? Also, is there any guidance around this?
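    For reference, a sketch of what that could look like via the Helm chart's server jvmOpts (sizes below are placeholder assumptions; the rule of thumb is that heap plus direct memory has to fit inside the pod's memory limit):
    ```yaml
    # values.yaml (sketch; sizes are illustrative only, assumes the standard Pinot Helm chart)
    server:
      jvmOpts: >-
        -Xms8G -Xmx8G
        -XX:MaxDirectMemorySize=16G
        -XX:+UseG1GC -XX:MaxGCPauseMillis=200
    ```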
  • Ehsan Irshad (06/13/2023, 3:20 AM)
    Hi Team. Is there a way to rebalance servers with bounded resources? Especially in production while we are serving queries; rebalancing caused CPU to be utilised at 100% for up to 30 mins or so.
  • Deena Dhayalan (06/13/2023, 11:14 AM)
    Do we have support for pointing the temp directory to a file system other than the local file system? Options like HADOOP?
  • Tommaso Peresson (06/13/2023, 5:03 PM)
    Hey, while ingesting, the servers went into a crash loop. The cause is:
    Copy code
    Added or replaced segment: 265f53d53e8fe849ea0b245c6ed3ae5d_gz of table: TEST_HOURLY_OFFLINE
    # There is insufficient memory for the Java Runtime Environment to continue
    # Native memory allocation (malloc) failed to allocate 2097152 bytes for AllocateHeap
    JAVA_OPTS for the servers are:
    -Xms15G -Xmx30G -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:ActiveProcessorCount=40 -Djute.maxbuffer=10000000 -Xlog:gc*:file=/opt/pinot/gc-pinot-server.log
    I'm running Pinot in a GKE cluster. This happens when the server tries to bring OFFLINE segments back ONLINE. My total number of segments is around ~100k * 2.
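    Since the failure is a native (malloc) allocation while growing the heap rather than a Java heap OOM, the usual suspicion is that -Xmx plus off-heap usage exceeds what the node/pod actually has. A sketch of more container-friendly flags, with purely illustrative sizes that assume roughly 40 GiB available to the pod:
    ```
    # sketch only; sizes are hypothetical and must be tuned to the actual pod limit
    -Xms16G -Xmx16G
    -XX:MaxDirectMemorySize=12G
    -XX:+UseG1GC -XX:MaxGCPauseMillis=200
    -XX:ActiveProcessorCount=40
    -Xlog:gc*:file=/opt/pinot/gc-pinot-server.log
    ```
    Setting -Xms equal to -Xmx keeps the heap from trying to grow after startup, which is where the failed AllocateHeap call in the log happens.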
  • Ashwin Raja (06/13/2023, 7:21 PM)
    hi! we're seeing something strange with timestamp indices and segment pruning. We have a timestamp index like this:
    Copy code
    {
            "name": "blockTimestamp",
            "encodingType": "DICTIONARY",
            "indexType": "TIMESTAMP",
            "indexTypes": [
              "TIMESTAMP"
            ],
            "timestampConfig": {
              "granularities": [
                "SECOND",
                "MINUTE",
                "HOUR",
                "DAY",
                "WEEK",
                "MONTH",
                "YEAR"
              ]
            },
            "indexes": null
          },
    What we're trying to do is count the number of rows in a single day, in our case 2021-05-11. So we do a query like this, with a date range:
    Copy code
    select $segmentName,
           DATETRUNC('day', "blockTimestamp") "blockTimestamp_day",
           count("transactionFrom") "Count Transaction From Address"
    from "13059af7-8eab-4196-a7ea-1a170d73c02e"
    where blockTimestamp_day >= fromDateTime('2021-05-11', 'yyyy-MM-dd')
      and blockTimestamp_day < fromDateTime('2021-05-12', 'yyyy-MM-dd')
    group by "blockTimestamp_day", $segmentName
    order by "blockTimestamp_day"
    We get these results, which look correct:
    Copy code
    13059af7-8eab-4196-a7ea-1a170d73c02e_OFFLINE_1620454828000_1620702391000_61_2296cf5b-2896-4ff8-bb57-0405bd69a7dc	2021-05-11 00:00:00.0	34699
    13059af7-8eab-4196-a7ea-1a170d73c02e_OFFLINE_1620702401000_1620964585000_62_9f7f4c0f-5d06-4909-99c0-7c009ff53385	2021-05-11 00:00:00.0	247050
    But if instead we change this to an = query, which should return exactly the same results, we only pick up 1 segment:
    Copy code
    13059af7-8eab-4196-a7ea-1a170d73c02e_OFFLINE_1620702401000_1620964585000_62_9f7f4c0f-5d06-4909-99c0-7c009ff53385	2021-05-11 00:00:00.0	247050
    Clearly the first query is correct and the second is incorrect, since we're not picking up segments/values that we should be, right?
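    For clarity, the = variant was presumably something along these lines (reconstructed from the description above, not copied from the thread):
    ```sql
    select $segmentName,
           DATETRUNC('day', "blockTimestamp") "blockTimestamp_day",
           count("transactionFrom") "Count Transaction From Address"
    from "13059af7-8eab-4196-a7ea-1a170d73c02e"
    where blockTimestamp_day = fromDateTime('2021-05-11', 'yyyy-MM-dd')
    group by "blockTimestamp_day", $segmentName
    order by "blockTimestamp_day"
    ```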
  • eywek (06/14/2023, 1:49 PM)
    Hello, I was wondering if there is any way (or a preferred hack) to store the result of a query in another Pinot table? I'm trying to pre-compute some heavy queries and I would like to store the result directly into a Pinot table. Thank you!
  • Alexander Vivas (06/14/2023, 1:53 PM)
    Hey guys, good afternoon. I am trying to set up a new realtime table and preload it using the segments generated by another realtime table; is this possible without going to a hybrid table? If so, where can I find the docs for this? (By the way, the new table would have some columns the previous table does not, although they share all the other columns.) Thanks a lot!
  • Santosh Kumar Sharma (06/14/2023, 6:02 PM)
    Hi All, I am trying to ingest parquet files very similar to the one mentioned here, but using the standalone framework and the simple file scheme. But I am getting the exception below:
    java.lang.RuntimeException: Failed to create IngestionJobRunner instance for class - null
    Details below
    Copy code
    Tarring segment from: /var/folders/p9/0nhr33vd7m90vtvzzbx9t2x80000gn/T/pinot-056eaf50-cedc-4f14-9390-b083641c2622/output/events_OFFLINE_1633114228000_1633114232000_0 to: /var/folders/p9/0nhr33vd7m90vtvzzbx9t2x80000gn/T/pinot-056eaf50-cedc-4f14-9390-b083641c2622/output/events_OFFLINE_1633114228000_1633114232000_0.tar.gz
    Size for segment: events_OFFLINE_1633114228000_1633114232000_0, uncompressed: 3.66K, compressed: 1.29K
    Trying to create instance for class null
    Got exception to kick off standalone data ingestion job - 
    java.lang.RuntimeException: Failed to create IngestionJobRunner instance for class - null
    	at org.apache.pinot.spi.ingestion.batch.IngestionJobLauncher.kickoffIngestionJob(IngestionJobLauncher.java:145) ~[pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.spi.ingestion.batch.IngestionJobLauncher.runIngestionJob(IngestionJobLauncher.java:130) ~[pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.tools.admin.command.LaunchDataIngestionJobCommand.execute(LaunchDataIngestionJobCommand.java:130) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.tools.Command.call(Command.java:33) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.tools.Command.call(Command.java:29) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at picocli.CommandLine.executeUserObject(CommandLine.java:1953) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at picocli.CommandLine.access$1300(CommandLine.java:145) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at picocli.CommandLine$RunLast.executeUserObjectOfLastSubcommandWithSameParent(CommandLine.java:2352) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at picocli.CommandLine$RunLast.handle(CommandLine.java:2346) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at picocli.CommandLine$RunLast.handle(CommandLine.java:2311) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at picocli.CommandLine$AbstractParseResultHandler.execute(CommandLine.java:2179) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at picocli.CommandLine.execute(CommandLine.java:2078) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.tools.admin.PinotAdministrator.execute(PinotAdministrator.java:171) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.tools.admin.PinotAdministrator.main(PinotAdministrator.java:202) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    Caused by: java.lang.NullPointerException
    	at org.apache.pinot.spi.plugin.PluginManager.createInstance(PluginManager.java:320) ~[pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.spi.plugin.PluginManager.createInstance(PluginManager.java:306) ~[pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.spi.ingestion.batch.IngestionJobLauncher.kickoffIngestionJob(IngestionJobLauncher.java:143) ~[pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	... 13 more
    java.lang.RuntimeException: Failed to create IngestionJobRunner instance for class - null
    	at org.apache.pinot.spi.ingestion.batch.IngestionJobLauncher.kickoffIngestionJob(IngestionJobLauncher.java:145)
    	at org.apache.pinot.spi.ingestion.batch.IngestionJobLauncher.runIngestionJob(IngestionJobLauncher.java:130)
    	at org.apache.pinot.tools.admin.command.LaunchDataIngestionJobCommand.execute(LaunchDataIngestionJobCommand.java:130)
    	at org.apache.pinot.tools.Command.call(Command.java:33)
    	at org.apache.pinot.tools.Command.call(Command.java:29)
    	at picocli.CommandLine.executeUserObject(CommandLine.java:1953)
    	at picocli.CommandLine.access$1300(CommandLine.java:145)
    	at picocli.CommandLine$RunLast.executeUserObjectOfLastSubcommandWithSameParent(CommandLine.java:2352)
    	at picocli.CommandLine$RunLast.handle(CommandLine.java:2346)
    	at picocli.CommandLine$RunLast.handle(CommandLine.java:2311)
    	at picocli.CommandLine$AbstractParseResultHandler.execute(CommandLine.java:2179)
    	at picocli.CommandLine.execute(CommandLine.java:2078)
    	at org.apache.pinot.tools.admin.PinotAdministrator.execute(PinotAdministrator.java:171)
    	at org.apache.pinot.tools.admin.PinotAdministrator.main(PinotAdministrator.java:202)
    Caused by: java.lang.NullPointerException
    	at org.apache.pinot.spi.plugin.PluginManager.createInstance(PluginManager.java:320)
    	at org.apache.pinot.spi.plugin.PluginManager.createInstance(PluginManager.java:306)
    	at org.apache.pinot.spi.ingestion.batch.IngestionJobLauncher.kickoffIngestionJob(IngestionJobLauncher.java:143)
    	... 13 more
    any help will be greatly appreciated, TIA.
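    The "class - null" NPE usually indicates that the job spec's executionFrameworkSpec doesn't name a runner class for the chosen jobType, so the launcher asks the PluginManager to instantiate a null class name. For the standalone framework it typically looks like the sketch below (class names as used elsewhere in this channel; adjust jobType to whatever the job actually needs):
    ```yaml
    # job-spec.yaml (sketch)
    executionFrameworkSpec:
      name: 'standalone'
      segmentGenerationJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentGenerationJobRunner'
      segmentTarPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentTarPushJobRunner'
      segmentUriPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentUriPushJobRunner'
      segmentMetadataPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentMetadataPushJobRunner'
    jobType: SegmentCreationAndTarPush
    ```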
  • Raveendra Yerraguntla (06/16/2023, 2:10 AM)
    My first attempt at a "delete from table where id = null" criterion ended in a parser error. What is the smart way of deleting millions of rows based on a filtering criterion from the SQL manager?
  • Lee Wei Hern Jason (06/18/2023, 10:32 AM)
    Hi Team, we are facing java.lang.OutOfMemoryError: Java heap space on our offline servers (servers that only hold COMPLETED segments). We realised that this occurs whenever we perform a rebalance: the Java heap memory hits 100% and throws the OOM error. From the docs, it looks like the heap is used for metadata management and query processing. We don't think that metadata management would take up a lot of heap. Is there any reason why rebalancing would cause the heap to OOM?
  • Pham Duong (06/19/2023, 4:34 AM)
    Pinot version: 0.12.1, Java JDK: 11. I tried to ingest data in realtime from Kafka to Pinot with 2 cases:
    Case 1: It works like a charm: {"event":{"header": "v1","body":{"gender":"M", "region":"London"}}}
    Case 2: Error. When I use data containing a special character like "Â" (a Vietnamese character), e.g. {"event":{"header": "v1","body":{"gender":"M", "region":"Âmerica"}}}, I get an error: org.apache.pinot.shaded.com.fasterxml.jackson.core.JsonParseException: Invalid UTF-8 start byte 0x96. It seems the problem is caused by the com.fasterxml.jackson library. How can I resolve it? Thank you so much.
  • Mingmin Xu (06/19/2023, 5:00 PM)
    Hello team, are there any settings to build the Pinot project on a Mac with the M1 chip? It passes with mvn clean install -DskipTests, however some tests in pinot-segment-spi, pinot-segment-local and pinot-core cannot pass, with an error like the one below.
    Copy code
    mvn clean install -Pbin-dist
    
    java.lang.UnsatisfiedLinkError: 'long xerial.larray.impl.LArrayNative.mmap(long, int, long, long)'
            at xerial.larray.impl.LArrayNative.mmap(Native Method)
            at xerial.larray.mmap.MMapBuffer.<init>(MMapBuffer.java:94)
    I suspect it's related to xerial/larray, which doesn't mention Mac M1/M2 support; I don't know how to find a workaround.
  • Ehsan Irshad (06/20/2023, 5:51 AM)
    Hi Team, we need some help in trying to understand the logs from a server that is producing a lot of exceptions. We would also like to understand what we can do to mitigate the issue. Logs attached in thread.
  • Jessica Stewart (06/20/2023, 8:21 PM)
    Does anyone know how to prevent error messages from the API from being truncated? Example below:
    Copy code
    Invalid table config: app_analytics_enterprise with error: Missing required creator property 'tableType' (index 1)
     at [Source: (String)"{"tableName": "app_analytics_enterprise", "offline": {"tableName": "app_analytics_enterprise_OFFLINE", "tableType": "OFFLINE", "segmentsConfig": {"timeType": "DAYS", "schemaName": "app_analytics_enterprise", "retentionTimeUnit": "DAYS", "retentionTimeValue": "400", "replication": "1", "timeColumnName": "date_str", "minimizeDataMovement": true, "segmentPushType": "APPEND"}, "tenants": {"broker": "DefaultTenant", "server": "DefaultTenant"}, "tableIndexConfig": {"nullHandlingEnabled": true, "invert"[truncated 3445 chars]; line: 1, column: 3945] (through reference chain: org.apache.pinot.spi.config.table.TableConfig["tableType"])
  • Jatin (06/21/2023, 7:11 AM)
    Hey guys, I now want to ingest Parquet from S3 (as of now I'm reading CSV), so I made changes replacing csv with parquet wherever it appears, but data is not being ingested into the Pinot table. When I check the minion task manager under SegmentGenerationAndPushTask --> Task --> Subtask, I see: 1) info --> Exception happened in running task: org/apache/hadoop/fs/Path, and 2) Progress --> No status from worker: Minion_00.00.00.555_0011. Got status: TASK_ERROR from Helix. What could the issue be?
  • Johan Venant (06/21/2023, 12:58 PM)
    Hi, I added some columns to one of my schemas and then reloaded all the segments (REALTIME table, not the consuming segments). But Pinot queries still complain with the message: "There are xxx invalid segment/s. This usually means that they were created with older schema." Did I miss something?
  • Mike Beyer (06/22/2023, 1:00 AM)
    Trying to upload a segment using Docker but getting a null pointer exception when it tries to create the tar and push:
  • Ehsan Irshad (06/22/2023, 6:00 AM)
    Hi, does anyone know how to pass headers to QueryRunner, like an authentication token? https://github.com/apache/pinot/blob/bad7106d5c714edf9d52b63dd4428d15cabd79c8/pinot-tools/src/main/java/org/apache/pinot/tools/perf/QueryRunner.java
  • Alexander Vivas (06/22/2023, 9:25 AM)
    Hey guys, good morning. I am trying to ingest data into a REALTIME table from some csv files we have stored in a Google Cloud Storage bucket. So far my job-spec.yaml looks like this:
    Copy code
    # executionFrameworkSpec: Defines ingestion jobs to be running.
    executionFrameworkSpec:
      name: 'standalone'
      segmentGenerationJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentGenerationJobRunner'
      segmentTarPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentTarPushJobRunner'
      segmentUriPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentUriPushJobRunner'
      segmentMetadataPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentMetadataPushJobRunner'
      
    jobType: SegmentCreationAndMetadataPush
    inputDirURI: 'gs://$BUCKET_NAME/bigquery-loads/'
    includeFileNamePattern: 'glob:**/*.csv'
    outputDirURI: 'gs://$BUCKET_NAME/controller/data/'
    overwriteOutput: true
    segmentCreationJobParallelism: 4
    
    pinotFSSpecs:
    
      - className: org.apache.pinot.plugin.filesystem.GcsPinotFS
        configs:
          projectId: '$GOOGLE_PROJECT_NAME'
          gcpKey: '/opt/pinot/deployment/gcs/key.json'
    
    recordReaderSpec:
      dataFormat: 'csv'
      className: 'org.apache.pinot.plugin.inputformat.csv.CSVRecordReader'
      configClassName: 'org.apache.pinot.plugin.inputformat.csv.CSVRecordReaderConfig'
    
    tableSpec:
      tableName: 'events'
      schemaURI: 'http://test-pinot-controller:9000/tables/events_REALTIME/schema'
      tableConfigURI: 'http://test-pinot-controller:9000/tables/events_REALTIME'
    
    pinotClusterSpecs:
      - controllerURI: 'http://test-pinot-controller:9000'
    
    pushJobSpec:
      pushParallelism: 4
      pushAttempts: 2
      pushRetryIntervalMillis: 1000
      copyToDeepStoreForMetadataPush: false
    But then I see this error when executing the ingestion job:
    Copy code
    ERROR [LaunchDataIngestionJobCommand] [main] Got exception to kick off standalone data ingestion job - 
    java.lang.RuntimeException: Failed to decode table config from JSON - '{"REALTIME":{"tableName":"events","tableType":"REALTIME","segmentsConfig":{...} ... '
    	at org.apache.pinot.common.segment.generation.SegmentGenerationUtils.getTableConfig(SegmentGenerationUtils.java:146) ~[pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.plugin.ingestion.batch.standalone.SegmentGenerationJobRunner.init(SegmentGenerationJobRunner.java:158) ~[pinot-batch-ingestion-standalone-0.12.0-shaded.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.spi.ingestion.batch.IngestionJobLauncher.kickoffIngestionJob(IngestionJobLauncher.java:148) ~[pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.spi.ingestion.batch.IngestionJobLauncher.runIngestionJob(IngestionJobLauncher.java:129) ~[pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.tools.admin.command.LaunchDataIngestionJobCommand.execute(LaunchDataIngestionJobCommand.java:130) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.tools.Command.call(Command.java:33) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.tools.Command.call(Command.java:29) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at picocli.CommandLine.executeUserObject(CommandLine.java:1953) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at picocli.CommandLine.access$1300(CommandLine.java:145) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at picocli.CommandLine$RunLast.executeUserObjectOfLastSubcommandWithSameParent(CommandLine.java:2352) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at picocli.CommandLine$RunLast.handle(CommandLine.java:2346) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at picocli.CommandLine$RunLast.handle(CommandLine.java:2311) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at picocli.CommandLine$AbstractParseResultHandler.execute(CommandLine.java:2179) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at picocli.CommandLine.execute(CommandLine.java:2078) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.tools.admin.PinotAdministrator.execute(PinotAdministrator.java:171) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.tools.admin.PinotAdministrator.main(PinotAdministrator.java:202) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    Caused by: org.apache.pinot.shaded.com.fasterxml.jackson.databind.exc.MismatchedInputException: Missing required creator property 'tableName' (index 0)
     at [Source: UNKNOWN; line: -1, column: -1] (through reference chain: org.apache.pinot.spi.config.table.TableConfig["tableName"])
    	at org.apache.pinot.shaded.com.fasterxml.jackson.databind.exc.MismatchedInputException.from(MismatchedInputException.java:59) ~[pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.shaded.com.fasterxml.jackson.databind.DeserializationContext.reportInputMismatch(DeserializationContext.java:1615) ~[pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.shaded.com.fasterxml.jackson.databind.deser.impl.PropertyValueBuffer._findMissing(PropertyValueBuffer.java:194) ~[pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.shaded.com.fasterxml.jackson.databind.deser.impl.PropertyValueBuffer.getParameters(PropertyValueBuffer.java:160) ~[pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.shaded.com.fasterxml.jackson.databind.deser.ValueInstantiator.createFromObjectWith(ValueInstantiator.java:288) ~[pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.shaded.com.fasterxml.jackson.databind.deser.impl.PropertyBasedCreator.build(PropertyBasedCreator.java:202) ~[pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.shaded.com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeUsingPropertyBased(BeanDeserializer.java:520) ~[pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.shaded.com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeFromObjectUsingNonDefault(BeanDeserializerBase.java:1405) ~[pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.shaded.com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:362) ~[pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.shaded.com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:195) ~[pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.shaded.com.fasterxml.jackson.databind.deser.DefaultDeserializationContext.readRootValue(DefaultDeserializationContext.java:322) ~[pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.shaded.com.fasterxml.jackson.databind.ObjectReader._bindAndClose(ObjectReader.java:2033) ~[pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.shaded.com.fasterxml.jackson.databind.ObjectReader.readValue(ObjectReader.java:1669) ~[pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.spi.utils.JsonUtils.jsonNodeToObject(JsonUtils.java:216) ~[pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.common.segment.generation.SegmentGenerationUtils.getTableConfig(SegmentGenerationUtils.java:144) ~[pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	... 15 more
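    The decode failure is because the payload fetched from tableConfigURI is the table config wrapped under its type ({"REALTIME": {...}}) rather than a bare TableConfig, so there is no top-level tableName, which is exactly what Jackson complains about. A quick way to confirm what the job is fetching (sketch):
    ```bash
    # inspect what the job spec's tableConfigURI actually returns (sketch)
    curl -s "http://test-pinot-controller:9000/tables/events_REALTIME" | head -c 300
    # -> {"REALTIME":{"tableName":"events","tableType":"REALTIME","segmentsConfig":{...}}}
    ```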
  • Vivardhan Devaki (06/22/2023, 11:03 AM)
    Hello team, I am working on a custom decoder plugin (one that implements the StreamMessageDecoder interface) in order to 'flatten' one record into multiple records. As a POC I have created a simple implementation of this plugin and I am trying to install it in our Pinot instance (0.10.0). Our Pinot instance is set up on Kubernetes, and we pull the plugin JAR from a Nexus repository into the /opt/pinot/plugins directory of the Pinot server. But when I try to do this I get the following error from the Pinot server:
    Copy code
    WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
    ERROR StatusLogger Unrecognized format specifier [d]
    ERROR StatusLogger Unrecognized conversion specifier [d] starting at position ERROR StatusLogger Unrecognized format specifier [thread]
    ERROR StatusLogger Unrecognized conversion specifier [thread] starting at position ERROR StatusLogger Unrecognized format specifier [level]
    ERROR StatusLogger Unrecognized conversion specifier [level] starting at position ERROR StatusLogger Unrecognized format specifier [logger]
    ERROR StatusLogger Unrecognized conversion specifier [logger] starting at position ERROR StatusLogger Unrecognized format specifier [msg]
    ERROR StatusLogger Unrecognized conversion specifier [msg] starting at position ERROR StatusLogger Unrecognized format specifier [n]
    ERROR StatusLogger Unrecognized conversion specifier [n] starting at position ERROR StatusLogger Reconfiguration failed: No configuration found for 'Default' at 'null' in 'null'
    Exception in thread "main" java.lang.ExceptionInInitializerError
    	at org.apache.pinot.tools.admin.command.StartKafkaCommand.<init>(StartKafkaCommand.java:
    	at org.apache.pinot.tools.admin.PinotAdministrator.<clinit>(PinotAdministrator.java:
    Caused by: java.util.NoSuchElementException
    	at java.base/java.util.ServiceLoader$
    	at java.base/java.util.ServiceLoader$
    	at java.base/java.util.ServiceLoader$
    	at org.apache.pinot.tools.utils.KafkaStarterUtils.getKafkaConnectorPackageName(KafkaStarterUtils.java:	at org.apache.pinot.tools.utils.KafkaStarterUtils.<clinit>(KafkaStarterUtils.java:	... 2 more
    I verified that the JAR is being pulled correctly from the Nexus repository by mounting it into the /opt/pinot/data directory instead, where I am able to see the JAR. I have also tried both versions of the JAR, with dependencies and without dependencies. Not sure what I am missing here.
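    For reference, a hedged sketch of how the plugin directory is usually wired at startup: plugins.dir points at the directory scanned for plugins, and plugins.include (if set) restricts which plugins get loaded. Both are worth double-checking against the 0.10.0 startup scripts, and the plugin name below is purely hypothetical:
    ```bash
    # sketch: extra JVM options on the server start command (plugin name is hypothetical)
    JAVA_OPTS="$JAVA_OPTS -Dplugins.dir=/opt/pinot/plugins -Dplugins.include=pinot-kafka-2.0,my-flatten-decoder"
    ```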
  • Mike Beyer (06/22/2023, 5:19 PM)
    @Mayank -- I think the streaming ingestion example needs to be updated: the Kafka CLI is in the /usr/bin/ directory, not /opt/kafka/bin/
  • Mike Beyer (06/22/2023, 5:24 PM)
    I also need to call it w/o the .sh suffix
  • Mike Beyer (06/22/2023, 5:32 PM)
    finally -- --zookeeper is deprecated: this line worked for me:
    docker exec -t main-kafka-1 /usr/bin/kafka-topics --bootstrap-server localhost:9092 --create --topic transcript-topic --partitions 1 --replication-factor 1
  • Sonit Rathi (06/23/2023, 5:50 AM)
    Hi team, one of the partitions is not creating segments in Pinot. I have tried pause/resume, force commit, and rebalance. There are 3 partitions for the topic and it is consuming only two. Please help.
  • Abhishek Dubey (06/23/2023, 6:45 AM)
    Hi Team, we're using the same schema in Pinot production for 2 use-cases, one of which is external customer-facing and hence critical. The other is internal; it sees frequent additions of attributes and history loads to the schema, and hence a data refresh happens every month. During the data refresh, data gets loaded into older partitions, and if the other use-case is reading those partitions it can create issues, even though use-case 1 doesn't require those new attributes or history. How can we ensure uninterrupted data availability (without downtime) for use-case 1 when the data refresh happens for use-case 2? Is any read replica possible?
  • 전이섭 (06/23/2023, 9:28 AM)
    Hi Team. An error occurs when creating a segment for an offline table (using Spark 3). I would like some tips to solve the problem below.
    Copy code
    java.lang.IllegalStateException: Forward index disabled column COLUMN_NAME must have a dictionary
    I have some dimension fields as strings and I am getting this error for those dimension fields. I haven't created any indexes for testing. As far as I know, if I don't set any index, the forward index is created automatically by default. However, according to the error message, the forward index is disabled. Why? How can I solve this error? Table configuration:
    Copy code
    {
      "OFFLINE": {
        "tableName": "xx_OFFLINE",
        "tableType": "OFFLINE",
        "segmentsConfig": {
          "schemaName": "xx",
          "replication": "1",
          "replicasPerPartition": "1",
          "timeColumnName": "order_created_at",
          "segmentPushFrequency": "HOURLY",
          "segmentPushType": "APPEND",
          "minimizeDataMovement": false
        },
        "tenants": {
          "broker": "DefaultTenant",
          "server": "DefaultTenant"
        },
        "tableIndexConfig": {
          "invertedIndexColumns": [],
          "noDictionaryColumns": [],
          "enableDynamicStarTreeCreation": false,
          "aggregateMetrics": false,
          "nullHandlingEnabled": false,
          "optimizeDictionary": false,
          "optimizeDictionaryForMetrics": false,
          "noDictionarySizeRatioThreshold": 0,
          "rangeIndexColumns": [],
          "rangeIndexVersion": 2,
          "autoGeneratedInvertedIndex": false,
          "createInvertedIndexDuringSegmentGeneration": false,
          "sortedColumn": [],
          "bloomFilterColumns": [],
          "loadMode": "MMAP",
          "onHeapDictionaryColumns": [],
          "varLengthDictionaryColumns": [],
          "enableDefaultStarTree": false
        },
        "metadata": {},
        "quota": {},
        "routing": {},
        "query": {},
        "ingestionConfig": {
          "segmentTimeValueCheck": true,
          "continueOnError": false,
          "rowTimeValueCheck": false
        },
        "isDimTable": false
      }
    }
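    For context, the forward index is normally only disabled through an explicit fieldConfigList entry, roughly like the sketch below (illustrative only; COLUMN_NAME stands in for the failing column), so it is worth checking whether the table config and schema the Spark job actually picks up contain something similar:
    ```json
    {
      "fieldConfigList": [
        {
          "name": "COLUMN_NAME",
          "encodingType": "DICTIONARY",
          "indexTypes": ["INVERTED"],
          "properties": {
            "forwardIndexDisabled": "true"
          }
        }
      ]
    }
    ```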
  • Soo (06/23/2023, 10:45 AM)
    Hello, has anyone recently built the Tableau connector? I would like to know which jars we need to copy. I tried a few options based on some discussions in this channel but was unable to make it work. I get the following error. I am using Tableau 2023.1 and the latest Pinot, which should be 0.12.
    Copy code
    Bad Connection: Tableau could not connect to the data source. Error Code: 1CA83880