# troubleshooting
  • a

    Alice

    09/27/2022, 1:34 PM
    Hi team, I have an issue. Could you help? Pinot stopped consuming Kafka data, even though new messages keep arriving in Kafka.
    Copy code
    2022/09/27 13:20:48.886 INFO [LLRealtimeSegmentDataManager_table__0__224__20220927T0712Z] [telemetry__0__224__20220927T0712Z] Consumed 0 events from (rate:0.0/s), currentOffset=261332557, numRowsConsumedSoFar=633915, numRowsIndexedSoFar=633915
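    (A first diagnostic that is often suggested for stalled consumption, sketched here with curl against the controller REST API; the controller host is a placeholder and the table/segment names are taken from the log line above. consumingSegmentsInfo reports per-partition consumer state and offsets, and resetting a consuming segment forces the server to rebuild its Kafka consumer.)
    Copy code
    # Check what the servers report for the consuming segments (offsets, consumer state)
    curl "http://<controller-host>:9000/tables/telemetry/consumingSegmentsInfo"

    # If a partition is stuck, resetting its consuming segment recreates the consumer
    curl -X POST "http://<controller-host>:9000/segments/telemetry_REALTIME/telemetry__0__224__20220927T0712Z/reset"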
  • n

    Nagendra Gautham Gondi

    09/27/2022, 7:18 PM
    Hi, until yesterday everything was fine and I was able to ingest data into a realtime table. However, today I started facing an issue where the existing tables had some corrupted segments showing as BAD, and newly created tables with pre-defined segments are also showing in a BAD state. I followed the troubleshooting steps mentioned in previous threads, including reloading the segments, resetting the segments, deleting the segments, rebalancing, etc. None of them restarted the ingestion process. I also deleted the entire cluster and re-configured everything, but I am still facing the same issue even after creating the table successfully. Server exception:
    Copy code
    Caught exception in state transition from OFFLINE -> ONLINE for resource: caseData_REALTIME,
    Controller:
    Copy code
    Reading segments debug info from servers: [Server_pinot-server-0.pinot-server-headless.pinot-quickstart.svc.cluster.local_8098] for table: caseData_REALTIME
    Server: Server_pinot-server-0.pinot-server-headless.pinot-quickstart.svc.cluster.local_8098 returned error: 404
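    (When segments flip to BAD, a usual next step is to compare the ideal state with the external view and to pull the per-segment debug info, which includes the server-side error behind the state transition failure; a rough sketch with curl, controller host as a placeholder and endpoint parameters worth double-checking against the Swagger UI.)
    Copy code
    # What Helix wants (ideal state) vs. what the servers report (external view)
    curl "http://<controller-host>:9000/tables/caseData/idealstate"
    curl "http://<controller-host>:9000/tables/caseData/externalview"

    # Per-segment debug info for the realtime table, including server-side errors
    curl "http://<controller-host>:9000/debug/tables/caseData?type=REALTIME"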
  • m

    Mohit Garg4628

    09/28/2022, 4:30 AM
    Hi, we are using the latest Pinot version and are trying to run a query using the V2 engine. We are getting the following error:
    Copy code
    [ {
      "message": "SQLParsingError
    java.lang.RuntimeException Error composing query plan for: select catalog_id from catalog_views_test
        at org.apache.pinot.query.QueryEnvironment.planQuery(QueryEnvironment.java:131)
        at org.apache.pinot.broker.requesthandler.MultiStageBrokerRequestHandler.handleRequest(MultiStageBrokerRequestHandler.java:147)
        at org.apache.pinot.broker.requesthandler.MultiStageBrokerRequestHandler.handleRequest(MultiStageBrokerRequestHandler.java:125)
        at org.apache.pinot.broker.requesthandler.BrokerRequestHandler.handleRequest(BrokerRequestHandler.java:47)
    ...
    Caused by: java.lang.NumberFormatException: null
        at java.base/java.lang.Integer.parseInt(Integer.java:614)
        at java.base/java.lang.Integer.parseInt(Integer.java:770)
        at org.apache.pinot.core.transport.ServerInstance.<init>(ServerInstance.java:63)
        at org.apache.pinot.query.routing.WorkerInstance.<init>(WorkerInstance.java:40)",
      "errorCode": 150
    } ]
    Please help with this. We are exploring the join feature of Pinot. Thanks
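    (The NumberFormatException: null inside ServerInstance.<init> is the kind of failure you get when the V2 engine cannot find its port settings on the instances, i.e. the multi-stage engine configs were not applied. As a rough sketch, based on how the 0.11 multi-stage docs describe enabling it (exact keys and ports should be checked against the docs), the cluster/broker/server configs need something like the following, followed by a restart:)
    Copy code
    pinot.multistage.engine.enabled=true
    pinot.server.instance.currentDataTableVersion=4
    pinot.query.server.port=8421
    pinot.query.runner.port=8442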
  • m

    Mayank

    09/28/2022, 4:31 AM
    @Rong R ^^
  • e

    Edgaras Kryževičius

    09/28/2022, 6:52 AM
    Hi, I am trying to run a Spark 3.2 ingestion job with Pinot 0.11.0, but I keep getting this error:
    Copy code
    Caused by: java.lang.ClassNotFoundException: org.apache.pinot.plugin.ingestion.batch.spark.SparkSegmentGenerationJobRunner
    Here is my spark-submit command:
    Copy code
    spark-submit \
    --class org.apache.pinot.tools.admin.command.LaunchDataIngestionJobCommand \
    --master local \
    --deploy-mode client \
    --conf "spark.driver.extraJavaOptions=-Dplugins.dir=${PINOT_DISTRIBUTION_DIR}/plugins" \
    --conf "spark.driver.extraClassPath=${PINOT_DISTRIBUTION_DIR}/plugins-external/pinot-batch-ingestion/pinot-batch-ingestion-spark-3.2/pinot-batch-ingestion-spark-3.2-${PINOT_VERSION}-shaded.jar:${PINOT_DISTRIBUTION_DIR}/lib/pinot-all-${PINOT_VERSION}-jar-with-dependencies.jar:${PINOT_DISTRIBUTION_DIR}/plugins/pinot-file-system/pinot-adls/pinot-adls-${PINOT_VERSION}-shaded.jar:${PINOT_DISTRIBUTION_DIR}/plugins/pinot-input-format/pinot-parquet/pinot-parquet-${PINOT_VERSION}-shaded.jar" \
    --conf "spark.executor.extraClassPath=${PINOT_DISTRIBUTION_DIR}/plugins-external/pinot-batch-ingestion/pinot-batch-ingestion-spark-3.2/pinot-batch-ingestion-spark-3.2-${PINOT_VERSION}-shaded.jar:${PINOT_DISTRIBUTION_DIR}/lib/pinot-all-${PINOT_VERSION}-jar-with-dependencies.jar:${PINOT_DISTRIBUTION_DIR}/plugins/pinot-file-system/pinot-adls/pinot-adls-${PINOT_VERSION}-shaded.jar:${PINOT_DISTRIBUTION_DIR}/plugins/pinot-input-format/pinot-parquet/pinot-parquet-${PINOT_VERSION}-shaded.jar" \
    local://${PINOT_DISTRIBUTION_DIR}/lib/pinot-all-${PINOT_VERSION}-jar-with-dependencies.jar -jobSpecFile ${PINOT_DISTRIBUTION_DIR}/spark_job_spec.yaml
    Here is my spark_job_spec.yaml file:
    Copy code
    executionFrameworkSpec:
      name: 'spark'
      segmentGenerationJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.spark.SparkSegmentGenerationJobRunner'
      segmentTarPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.spark.SparkSegmentTarPushJobRunner'
      segmentUriPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.spark.SparkSegmentUriPushJobRunner'
      segmentMetadataPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.spark.SparkSegmentMetadataPushJobRunner'
      extraConfigs:
        stagingDir: /path/to/staging
    jobType: SegmentCreationAndTarPush
    inputDirURI: '/path/to/input'
    outputDirURI: '/path/to/output'
    overwriteOutput: true
    pinotFSSpecs:
        - scheme: adl2
          className: org.apache.pinot.plugin.filesystem.ADLSGen2PinotFS
          configs:
            accountName: 'account-name'
            accessKey: 'sharedAccessKey'
            fileSystemName: 'fs-name'
    recordReaderSpec:
        dataFormat: 'parquet'
        className: 'org.apache.pinot.plugin.inputformat.parquet.ParquetNativeRecordReader'
    tableSpec:
        tableName: 'spire'
    pinotClusterSpecs:
        - controllerURI: '<http://50.107.051.240:9000>'
    ✅ 1
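    (The Spark 3.2 batch-ingestion plugin ships its runners under the spark3 package, which is what the later job spec in this channel uses, so a likely fix is to point the executionFrameworkSpec at those class names; a sketch, everything else in the job spec unchanged:)
    Copy code
    executionFrameworkSpec:
      name: 'spark'
      segmentGenerationJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.spark3.SparkSegmentGenerationJobRunner'
      segmentTarPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.spark3.SparkSegmentTarPushJobRunner'
      segmentUriPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.spark3.SparkSegmentUriPushJobRunner'
      segmentMetadataPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.spark3.SparkSegmentMetadataPushJobRunner'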
  • p

    Piyush Chauhan

    09/28/2022, 10:52 AM
    Getting this very frequently for all the tables.
  • e

    Edgaras Kryževičius

    09/28/2022, 11:47 AM
    I am working on the airlineStats example with Pinot 0.11.0 and trying to run a Spark 3.2 ingestion job. The default example works, but when I change inputDirURI to ADLS instead of the local file system and change the PinotFSSpecs scheme, I start getting this error:
    Copy code
    Caused by: java.lang.IllegalStateException: PinotFS for scheme: abfs has not been initialized
    This is spark command I am running:
    Copy code
    spark-submit \
    --class org.apache.pinot.tools.admin.command.LaunchDataIngestionJobCommand \
    --master local \
    --deploy-mode client \
    --conf "spark.driver.extraJavaOptions=-Dplugins.dir=${PINOT_DISTRIBUTION_DIR}/plugins" \
    --conf "spark.driver.extraClassPath=${PINOT_DISTRIBUTION_DIR}/plugins-external/pinot-batch-ingestion/pinot-batch-ingestion-spark-3.2/pinot-batch-ingestion-spark-3.2-${PINOT_VERSION}-shaded.jar:${PINOT_DISTRIBUTION_DIR}/lib/pinot-all-${PINOT_VERSION}-jar-with-dependencies.jar:${PINOT_DISTRIBUTION_DIR}/plugins/pinot-file-system/pinot-adls/pinot-adls-${PINOT_VERSION}-shaded.jar:${PINOT_DISTRIBUTION_DIR}/plugins/pinot-input-format/pinot-parquet/pinot-parquet-${PINOT_VERSION}-shaded.jar" \
    --conf "spark.executor.extraClassPath=${PINOT_DISTRIBUTION_DIR}/plugins-external/pinot-batch-ingestion/pinot-batch-ingestion-spark-3.2/pinot-batch-ingestion-spark-3.2-${PINOT_VERSION}-shaded.jar:${PINOT_DISTRIBUTION_DIR}/lib/pinot-all-${PINOT_VERSION}-jar-with-dependencies.jar:${PINOT_DISTRIBUTION_DIR}/plugins/pinot-file-system/pinot-adls/pinot-adls-${PINOT_VERSION}-shaded.jar:${PINOT_DISTRIBUTION_DIR}/plugins/pinot-input-format/pinot-parquet/pinot-parquet-${PINOT_VERSION}-shaded.jar" \
    local://${PINOT_DISTRIBUTION_DIR}/lib/pinot-all-${PINOT_VERSION}-jar-with-dependencies.jar -jobSpecFile ${PINOT_DISTRIBUTION_DIR}/SparkIngestionJob.yaml
    SparkIngestionJob.yaml:
    Copy code
    executionFrameworkSpec:
      name: 'spark'
      segmentGenerationJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.spark3.SparkSegmentGenerationJobRunner'
      segmentTarPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.spark3.SparkSegmentTarPushJobRunner'
      segmentUriPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.spark3.SparkSegmentUriPushJobRunner'
      segmentMetadataPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.spark3.SparkSegmentMetadataPushJobRunner'
    
      extraConfigs:
        stagingDir: examples/batch/airlineStats/staging
    
    jobType: SegmentCreationAndTarPush
    
    inputDirURI: '<abfs://fs@accountname/...>'
    includeFileNamePattern: 'glob:**/*.avro'
    
    outputDirURI: 'examples/batch/airlineStats/segments'
    
    overwriteOutput: true
    pinotFSSpecs:
        - scheme: adl2
          className: org.apache.pinot.plugin.filesystem.ADLSGen2PinotFS
          configs:
            accountName: '..'
            accessKey: '..'
            fileSystemName: '..'
    
    recordReaderSpec:
      dataFormat: 'avro'
      className: 'org.apache.pinot.plugin.inputformat.avro.AvroRecordReader'
    
    tableSpec:
      tableName: 'airlineStats'
      schemaURI: '<http://20.207.206.121:9000/tables/airlineStats/schema>'
      tableConfigURI: '<http://20.207.206.121:9000/tables/airlineStats>'
    
    segmentNameGeneratorSpec:
      type: normalizedDate
      configs:
        segment.name.prefix: 'airlineStats_batch'
        exclude.sequence.id: true
    
    pinotClusterSpecs:
      - controllerURI: '<http://20.207.206.121:9000>'
    
    pushJobSpec:
      pushParallelism: 2
      pushAttempts: 2
      pushRetryIntervalMillis: 1000
    I am also attaching my values.yml file, which is used to deploy Pinot using helm.
    values.yaml
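    (The job spec registers a PinotFS only for the adl2 scheme, while inputDirURI uses abfs://, so no filesystem ever gets initialized for that scheme. One way to line them up, sketched below, is to register the abfs scheme against the same ADLSGen2PinotFS; the other is to rewrite inputDirURI to use adl2://. Account details stay as placeholders.)
    Copy code
    pinotFSSpecs:
        - scheme: abfs
          className: org.apache.pinot.plugin.filesystem.ADLSGen2PinotFS
          configs:
            accountName: '..'
            accessKey: '..'
            fileSystemName: '..'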
  • t

    Tommaso Peresson

    09/28/2022, 2:50 PM
    Hello, I'm running Pinot on GKE and currently the CPU usage never ramps up at query time, whereas on a bare-metal installation with the same setup it was at 100% all the time. Do you know why this would be? I'm running 0.11.0
  • k

    Ken Krugler

    09/28/2022, 3:25 PM
    We recently updated our Pinot cluster from a patched version of 0.9 to 0.10. The following query now returns different (and incorrect) results:
    Copy code
    SELECT sum(metric) AS sumMetric, key  
    FROM table 
    WHERE dim1 = 'xx' AND dim2 >= 19144 AND dim2 <= 19173 
    AND dim3 NOT IN ('yy', 'zz') 
    GROUP BY key ORDER BY sumMetric DESC LIMIT 3
  • t

    Thomas Steinholz

    09/28/2022, 3:26 PM
    hi all, running into some issues getting the RealtimeToOfflineSegmentsTask running… I’ve been following the guide and added the task config, but the task stays at the NOT_STARTED status with a {} task config in the task view, and gives a 404 error when trying to run. Any idea what is not correctly configured?
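    (Two things that commonly leave a RealtimeToOfflineSegmentsTask at NOT_STARTED are the controller's periodic task scheduler being disabled (controller.task.scheduler.enabled=true, plus a task frequency) and the task config not sitting under task.taskTypeConfigsMap in the realtime table config. A rough sketch of the table-config side, period values as placeholders:)
    Copy code
    "task": {
      "taskTypeConfigsMap": {
        "RealtimeToOfflineSegmentsTask": {
          "bucketTimePeriod": "1d",
          "bufferTimePeriod": "1d"
        }
      }
    }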
  • n

    Nizar Hejazi

    09/28/2022, 8:26 PM
    Hello, when running the following Presto query on top of the Presto-Pinot connector:
    SELECT id FROM role_with_company WHERE (isPartialAdmin IS NULL)=true AND company='{company_id}'
    I get back, for some values of company_id, the following java.lang.IndexOutOfBoundsException:
    Copy code
    PrestoExternalError(type=EXTERNAL, name=PINOT_EXCEPTION, message="Query SELECT "id" FROM role_with_company WHERE  (("company" = {company_id}) AND (("isPartialAdmin" IS NULL) = true)) LIMIT 100000 encountered exception {"message":"QueryExecutionError:\njava.lang.IndexOutOfBoundsException\n\tat java.base/java.nio.Buffer.checkIndex(Buffer.java:687)\n\tat java.base/java.nio.DirectCharBufferU.get(DirectCharBufferU.java:269)\n\tat org.roaringbitmap.buffer.MappeableArrayContainerCharIterator.nextAsInt(MappeableArrayContainer.java:1876)\n\tat org.roaringbitmap.buffer.ImmutableRoaringBitmap$ImmutableRoaringIntIterator.next(ImmutableRoaringBitmap.java:113)","errorCode":200} with pinot query "SELECT "id" FROM role_with_company WHERE  (("company" = {company_id}) AND (("isPartialAdmin" IS NULL) = true)) LIMIT 100000"", query_id=20220928_202056_30456_i2zba)
    isPartialAdmin is a boolean, dictionary-encoded dimension field. The error is happening very frequently.
    👀 1
  • t

    Tiger Zhao

    09/28/2022, 8:30 PM
    Hi, I use S3 as a deepstore, and I noticed that the segments under Deleted_Segments aren't being deleted at all. I also set controller.deleted.segments.retentionInDays=1. Is this expected? And is it safe to manually delete the segments under that folder?
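    (The Deleted_Segments folder is cleaned up by the controller's RetentionManager periodic task, so it's worth confirming that task is actually running and, if needed, triggering it by hand; a sketch with curl, endpoint and parameter names as I recall them, worth verifying in the controller Swagger UI.)
    Copy code
    # List the controller's periodic tasks and trigger the retention manager manually
    curl "http://<controller-host>:9000/periodictask/names"
    curl -X PUT "http://<controller-host>:9000/periodictask/run?taskname=RetentionManager"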
  • n

    Neeraja Sridharan

    09/28/2022, 11:14 PM
    Hello team 👋 We have an offline table in Pinot with ~1 month of prod data that we initially started as a POC for checking partition-based segment pruning. We want to extend the table (with the same table name) to re-load all of our prod data, with a different replication in the table config (2 -> 3) and a data type change to a column in the schema (string -> long). We are fine with overwriting the existing data in this table. What are the options to either delete/drop and recreate with the same table name, or update the existing table with the new table config and schema before reloading all the data? Not sure if the Swagger APIs like Table/updateTableConfig or Table/deleteTable will solve this. Appreciate any thoughts/recommendations regarding this 🙇‍♀️
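    (Both routes map onto standard controller REST calls; sketched below with curl, where the host, table name, and file names are placeholders. Note that a string -> long data type change means the segments have to be regenerated and re-pushed either way.)
    Copy code
    # Option 1: update schema and table config in place, then re-push the regenerated segments
    curl -X PUT -H "Content-Type: application/json" -d @schema.json "http://<controller>:9000/schemas/mySchema"
    curl -X PUT -H "Content-Type: application/json" -d @table-config.json "http://<controller>:9000/tables/myTable"

    # Option 2: drop the table (and its segments) and recreate it with the new config
    curl -X DELETE "http://<controller>:9000/tables/myTable?type=offline"
    curl -X POST -H "Content-Type: application/json" -d @table-config.json "http://<controller>:9000/tables"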
  • r

    robert zych

    09/29/2022, 4:03 AM
    Why is the following query returning the wrong result? The cardinality of d1 is 3K and the cardinality of d2 is 200K. numSegmentsMatched is 1 and numGroupsLimitReached is false.
    Copy code
    select d1, d2, max(metric) as max_metric
    from t
    where datetrunc('DAY', created_at_epoch_ms) = 1660867200000
    group by d1, d2
    order by max_metric desc
    limit 1
  • p

    Prakhar Pande

    09/29/2022, 8:20 AM
    Hi, I have set up S3 as the deepstore. Just wanted to know: is it possible to restore data if one server suddenly goes down and never comes back? What will happen to the consuming segments?
  • p

    Piyush Chauhan

    09/29/2022, 8:29 AM
    All the tables are in a bad state. They are not able to ingest any data from Kafka. PLEASE HELP
    Copy code
    [
      {
        "tableName": "packages_REALTIME",
        "numSegments": 79,
        "numServers": 2,
        "numBrokers": 2,
        "segmentDebugInfos": [],
        "serverDebugInfos": [],
        "brokerDebugInfos": [
          {
            "brokerName": "Broker_pinot-broker-0.dev-pinot-broker-headless.svc.cluster.local_8099",
            "idealState": "ONLINE",
            "externalView": "ONLINE"
          },
          {
            "brokerName": "Broker_pinot-broker-1.pinot-broker-headless.svc.cluster.local_8099",
            "idealState": "ONLINE",
            "externalView": "ONLINE"
          }
        ],
        "tableSize": {
          "reportedSize": "5 MB",
          "estimatedSize": "5 MB"
        },
        "ingestionStatus": {
          "ingestionState": "UNHEALTHY",
          "errorMessage": "Did not get any response from servers for segment: packages__0__9__20220927T1248Z"
        }
      }
    ]
  • a

    Alice

    09/29/2022, 8:58 AM
    Hi team, is it safe to migrate tables with the partial-upsert or full-upsert feature from one group of servers to another group of servers without data loss?
  • t

    Tommaso Peresson

    09/29/2022, 4:00 PM
    Hi everybody. Are you planning to support distinctcounthll in the merge-rollup task in the near future?
  • a

    Abhijeet Kushe

    09/29/2022, 5:47 PM
    <!here> We recently added 2 server replicas in our Pinot cluster on k8s. The realtime table config also has 3 replicas configured, so each segment is present on every pod. After that I made changes to the schema and set the reload segments flag to true. I noticed that the segment reload on all pods in k8s happens at the same time, due to which the application was down for 1 hour. We have 652 segments with a 1-day flush time, and 7143718 total records with skipUpsert = true. The same problem occurs when server pods are restarted from Argo. Is there a way to do the segment reload in an uptime fashion? (I do know that rebalance has a minAvailable replicas flag.) Does reload have that feature?
  • k

    Ken Krugler

    09/30/2022, 12:18 AM
    A shout-out to @Jackie and @Rong R for their help in solving an issue that surfaced after we did an update to Pinot 0.10. This community is awesome! And I (re)learned something useful about star-trees…once you have a filter in the query that includes a field not in the star-tree dimensions, then you’re back to scanning records, which can change the results for groups when the key is high cardinality. https://apache-pinot.slack.com/archives/C011C9JHN7R/p1664378704003049
    👍 2
    🎉 2
    🍷 3
    ❤️ 4
  • e

    Edgaras Kryževičius

    09/30/2022, 9:10 AM
    I am working with Pinot 0.11.0 and Spark 3.2. I am running a Spark ingestion job and getting this error:
    Copy code
    Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3) (10.240.0.12 executor 1): com.azure.storage.file.datalake.models.DataLakeStorageException: Status code 409, "{"error":{"code":"PathAlreadyExists","message":"The specified path already exists.\nRequestId:2afa0318-501f-0004-38aa-d4c373000000\nTime:2022-09-30T08:57:09.6163232Z"}}"
    In the sparkIngestionJobSpec.yaml file I have outputDirUri set to:
    outputDirUri='<adl2://fs@ac.dfs.core.windows.net/qa/pinot/controller-data/spireStatsV2/>'
    In the controller configuration:
    controller.data.dir=<adl2://fs@ac.dfs.core.windows.net/qa/pinot/controller-data>
    I can see that once I started the spark job, after some time it created the segment file spireStatsV2_batch.tar.gz in <adl2://fs@ac.dfs.core.windows.net/qa/pinot/controller-data/spireStatsV2/event_date=2022-08-20/event_type=other/>. I imagine that the same spark job tries to make a file with the same name on the same path and then fails. How could I fix it?
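    (One setup that commonly triggers this is writing the job's output directly into controller.data.dir: when a Spark task fails part-way and is retried, the retry finds the tar.gz the first attempt already wrote and ADLS refuses to create it again. A sketch of keeping the job's staging and output in a separate scratch location, paths below being placeholders on the same filesystem:)
    Copy code
    executionFrameworkSpec:
      extraConfigs:
        stagingDir: 'adl2://fs@ac.dfs.core.windows.net/qa/pinot/ingestion-staging/spireStatsV2/'
    outputDirURI: 'adl2://fs@ac.dfs.core.windows.net/qa/pinot/ingestion-output/spireStatsV2/'
    overwriteOutput: true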
  • w

    Wojciech Wasik

    09/30/2022, 12:37 PM
    Hello, I have some trouble ingesting data from a csv file. I have the following configs
  • s

    Slackbot

    09/30/2022, 3:28 PM
    This message was deleted.
  • e

    Enzo DECHAENE

    09/30/2022, 3:43 PM
    Hello everyone! I've been using Pinot for a few months. I noticed that my realtime table segments are all stored on the same server when they should be balanced across all my servers by default (that's what I'm looking for). I have four servers with the tag "DefaultTenant_REALTIME" and a table with 80 segments stored on my second server. Am I missing something important?
    recette.json
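    (Assuming all four servers really are tagged DefaultTenant_REALTIME, a rebalance of the realtime table should spread the segments; a sketch with curl, controller host and table name as placeholders. Running with dryRun=true first shows the target assignment without moving anything.)
    Copy code
    curl -X POST "http://<controller>:9000/tables/myTable/rebalance?type=REALTIME&dryRun=true&includeConsuming=true"
    curl -X POST "http://<controller>:9000/tables/myTable/rebalance?type=REALTIME&includeConsuming=true"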
  • a

    Ali Atıl

    09/30/2022, 8:19 AM
    Hi everyone, I am getting an error when I try to create a hybrid table. I create the realtime table first successfully, but when I try to create the offline table afterwards it gives me this error. I have been using the same schema and config files with no problem before upgrading to 0.11.0.
    {"code":400,"error":"TableConfigs: mytable already exists. Use PUT to update existing config"}
    I use the commands below inside the controller shell to create my tables:
    bin/pinot-admin.sh AddTable -schemaFile schema.json -tableConfigFile offline.json -exec
    bin/pinot-admin.sh AddTable -schemaFile schema.json -tableConfigFile realtime.json -exec
    schema.json, offline.json, realtime.json
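    (In 0.11 AddTable goes through the combined TableConfigs resource, so the second AddTable for the same logical name collides with the entry the first one created. Following the hint in the error, one option is to fetch the existing TableConfigs document and PUT it back with both the realtime and offline table configs included; a sketch with curl, controller host as a placeholder:)
    Copy code
    curl "http://<controller>:9000/tableConfigs/mytable" > mytable-configs.json
    # edit mytable-configs.json to add the "offline" table config alongside the existing "realtime" one
    curl -X PUT -H "Content-Type: application/json" -d @mytable-configs.json "http://<controller>:9000/tableConfigs/mytable"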
  • t

    troywinter

    10/01/2022, 11:56 AM
    I’m getting an error when adding a new server tenant to my Pinot cluster with version 0.9.3. The request is:
    Copy code
    {
      "tenantRole": "SERVER",
      "tenantName": "Tracker",
      "offlineInstances": 1,
      "realtimeInstances": 1
    }
    and the response is:
    Copy code
    {
      "_code": 500,
      "_error": "Index 0 out of bounds for length 0"
    }
    I’m not able to find any logs related to this endpoint in the controller logs.
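    (An "Index 0 out of bounds for length 0" from the tenants endpoint usually means the controller found no untagged server instances to assign to the new tenant. Listing the instances and their current tags is a quick way to confirm; a sketch, controller host and instance name as placeholders:)
    Copy code
    curl "http://<controller>:9000/instances"
    curl "http://<controller>:9000/instances/Server_<host>_<port>"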
  • p

    Prakhar Pande

    10/03/2022, 2:46 PM
    Hi, could anyone please help me understand why Pinot is not emitting metrics? I am following this doc: https://docs.pinot.apache.org/operators/tutorials/monitor-pinot-using-prometheus-and-grafana
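    (Pinot exposes its metrics through JMX, so the approach in that doc is to attach the Prometheus JMX exporter as a javaagent to each component; if the agent flag is missing from JAVA_OPTS nothing will show up for Prometheus to scrape. A sketch of what a component start typically looks like, with the agent jar, port, and config paths as placeholders to adapt to your install:)
    Copy code
    export JAVA_OPTS="-Xms1G -Xmx4G \
      -javaagent:/opt/pinot/etc/jmx_prometheus_javaagent/jmx_prometheus_javaagent.jar=8008:/opt/pinot/etc/jmx_prometheus_javaagent/configs/pinot.yml"
    bin/pinot-admin.sh StartServer -zkAddress <zk-host>:2181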
  • l

    Luis Fernandez

    10/03/2022, 7:40 PM
    hey my friends, I’m having an issue while upgrading from 0.10.0 to 0.11.0. I’m upgrading the controller first and I’m getting the following exception:
    Copy code
    java.lang.RuntimeException: Caught exception while initializing ControllerFilePathProvider
    	at org.apache.pinot.controller.BaseControllerStarter.initControllerFilePathProvider(BaseControllerStarter.java:555) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    	at org.apache.pinot.controller.BaseControllerStarter.setUpPinotController(BaseControllerStarter.java:374) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    	at org.apache.pinot.controller.BaseControllerStarter.start(BaseControllerStarter.java:322) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    	at org.apache.pinot.tools.service.PinotServiceManager.startController(PinotServiceManager.java:118) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    	at org.apache.pinot.tools.service.PinotServiceManager.startRole(PinotServiceManager.java:87) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    	at org.apache.pinot.tools.admin.command.StartServiceManagerCommand.lambda$startBootstrapServices$0(StartServiceManagerCommand.java:251) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    	at org.apache.pinot.tools.admin.command.StartServiceManagerCommand.startPinotService(StartServiceManagerCommand.java:304) [pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    	at org.apache.pinot.tools.admin.command.StartServiceManagerCommand.startBootstrapServices(StartServiceManagerCommand.java:250) [pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    	at org.apache.pinot.tools.admin.command.StartServiceManagerCommand.execute(StartServiceManagerCommand.java:196) [pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    	at org.apache.pinot.tools.admin.command.StartControllerCommand.execute(StartControllerCommand.java:187) [pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    	at org.apache.pinot.tools.Command.call(Command.java:33) [pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    	at org.apache.pinot.tools.Command.call(Command.java:29) [pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    	at picocli.CommandLine.executeUserObject(CommandLine.java:1953) [pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    	at picocli.CommandLine.access$1300(CommandLine.java:145) [pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    	at picocli.CommandLine$RunLast.executeUserObjectOfLastSubcommandWithSameParent(CommandLine.java:2352) [pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    	at picocli.CommandLine$RunLast.handle(CommandLine.java:2346) [pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    	at picocli.CommandLine$RunLast.handle(CommandLine.java:2311) [pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    	at picocli.CommandLine$AbstractParseResultHandler.execute(CommandLine.java:2179) [pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    	at picocli.CommandLine.execute(CommandLine.java:2078) [pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    	at org.apache.pinot.tools.admin.PinotAdministrator.execute(PinotAdministrator.java:165) [pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    	at org.apache.pinot.tools.admin.PinotAdministrator.main(PinotAdministrator.java:196) [pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    Caused by: org.apache.pinot.controller.api.resources.InvalidControllerConfigException: Caught exception while initializing file upload path provider
    	at org.apache.pinot.controller.api.resources.ControllerFilePathProvider.<init>(ControllerFilePathProvider.java:107) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    	at org.apache.pinot.controller.api.resources.ControllerFilePathProvider.init(ControllerFilePathProvider.java:49) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    	at org.apache.pinot.controller.BaseControllerStarter.initControllerFilePathProvider(BaseControllerStarter.java:553) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    	... 20 more
    Caused by: java.lang.NullPointerException
    	at org.apache.pinot.shaded.com.google.common.base.Preconditions.checkNotNull(Preconditions.java:770) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    	at com.google.cloud.storage.BlobId.of(BlobId.java:114) ~[pinot-gcs-0.11.0-shaded.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    	at com.google.cloud.storage.BlobId.fromPb(BlobId.java:118) ~[pinot-gcs-0.11.0-shaded.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    	at com.google.cloud.storage.BlobInfo.fromPb(BlobInfo.java:1160) ~[pinot-gcs-0.11.0-shaded.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    	at com.google.cloud.storage.Blob.fromPb(Blob.java:958) ~[pinot-gcs-0.11.0-shaded.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    	at com.google.cloud.storage.StorageImpl.get(StorageImpl.java:330) ~[pinot-gcs-0.11.0-shaded.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    	at com.google.cloud.storage.Bucket.get(Bucket.java:827) ~[pinot-gcs-0.11.0-shaded.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    	at org.apache.pinot.plugin.filesystem.GcsPinotFS.existsDirectory(GcsPinotFS.java:264) ~[pinot-gcs-0.11.0-shaded.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    	at org.apache.pinot.plugin.filesystem.GcsPinotFS.exists(GcsPinotFS.java:329) ~[pinot-gcs-0.11.0-shaded.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    	at org.apache.pinot.plugin.filesystem.GcsPinotFS.exists(GcsPinotFS.java:142) ~[pinot-gcs-0.11.0-shaded.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    	at org.apache.pinot.controller.api.resources.ControllerFilePathProvider.<init>(ControllerFilePathProvider.java:71) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    	at org.apache.pinot.controller.api.resources.ControllerFilePathProvider.init(ControllerFilePathProvider.java:49) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    	at org.apache.pinot.controller.BaseControllerStarter.initControllerFilePathProvider(BaseControllerStarter.java:553) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    	... 20 more
    it’s a NullPointerException and I’m not sure why it’s getting this error when it shouldn’t. Maybe I need to give my SA some new permissions it didn’t have before? Or what else could be causing this? This works properly in 0.10.0
  • t

    Tao Hu

    10/03/2022, 9:26 PM
    Hi team, does the new HISTOGRAM function in 0.11.0 support distinct count? From the documentation it seems like it does not.
  • e

    Eaugene Thomas

    10/04/2022, 2:00 PM
    Hi team, I am trying to add a REALTIME table using the REST APIs on the controller (/tables). I am getting the response:
    Copy code
    {
      "status": "Table test_demo_REALTIME succesfully added"
    }
    The table is set to ingest from Kafka, but the Controller UI doesn’t show the table name. There is no error trace in the controller/broker logs either. Any help on debugging this? PS: I have added the schema for test_demo previously (this is shown in the Controller UI).
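    (A quick way to tell whether this is an ingestion problem or just a console problem is to ask the controller REST API directly; if the table shows up here, the config was created and the issue is on the UI side. A sketch, controller host as a placeholder:)
    Copy code
    curl "http://<controller>:9000/tables"
    curl "http://<controller>:9000/tables/test_demo"
    curl "http://<controller>:9000/tables/test_demo/idealstate"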