# troubleshooting
  • a

    austin macciola

    01/27/2023, 4:07 PM
Hi team 👋 Our DevOps team and I are running into an issue we're having trouble solving. We keep hitting the storage limits of our Pinot cluster running on k8s. When this happens we just increase the allocated storage resources. However, afterwards a majority of the segments for our REALTIME table show a BAD status and very generic error codes.
    BROKER_SEGMENT_UNAVAILABLE_ERROR_CODE = 305
We cannot get them to recover once we have fixed the storage limitation, and the only way we've found to resolve the issue is a full rebuild of the Pinot cluster (a reset sketch follows below). Attached is a quick screen recording of me poking around the Pinot Cluster Manager UI to show some of the errors.
    Screen Recording 2023-01-27 at 10.04.40 AM.mov
    m
    • 2
    • 3
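A possible alternative to a full rebuild, sketched under the assumption that your Pinot version includes the controller's segment reset endpoints; the host, port, and table name are placeholders:
    # Ask servers to re-transition all segments of the table (e.g. from
    # ERROR/OFFLINE back to ONLINE/CONSUMING) once the disk pressure is resolved.
    curl -X POST "http://pinot-controller:9000/segments/myTable_REALTIME/reset"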
  • l

    Lvszn Peng

    01/28/2023, 9:27 AM
It seems that if GROUP BY is used in SQL, OFFSET does not take effect (see the sketch below). Is there a practical way to solve this problem?
    m
    • 2
    • 6
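For reference, a minimal sketch of the query shape being described, using a hypothetical table and column; the report is that the OFFSET below is ignored for group-by results:
    -- Expectation: skip the first 20 groups, return the next 10.
    SELECT country, COUNT(*) AS cnt
    FROM events
    GROUP BY country
    ORDER BY cnt DESC
    LIMIT 10 OFFSET 20;
A common workaround is to fetch a larger LIMIT and page on the client side.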
  • s

    Steven Hall

    01/28/2023, 4:38 PM
    Hi Team. I am pretty close to getting Pinot to integrate with OKTA and Kafka, where the Kafka topic and Kafka Schema Registry require an OKTA authentication token. I added io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler to the pinot-controller POM and was able to see this working when I debugged with Zookeeper, Controller, Broker, and Server running in the IDE. I could perform an add table, specifying the Kafka config and pull data from Kafka. So pretty close. After that we wanted to see that change work in our pre-prod env. So I built the jar with the addition of the strimzi jar in the pinot-controller pom. I started pinot via the quickstart in the build. build_dir> ./bin/quick-start-batch.sh I tried to add the table again now using the same table configuration I used earlier to connect to Confluent Kafka. I got the following error:
    Copy code
    Caused by: java.lang.ClassCastException: class io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler cannot be cast to class org.apache.pinot.shaded.org.apache.kafka.common.security.auth.AuthenticateCallbackHandler (io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler and org.apache.pinot.shaded.org.apache.kafka.common.security.auth.AuthenticateCallbackHandler are in unnamed module of loader 'app')
I took a look at the strimzi package. JaasClientOauthLoginCallbackHandler implements org.apache.kafka.common.security.auth.AuthenticateCallbackHandler, so this error makes sense: the strimzi package does not implement org.apache.pinot.shaded.org.apache.kafka.common.security.auth.AuthenticateCallbackHandler. OK, it seems I need to shade the strimzi package so that JaasClientOauthLoginCallbackHandler implements the shaded interface (a relocation sketch follows below). First, is my analysis correct? What is a general outline of the change that needs to be made? Should this be done in the pinot-controller POM? What is the process for shading the strimzi package? I gave this a go in the main POM and it seemingly had no effect, so apparently I am missing something. If I unpack the uber jar after the build, what would I expect to see? What is the impact on any dependencies that the strimzi jar may include? Thanks
    m
    n
    • 3
    • 7
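One way to attempt the relocation described above is a maven-shade-plugin rule that rewrites the Kafka interface the strimzi classes reference to Pinot's shaded copy. This is a sketch under assumptions (plugin placement and version management), not a verified build change:
    <!-- Relocate Kafka classes so strimzi's callback handler implements the
         interface Pinot actually loads at runtime. -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <executions>
        <execution>
          <phase>package</phase>
          <goals><goal>shade</goal></goals>
          <configuration>
            <relocations>
              <relocation>
                <pattern>org.apache.kafka</pattern>
                <shadedPattern>org.apache.pinot.shaded.org.apache.kafka</shadedPattern>
              </relocation>
            </relocations>
          </configuration>
        </execution>
      </executions>
    </plugin>
If the relocation applied, unpacking the uber jar should show the strimzi classes referencing org.apache.pinot.shaded.org.apache.kafka.* in their bytecode.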
  • r

    Richard Walker

    01/30/2023, 7:50 AM
    Afternoon all. I'm taking Pinot for a spin (deployed to Kubernetes) and I'm having trouble with the V2 query engine. I've enabled it in the config as per the instructions, but when I try to execute a query, I get this error message:
    Copy code
    [
      {
        "message": "SQLParsingError:\njava.lang.RuntimeException: Error composing query plan for: select ID from testRecords\n\tat org.apache.pinot.query.QueryEnvironment.planQuery(QueryEnvironment.java:143)\n\tat org.apache.pinot.broker.requesthandler.MultiStageBrokerRequestHandler.handleRequest(MultiStageBrokerRequestHandler.java:157)\n\tat org.apache.pinot.broker.requesthandler.MultiStageBrokerRequestHandler.handleRequest(MultiStageBrokerRequestHandler.java:132)\n\tat org.apache.pinot.broker.requesthandler.BrokerRequestHandler.handleRequest(BrokerRequestHandler.java:47)\n...\nCaused by: java.lang.UnsupportedOperationException: unsupported!\n\tat org.apache.pinot.query.type.TypeFactory.toRelDataType(TypeFactory.java:82)\n\tat org.apache.pinot.query.type.TypeFactory.createRelDataTypeFromSchema(TypeFactory.java:49)\n\tat org.apache.pinot.query.catalog.PinotTable.getRowType(PinotTable.java:49)\n\tat org.apache.calcite.sql.validate.EmptyScope.resolve_(EmptyScope.java:161)",
        "errorCode": 150
      }
    ]
This happens both with queries on a single table and with queries using a join. Any help would be greatly appreciated!
    m
    y
    • 3
    • 14
  • h

    Huaqiang He

    01/30/2023, 8:59 AM
Hi team, I feel it might be good to improve the functions LASTWITHTIME and FIRSTWITHTIME. The first two arguments are column names, which can be unquoted or double-quoted. The function also returns something when the column name is single-quoted, but the returned value is probably not what's expected (see the sketch below). How about returning an error when the column name is single-quoted?
    m
    • 2
    • 1
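For context, a quick illustration of the quoting semantics in question, with hypothetical table and column names: in SQL, double quotes delimit identifiers while single quotes delimit string literals, so the single-quoted form aggregates a constant string rather than the column:
    SELECT LASTWITHTIME("price", "ts", 'DOUBLE') FROM orders;  -- "price" is a column reference
    SELECT LASTWITHTIME('price', "ts", 'DOUBLE') FROM orders;  -- 'price' is a string literal, not the column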
  • m

    Mathieu Alexandre

    01/30/2023, 1:46 PM
Hello, in our Pinot 0.9.3, a few old segments persist in CONSUMING state even though the table config sets realtime.segment.flush.threshold.time to 24h (ingestion status is healthy in the debug API). Is there a way to force segment completion in a realtime table (see the sketch below)?
    f
    m
    +2
    • 5
    • 14
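For what it's worth, releases newer than 0.9.3 may expose a force-commit endpoint on the controller that seals the currently consuming segments; this is a sketch under that assumption, with host and table name as placeholders, so please verify the endpoint exists on your version:
    curl -X POST "http://pinot-controller:9000/tables/myTable/forceCommit"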
  • e

    Enzo DECHAENE

    01/31/2023, 2:14 PM
Hi team, I am trying to create a table with a star-tree index and a timestamp index. Is it possible to use the virtual column generated by the timestamp index ($ts$DAY) in the star-tree index? (A config sketch follows below.)
    m
    j
    • 3
    • 9
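For reference, a sketch of the two configs side by side, with a hypothetical ts column; whether the derived $ts$DAY column is legal inside dimensionsSplitOrder is exactly the open question:
    {
      "fieldConfigList": [
        {
          "name": "ts",
          "encodingType": "DICTIONARY",
          "indexTypes": ["TIMESTAMP"],
          "timestampConfig": { "granularities": ["DAY"] }
        }
      ],
      "tableIndexConfig": {
        "starTreeIndexConfigs": [
          {
            "dimensionsSplitOrder": ["$ts$DAY"],
            "functionColumnPairs": ["COUNT__*"],
            "maxLeafRecords": 10000
          }
        ]
      }
    }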
  • a

    Ayush Kumar Jha

    02/01/2023, 6:30 AM
Hi team, we are expanding our Pinot usage from batch to realtime, using this controller config:
    Copy code
    controller.host=hostname
    controller.port=80
    controller.access.protocols.http.port=80
    controller.helix.cluster.name=PinotCluster
    controller.zk.str=zookeeper-links
    controller.enable.split.commit=true
controller.data.dir=abfs://path/to/data/directory
    controller.local.temp.dir=/tmp/
    pinot.controller.storage.factory.class.adl2=org.apache.pinot.plugin.filesystem.ADLSGen2PinotFS                                                               
    pinot.controller.storage.factory.adl2.accountName=accountname
    pinot.controller.storage.factory.adl2.accessKey=accesskey                    
    pinot.controller.storage.factory.adl2.fileSystemName=fs-name
    pinot.controller.segment.fetcher.protocols=file,http,adl2,abfs
    pinot.controller.segment.fetcher.adl2.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher                                                      
    controller.offlineSegmentIntervalChecker.initialDelayInSeconds=172800
but when trying to start the controller, we get this error:
    Copy code
    Caused by: java.lang.IllegalStateException: PinotFS for scheme: abfs has not been initialized                                                                
            at org.apache.pinot.shaded.com.google.common.base.Preconditions.checkState(Preconditions.java:518) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
            at org.apache.pinot.spi.filesystem.PinotFSFactory.create(PinotFSFactory.java:78) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
            at org.apache.pinot.controller.api.resources.ControllerFilePathProvider.<init>(ControllerFilePathProvider.java:70) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
            at org.apache.pinot.controller.api.resources.ControllerFilePathProvider.init(ControllerFilePathProvider.java:49) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
            at org.apache.pinot.controller.BaseControllerStarter.initControllerFilePathProvider(BaseControllerStarter.java:553) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
When I instead point this parameter at a local directory, controller.data.dir=/home/centos/pinot-segments, it works fine (see the config sketch below). Thanks
    n
    m
    s
    • 4
    • 12
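A hedged reading of the error: controller.data.dir uses the abfs:// scheme, but the PinotFS factory above is registered only under the adl2 scheme, so no filesystem is initialized for abfs. A sketch of one possible fix, registering the same class for the abfs scheme (account values are the same placeholders as above):
    pinot.controller.storage.factory.class.abfs=org.apache.pinot.plugin.filesystem.ADLSGen2PinotFS
    pinot.controller.storage.factory.abfs.accountName=accountname
    pinot.controller.storage.factory.abfs.accessKey=accesskey
    pinot.controller.storage.factory.abfs.fileSystemName=fs-name
    pinot.controller.segment.fetcher.abfs.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher
Alternatively, keep the adl2 registration and point controller.data.dir at an adl2:// URI.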
  • l

    Lee Wei Hern Jason

    02/01/2023, 9:42 AM
Hi team: I am on Pinot v0.12.0 and we have access control on our cluster. I have the following questions: 1. I am trying to identify the source of queries. In the logs, the clientIP being emitted is always the default value, unknown. From the code, I notice that pinot.broker.request.client.ip.logging always defaults to false. Is there a way to enable this (see the sketch below), or otherwise identify who is querying? 2. I am trying to give regular users a query-console-only view while admins keep full access. The config queryConsoleOnlyView is at the cluster level. I saw this discussion: https://github.com/apache/pinot/pull/6685. Will this config come out at the user level in the 0.13 release?
    x
    j
    • 3
    • 4
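On question 1, a minimal sketch: the flag quoted above is a broker config, so it would go into the broker's configuration file (behavior may vary by version):
    pinot.broker.request.client.ip.logging=true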
  • m

    Maaz

    02/01/2023, 11:18 AM
Hi all, I am new to Pinot. Can you please help me with how to compare the datetime values of two columns in Pinot (see the sketch below)?
    s
    m
    • 3
    • 6
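A minimal sketch with a hypothetical table and columns. If both columns store epochs in the same unit, compare them directly; if their formats differ, normalize one side first, for example with DATETIMECONVERT:
    -- Same format on both sides: direct comparison works.
    SELECT * FROM orders WHERE updated_at > created_at;

    -- Different formats: convert one column before comparing.
    SELECT * FROM orders
    WHERE DATETIMECONVERT(updated_at, '1:SECONDS:EPOCH', '1:MILLISECONDS:EPOCH', '1:MILLISECONDS') > created_at;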
  • m

    Mathieu Alexandre

    02/01/2023, 10:24 PM
Hi all, after using the API to push my segments to the offline part of a hybrid table (cluster migration), their metadata cannot be retrieved by Pinot:
    Copy code
    Table name: test_OFFLINE does not match table type: REALTIME.
They exist in ZooKeeper and the documents can be queried. Am I missing something?
    s
    • 2
    • 22
  • m

    Michael Latta

    02/01/2023, 10:54 PM
    What API are you using to access the metadata?
➕ 1
    m
    • 2
    • 1
  • a

    Alice

    02/02/2023, 9:58 AM
Hi team, I have a question: can we enable both dedup and ingestion aggregation for one realtime table (see the sketch below)? And how can we deal with late events? Can we set a buffer time?
    m
    n
    • 3
    • 9
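For reference, a sketch of what the two features look like in a table config, with hypothetical column names; whether they can be combined on one table is the open question:
    {
      "dedupConfig": { "dedupEnabled": true, "hashFunction": "NONE" },
      "ingestionConfig": {
        "aggregationConfigs": [
          { "columnName": "totalSales", "aggregationFunction": "SUM(sales)" }
        ]
      }
    }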
  • l

    LC

    02/02/2023, 5:02 PM
Hi team, I'm new to Pinot and doing a POC on it. Based on your experience, how many resources do you usually allocate? Thank you for your help. https://docs.pinot.apache.org/operators/tutorials/deployment-pinot-on-kubernetes
    a
    • 2
    • 3
  • p

    Phil Sarkis

    02/02/2023, 6:52 PM
Hey all, I am back, and this time I have made it past the issues I was having previously. However, I am now trying to convert a schema I have into the Pinot schema format. The schema I have is formed something like this:
    s
    • 2
    • 1
  • p

    Phil Sarkis

    02/02/2023, 6:53 PM
None of the examples in the documentation really deal with the case of multiple layers, where certain items within the schema have items of their own, and so on. How can I convert this into the Pinot format?
    a
    • 2
    • 3
  • a

    austin macciola

    02/02/2023, 10:38 PM
When running the /debug/tables/<tablename> API on one of my realtime tables, I get the following response:
    Copy code
    [
      {
        "tableName": "template_solution_events_REALTIME",
        "numSegments": 1810,
        "numServers": 3,
        "numBrokers": 3,
        "segmentDebugInfos": [],
        "serverDebugInfos": [
          {
            "serverName": "Server_pinot-server-2.pinot-server-headless.pinot.svc.cluster.local_8098",
            "numMessages": 6,
            "errors": 8
          }
        ],
        "brokerDebugInfos": [],
        "tableSize": {
          "reportedSize": "98 GB",
          "estimatedSize": "98 GB"
        },
        "ingestionStatus": {
          "ingestionState": "UNHEALTHY",
          "errorMessage": "Did not get any response from servers for segment: template_solution_events__0__112__20230202T1852Z"
        }
      }
    ]
I am also seeing multiple segments show up as BAD in the Pinot UI.
    s
    • 2
    • 5
  • a

    Alice

    02/03/2023, 8:29 AM
Hi team, I've got a question about Pinot pod memory usage. There's a large difference displayed in the graph below after the pod is restarted. The metric in the graph is max(container_memory_working_set_bytes{pod=~"pinot-server-.*", namespace="$namespace"}) by (pod). Could you help me understand the difference in memory usage before and after the pod restarted at about 14:00?
    s
    h
    +2
    • 5
    • 70
  • a

    Abhishek Tomar

    02/03/2023, 7:15 PM
    Can someone please help with this issue https://github.com/apache/pinot/issues/10225
    n
    j
    a
    • 4
    • 40
  • a

    Ashwin Raja

    02/03/2023, 10:56 PM
    howdy hey! I'm trying to get my pinot realtime consumer to read from an earlier offset, since whatever offset it's currently on is out of range:
    Copy code
[Consumer clientId=dataset-version.52863422-abdf-4f69-b47a-ff166941bad2-0, groupId=null] Seeking to offset 1725541 for partition dataset-version.52863422-abdf-4f69-b47a-ff166941bad2-0
[Consumer clientId=dataset-version.52863422-abdf-4f69-b47a-ff166941bad2-0, groupId=null] Fetch position FetchPosition{offset=1725541, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[b-1.streaming3azs.ak740j.c14.kafka.us-west-2.amazonaws.com:9092 (id: 1 rack: usw2-az1)], epoch=0}} is out of range for partition dataset-version.52863422-abdf-4f69-b47a-ff166941bad2-0, resetting offset
[Consumer clientId=dataset-version.52863422-abdf-4f69-b47a-ff166941bad2-0, groupId=null] Resetting offset for partition dataset-version.52863422-abdf-4f69-b47a-ff166941bad2-0 to position FetchPosition{offset=34007, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[b-1.streaming3azs.ak740j.c14.kafka.us-west-2.amazonaws.com:9092 (id: 1 rack: usw2-az1)], epoch=0}}.
Consumed 0 events from (rate:0.0/s), currentOffset=1725541, numRowsConsumedSoFar=0, numRowsIndexedSoFar=0
so it looks like Pinot sees that the offset is out of range and tries to reset it, but then right after that it's still using a currentOffset for the out-of-range offset? (A stream config sketch follows below.)
    m
    n
    +2
    • 5
    • 29
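One related knob, sketched with the standard Kafka consumer property passed through streamConfigs in the table config; it controls where consumption restarts when the requested offset is unavailable (the value shown is illustrative):
    "streamConfigs": {
      "stream.kafka.consumer.prop.auto.offset.reset": "smallest"
    }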
  • a

    Alice

    02/04/2023, 4:32 PM
Hi team, I tried to replicate the OOMKilled exception with those JVM options, and hit the exception below (see the sketch that follows): java.lang.OutOfMemoryError: Direct buffer memory.
    m
    • 2
    • 10
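Direct buffer memory is capped by a dedicated JVM flag rather than by the heap size, so a sketch of the relevant option (the size is a placeholder to tune against the pod's memory limit):
    -XX:MaxDirectMemorySize=10G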
  • s

    Shubham Kumar

    02/06/2023, 2:40 PM
Hi team, I want to disable segmentLogger error messages in our prod Pinot cluster. To test this, I tried the Logger API in Pinot's Swagger UI and found three loggers:
    Copy code
    root, org.reflections, org.apache.pinot.tools.admin
Can someone help with these doubts (see the sketch below)? 1. I set the log level to OFF for all the loggers, but somehow I was still getting all the server logs. So changing log levels does not affect server logging? 2. Which logger is responsible for which type of logging?
    m
    • 2
    • 2
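A possible explanation, offered as an assumption rather than a verified answer: the /loggers endpoints act only on the component you call them on, so setting levels through the controller's Swagger UI would leave server loggers untouched. A sketch targeting a server directly; the admin port and logger name are assumptions:
    curl -X PUT "http://pinot-server:8097/loggers/org.apache.pinot.core.data.manager.realtime?level=ERROR"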
  • r

    Rakesh Bobbala

    02/06/2023, 6:18 PM
Hello team, I want to schedule a batch ingestion into an offline table through the Swagger API. My payload:
    Copy code
    {
      "executionFrameworkSpec": {
        "name": "standalone",
        "segmentGenerationJobRunnerClassName": "org.apache.pinot.plugin.ingestion.batch.standalone.SegmentGenerationJobRunner",
        "segmentTarPushJobRunnerClassName": "org.apache.pinot.plugin.ingestion.batch.standalone.SegmentTarPushJobRunner",
        "segmentUriPushJobRunnerClassName": "org.apache.pinot.plugin.ingestion.batch.standalone.SegmentUriPushJobRunner",
        "segmentMetadataPushJobRunnerClassName": "org.apache.pinot.plugin.ingestion.batch.standalone.SegmentMetadataPushJobRunner"
      },
      "jobType": "SegmentCreationAndMetadataPush",
      "inputDirURI": "<s3://test/spark_dumps/>",
      "includeFileNamePattern": "glob:**/*.csv",
      "searchRecursively": true,
      "outputDirURI": "<s3://test/output/>",
      "overwriteOutput": true,
      "pinotFSSpecs": [
        {
          "scheme": "s3",
          "className": "org.apache.pinot.plugin.filesystem.S3PinotFS"
        }
      ],
      "recordReaderSpec": {
        "dataFormat": "csv",
        "className": "org.apache.pinot.plugin.inputformat.csv.CSVRecordReader",
        "configClassName": "org.apache.pinot.plugin.inputformat.csv.CSVRecordReaderConfig"
      },
      "tableSpec": {
        "tableName": "backend_batch"
      },
      "pinotClusterSpecs": [
        {
          "controllerURI": "<http://localhost:9000>"
        }
      ],
      "pushJobSpec": {
        "pushParallelism": 2,
        "pushAttempts": 2,
        "pushRetryIntervalMillis": 1000,
        "segmentUriSuffix": "backend_batch/"
      }
    }
However, I couldn't upload the segment through the API, as it accepts the payload in binary format. Can someone help me troubleshoot this (see the CLI sketch below)?
    m
    • 2
    • 23
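A sketch of an alternative path: save the payload above as a YAML job spec and launch it with the admin CLI, which performs segment creation and the metadata push itself (the file path is a placeholder):
    bin/pinot-admin.sh LaunchDataIngestionJob -jobSpecFile /path/to/jobSpec.yaml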
  • a

    Amol

    02/07/2023, 12:46 AM
Just curious: why do we have two services? Is there any specific access control planned between external vs. internal?
    Copy code
    pinot-controller                            ClusterIP      10.111.116.243   <none>        9000/TCP                              97m
    pinot-controller-external                   LoadBalancer   10.103.171.255   <pending>     9000:32611/TCP                        97m
    m
    • 2
    • 3
  • l

    Lvszn Peng

    02/07/2023, 3:06 AM
Hello team, I want to know: what is the reason for a high schedulerWaitMs?
    m
    • 2
    • 5
  • a

    Abhishek Tomar

    02/07/2023, 9:40 AM
What is the best place to store my JKS certificates in the Docker image (see the sketch below)?
    m
    a
    • 3
    • 2
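One common pattern, sketched as a general Kubernetes practice rather than anything Pinot-specific: keep the JKS files out of the image entirely and mount them from a Secret (all names and paths are placeholders):
    # Pod spec fragment: mount a Secret holding the JKS files read-only.
    volumes:
      - name: jks-certs
        secret:
          secretName: pinot-jks
    containers:
      - name: pinot-server
        volumeMounts:
          - name: jks-certs
            mountPath: /etc/pinot/certs
            readOnly: true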
  • p

    piby

    02/07/2023, 12:50 PM
What is the difference between minion and minionStateless in the Pinot Helm chart? Which one is recommended for production? https://github.com/apache/pinot/blob/5e0a8dbaa11986079c8801b9f423fc49b5681205/kubernetes/helm/pinot/values.yaml#L448
  • m

    Maaz

    02/08/2023, 5:20 AM
Hi all, we connected Pinot with Presto and added Presto as a database in Apache Superset. When we run a query with a datetime, it returns the error below (a truststore sketch follows after it).
    Copy code
    Unexpected error
    
    Error: {'message': 'UNAVAILABLE: io exception\nChannel Pipeline: 
    [SslHandler#0, ProtocolNegotiators$ClientTlsHandler#0, WriteBufferingAndExceptionHandler#0, DefaultChannelPipeline$TailContext#0]', 
    'errorCode': 65536, 'errorName': 'GENERIC_INTERNAL_ERROR', 'errorType': 'INTERNAL_ERROR', 'boolean': False, 'failureInfo': 
    {'type': 'io.grpc.StatusRuntimeException', 'message': 'UNAVAILABLE: io exception\nChannel Pipeline: 
    [SslHandler#0, ProtocolNegotiators$ClientTlsHandler#0, WriteBufferingAndExceptionHandler#0, DefaultChannelPipeline$TailContext#0]',
    'cause': {'type': 'javax.net.ssl.SSLHandshakeException', 
    'message': 'PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException:
    unable to find valid certification path to requested target', 'cause': {'type': 'sun.security.validator.ValidatorException',
    'message': 'PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException
    unable to find valid certification path to requested target', 'cause': {'type': 'sun.security.provider.certpath.SunCertPathBuilderException',
    'message': 'unable to find valid certification path to requested target', 'suppressed': [], 
    'stack': ['sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:141)', 
    'sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:126)', 
    'java.security.cert.CertPathBuilder.build(CertPathBuilder.java:280)', 'sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:451)', 
    'sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:323)', 'sun.security.validator.Validator.validate(Validator.java:271)', 
    'sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:315)', 
    'sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:278)', '
    sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:141)', '
    sun.security.ssl.CertificateMessage$T12CertificateConsumer.checkServerCerts(CertificateMessage.java:632)', 
    'sun.security.ssl.CertificateMessage$T12CertificateConsumer.onCertificate(CertificateMessage.java:473)', '
    sun.security.ssl.CertificateMessage$T12CertificateConsumer.consume(CertificateMessage.java:369)', 
    'sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:377)', 'sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:444)', 
    'sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:981)', 
    'sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:968)', 
    'java.security.AccessController.doPrivileged(Native Method)', 'sun.security.ssl.SSLEngineImpl$DelegatedTask.run(SSLEngineImpl.java:915)',
    'io.grpc.netty.shaded.io.netty.handler.ssl.SslHandler.runAllDelegatedTasks(SslHandler.java:1550)', 
    'io.grpc.netty.shaded.io.netty.handler.ssl.SslHandler.access$1900(SslHandler.java:167)', 
    'io.grpc.netty.shaded.io.netty.handler.ssl.SslHandler$SslTasksRunner.run(SslHandler.java:1737)',
    'java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)', 
    'java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)', 
    'java.lang.Thread.run(Thread.java:750)']}, 
    'suppressed': [], 'stack': ['sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:456)',
    'sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:323)', 'sun.security.validator.Validator.validate(Validator.java:271)', 
    'sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:315)', 
    'sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:278)', 
    'sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:141)', 
    'sun.security.ssl.CertificateMessage$T12CertificateConsumer.checkServerCerts(CertificateMessage.java:632)', 
    'sun.security.ssl.CertificateMessage$T12CertificateConsumer.onCertificate(CertificateMessage.java:473)', 
    'sun.security.ssl.CertificateMessage$T12CertificateConsumer.consume(CertificateMessage.java:369)', 
    'sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:377)', 'sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:444)', 
    'sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:981)', 
    'sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:968)', 
    'java.security.AccessController.doPrivileged(Native Method)', 'sun.security.ssl.SSLEngineImpl$DelegatedTask.run(SSLEngineImpl.java:915)', 
    'io.grpc.netty.shaded.io.netty.handler.ssl.SslHandler.runAllDelegatedTasks(SslHandler.java:1550)', 
    'io.grpc.netty.shaded.io.netty.handler.ssl.SslHandler.access$1900(SslHandler.java:167)', 
    'io.grpc.netty.shaded.io.netty.handler.ssl.SslHandler$SslTasksRunner.run(SslHandler.java:1737)', 
    'java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)', '
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)', 
    'java.lang.Thread.run(Thread.java:750)']},  
    'suppressed': [], 'stack': ['sun.security.ssl.Alert.createSSLException(Alert.java:131)', 
    'sun.security.ssl.TransportContext.fatal(TransportContext.java:324)', 'sun.security.ssl.TransportContext.fatal(TransportContext.java:267)', 
    'sun.security.ssl.TransportContext.fatal(TransportContext.java:262)', 
    'sun.security.ssl.CertificateMessage$T12CertificateConsumer.checkServerCerts(CertificateMessage.java:654)',
    'sun.security.ssl.CertificateMessage$T12CertificateConsumer.onCertificate(CertificateMessage.java:473)', 
    'sun.security.ssl.CertificateMessage$T12CertificateConsumer.consume(CertificateMessage.java:369)', 
    'sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:377)', 'sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:444)', 
    'sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:981)', 
    'sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:968)', 
    'java.security.AccessController.doPrivileged(Native Method)', 'sun.security.ssl.SSLEngineImpl$DelegatedTask.run(SSLEngineImpl.java:915)', 
    'io.grpc.netty.shaded.io.netty.handler.ssl.SslHandler.runAllDelegatedTasks(SslHandler.java:1550)', 
    'io.grpc.netty.shaded.io.netty.handler.ssl.SslHandler.access$1900(SslHandler.java:167)', 
    'io.grpc.netty.shaded.io.netty.handler.ssl.SslHandler$SslTasksRunner.run(SslHandler.java:1737)', 
    'java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)', 
    'java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)', 'java.lang.Thread.run(Thread.java:750)']}, 
    'suppressed': [], 'stack': ['io.grpc.Status.asRuntimeException(Status.java:535)',
    'io.grpc.stub.ClientCalls$BlockingResponseStream.hasNext(ClientCalls.java:648)', 
    'com.facebook.presto.pinot.PinotSegmentPageSource.getNextPage(PinotSegmentPageSource.java:204)', 
    'com.facebook.presto.operator.ScanFilterAndProjectOperator.processPageSource(ScanFilterAndProjectOperator.java:295)', 
    'com.facebook.presto.operator.ScanFilterAndProjectOperator.getOutput(ScanFilterAndProjectOperator.java:260)', 
    'com.facebook.presto.operator.Driver.processInternal(Driver.java:426)', 
    'com.facebook.presto.operator.Driver.lambda$processFor$9(Driver.java:309)', 
    'com.facebook.presto.operator.Driver.tryWithLock(Driver.java:730)',
    'com.facebook.presto.operator.Driver.processFor(Driver.java:302)', 
    'com.facebook.presto.execution.SqlTaskExecution$DriverSplitRunner.processFor(SqlTaskExecution.java:1079)',
    'com.facebook.presto.execution.executor.PrioritizedSplitRunner.process(PrioritizedSplitRunner.java:165)', 
    'com.facebook.presto.execution.executor.TaskExecutor$TaskRunner.run(TaskExecutor.java:603)', 
    'com.facebook.presto.$gen.Presto_0_280_SNAPSHOT_63a0071____20230130_040248_1.run(Unknown Source)', 
    'java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)', 
    'java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)', 'java.lang.Thread.run(Thread.java:750)']}}
  • l

    Lvszn Peng

    02/08/2023, 10:03 AM
Hi team, if there are a lot of LLRealtimeSegmentDataManager logs, does it mean that Pinot is doing segment-related work, which would lead to an increase in query scheduling time?
    • 1
    • 1
  • s

    Sandeep Penmetsa

    02/08/2023, 2:24 PM
Hi team. Issue: slow-running queries. I am doing a POC using Pinot in one of our user-facing analytics use cases. We are load-testing a certain query at 1000 QPS. Attached below are the index we created and the query. Query: select count(*) as count, type, reelId from reels_poc where type IN (1,3,4) AND reelId IN ('62f60ac164034f00112da6e4') AND studentId='5eceb7a12db8eb71c60f4bdb' GROUP BY type, reelId Index:
    Copy code
    "starTreeIndexConfigs": [
            {
              "dimensionsSplitOrder": [
                "reelId",
                 "type",
                "studentId"
              ],
              "skipStarNodeCreationForDimensions": [
    
              ],
              "functionColumnPairs": [
                "COUNT__*"
              ],
              "maxLeafRecords": 100000
            }
          ],
FYI, we are getting a response time of almost 10 seconds (a tuning sketch follows below).
    m
    • 2
    • 7
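One heuristic worth trying, offered as an assumption rather than a guaranteed fix: put the dimension that is always equality-filtered and most selective (here, likely studentId) first in dimensionsSplitOrder so the tree can prune on it earliest:
    "dimensionsSplitOrder": [
      "studentId",
      "reelId",
      "type"
    ]
It is also worth checking the query response metadata (e.g. numDocsScanned) to confirm the star-tree index is actually being used.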