# troubleshooting

    suraj sheshadri

    09/02/2022, 10:52 PM
Hello, I am facing an issue creating a Pinot table that has a column of type struct/array. The column daro_scores in the image is being created as a string by the infer-schema-json-data utility, and I am unable to upload data. Can you please suggest whether an array type is allowed in Pinot, and how I can load this column? The documentation only says: "Data type of the dimension column. Can be INT, LONG, FLOAT, DOUBLE, BOOLEAN, TIMESTAMP, STRING, BYTES."
    Copy code
    |-- daro_scores: array (nullable = true)
     |    |-- element: struct (containsNull = true)
     |    |    |-- _1: string (nullable = true)
     |    |    |-- _2: double (nullable = true)
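For context, one common way to handle an array-of-structs column like this (a sketch with illustrative names, given that dimension columns are limited to the scalar types listed above): keep the nested value as a STRING column holding the JSON, or declare a flattened field as multi-value via singleValueField: false.
Copy code
{
  "schemaName": "mySchema",
  "dimensionFieldSpecs": [
    {
      "name": "daro_scores",
      "dataType": "STRING",
      "maxLength": 2147483647
    },
    {
      "name": "daro_score_values",
      "dataType": "DOUBLE",
      "singleValueField": false
    }
  ]
}
The STRING variant can then be queried with e.g. JSON_EXTRACT_SCALAR(daro_scores, '$[0]._2', 'DOUBLE'); the daro_score_values column is a hypothetical flattened field produced during ingestion.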

    Jatin Kumar

    09/04/2022, 6:31 PM
Hello everyone, I have installed Pinot on k8s and created two tenants (one each for broker and server), named `default` and `test`. Whenever I query a table on the `default` tenant and the request coming through the ingress controller lands on a broker tagged `test`, the query fails. How do you handle this scenario, i.e. how do you ensure that a query for a table on a specific tenant is routed through the ingress only to that tenant's broker?
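For reference, the broker tenant a table is served by comes from the tenants block in its table config, and queries must reach a broker carrying that tag (a sketch; names are illustrative):
Copy code
{
  "tableName": "myTable",
  "tableType": "OFFLINE",
  "tenants": {
    "broker": "test",
    "server": "test"
  }
}
One common pattern is a separate Service/ingress per broker tenant that selects only that tenant's broker pods, so a query can never reach the wrong broker.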

    Ehsan Irshad

    09/05/2022, 5:53 AM
Hi, I am trying the latest Pinot Superset image according to the documentation, but after creating the admin user I am getting a 403 Forbidden error. May I know what the issue might be, or should I use an older image? It seems the admin user was added successfully.
    Copy code
    172.17.0.1 - - [05/Sep/2022:05:48:49 +0000] "GET /superset/welcome/ HTTP/1.1" 200 24616 "-" 
    172.17.0.1 - - [05/Sep/2022:05:48:49 +0000] "GET /superset/recent_activity/1/?limit=6 HTTP/1.1" 403 51 "<http://localhost:8088/superset/welcome/>" 
    172.17.0.1 - - [05/Sep/2022:05:48:49 +0000] "GET /api/v1/dashboard/?q=(filters:!((col:owners,opr:rel_m_m,value:%271%27)),order_column:changed_on_delta_humanized,order_direction:desc,page:0,page_size:5) HTTP/1.1" 403 24 "<http://localhost:8088/superset/welcome/>" 
    172.17.0.1 - - [05/Sep/2022:05:48:49 +0000] "GET /api/v1/saved_query/?q=(filters:!((col:created_by,opr:rel_o_m,value:%271%27)),order_column:changed_on_delta_humanized,order_direction:desc,page:0,page_size:5) HTTP/1.1" 403 24 "<http://localhost:8088/superset/welcome/>" 
    172.17.0.1 - - [05/Sep/2022:05:48:49 +0000] "GET /api/v1/chart/?q=(filters:!((col:owners,opr:rel_m_m,value:%271%27)),order_column:changed_on_delta_humanized,order_direction:desc,page:0,page_size:5) HTTP/1.1" 403 24 "<http://localhost:8088/superset/welcome/>"
    172.17.0.1 - - [05/Sep/2022:05:48:50 +0000] "GET /api/v1/dashboard/_info?q=(keys:!(permissions)) HTTP/1.1" 403 24 "<http://localhost:8088/superset/welcome/>"
    172.17.0.1 - - [05/Sep/2022:05:48:50 +0000] "GET /api/v1/chart/_info?q=(keys:!(permissions)) HTTP/1.1" 403 24 "<http://localhost:8088/superset/welcome/>"
    127.0.0.1 - - [05/Sep/2022:05:49:08 +0000] "GET /health HTTP/1.1" 200 2 "-" "curl/7.74.0"
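If the admin user exists but lacks role permissions, 403s on every API can come from skipping Superset's initialization step. The standard Superset CLI bootstrap sequence (run inside the container; entrypoint details vary by image) is:
Copy code
superset db upgrade       # migrate Superset's metadata database
superset fab create-admin # create the admin user (already done here)
superset init             # create default roles/permissions; skipping this commonly causes 403s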

    Piyush Mittal

    09/05/2022, 11:08 AM
Hi team, ZooKeeper is accumulating too many logs for pinot-broker, which is filling the PVC too fast. Please suggest how we can reduce this.
    Copy code
/pinot-shipment-qa/INSTANCES/Broker_shipment-qa-pinot-broker-0.shipment-qa-pinot-broker-headless.qa-team.svc.cluster.local_8099/HISTORY
{
      "id" : "Broker_shipment-qa-pinot-broker-0.shipment-qa-pinot-broker-headless.qa-team.svc.cluster.local_8099",
      "simpleFields" : {
        "LAST_OFFLINE_TIME" : "-1"
      },
      "mapFields" : { },
      "listFields" : {
        "HISTORY" : [ "{DATE=2022-08-10T02:13:43:335, VERSION=0.9.8, SESSION=1000023e82c0005, TIME=1660097623335}", "{DATE=2022-08-10T05:31:44:779, VERSION=0.9.8, SESSION=300001510c30008, TIME=1660109504779}", "{DATE=2022-08-10T07:23:31:448, VERSION=0.9.8, SESSION=300001510c30009, TIME=1660116211448}", "{DATE=2022-08-10T10:12:09:434, VERSION=0.9.8, SESSION=1000023e82c000c, TIME=1660126329434}", "{DATE=2022-08-10T16:26:19:804, VERSION=0.9.8, SESSION=300000391830000, TIME=1660148779804}", "{DATE=2022-08-10T22:13:59:177, VERSION=0.9.8, SESSION=20000fe3c270001, TIME=1660169639177}",

    Piyush Chauhan

    09/06/2022, 6:04 AM
We get this error very frequently while running read queries: *[{"message":"null:\n191 segments [locations_new__7__33__20220725T0902Z, *****] unavailable","errorCode":305}]*. I checked it; error code 305 stands for BROKER_SEGMENT_UNAVAILABLE_ERROR_CODE. After some time it resolved itself. Does anyone know of possible causes?
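When this occurs, comparing the table's ideal state against its external view on the controller shows which segments are offline or still transitioning (standard controller REST endpoints; host and table name illustrative):
Copy code
curl http://localhost:9000/tables/locations_new/idealstate
curl http://localhost:9000/tables/locations_new/externalview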

    Loïc Mathieu

    09/06/2022, 3:25 PM
Hi, I'm using Pinot 0.10.0. I'm trying to set up a REALTIME table to stream Kafka topic messages into a Pinot table. I'm not an expert on Pinot and it's the first time I've set up a REALTIME table, so be kind 😉. When I create the table I see an NPE in the controller log and the table is not created (in fact it seems to be partially created: it's not shown in the UI, but trying to re-create it fails with "table already exists").
    Copy code
    pinot-controller  | 2022/09/06 15:11:30.283 ERROR [PinotTableIdealStateBuilder] [grizzly-http-server-1] Could not get PartitionGroupMetadata for topic: my-topic of table: my-topic_REALTIME
    pinot-controller  | java.lang.NullPointerException: null
    pinot-controller  |     at org.apache.pinot.plugin.stream.kafka20.KafkaStreamMetadataProvider.fetchPartitionCount(KafkaStreamMetadataProvider.java:48) ~[pinot-kafka-2.0-0.10.0-shaded.jar:0.10.0-30c4635bfeee88f88aa9c9f63b93bcd4a650607f]
    pinot-controller  |     at org.apache.pinot.spi.stream.StreamMetadataProvider.computePartitionGroupMetadata(StreamMetadataProvider.java:66) ~[pinot-all-0.10.0-jar-with-dependencies.jar:0.10.0-30c4635bfeee88f88aa9c9f63b93bcd4a650607f]
For me an NPE is always suspicious (it smells like a bug). Here is my table config:
    Copy code
    {
      "tableName": "my-topic",
      "tableType": "REALTIME",
      "segmentsConfig": {
        "timeColumnName": "timestamp",
        "timeType": "MILLISECONDS",
        "schemaName": "my-topic",
        "replicasPerPartition": "1"
      },
      "tenants": {},
      "tableIndexConfig": {
        "loadMode": "MMAP",
        "streamConfigs": {
          "streamType": "kafka",
          "stream.kafka.consumer.type": "LowLevel",
          "stream.kafka.topic.name": "my-topic",
          "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.inputformat.avro.confluent.KafkaConfluentSchemaRegistryAvroMessageDecoder",
          "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
          "stream.kafka.broker.list": "my-broker:9092",
          "stream.kafka.consumer.prop.auto.offset.reset": "smallest",
          "stream.kafka.consumer.prop.schema.registry.url": "<https://my-registry:8000>",
          "<http://stream.kafka.consumer.prop.basic.auth.user.info|stream.kafka.consumer.prop.basic.auth.user.info>": "<redacted>",
          "security.protocol": "SSL",
          "ssl.truststore.type": "PEM",
          "ssl.truststore.certificates": "<redacted>",
          "ssl.keystore.type": "PEM",
          "ssl.keystore.certificate.chain": "<redacted>",
          "ssl.keystore.key": "<redacted>"
        }
      },
      "metadata": {
        "customConfigs": {}
      }
    }
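To clear the half-created table before retrying, the controller's table delete endpoint can be used (illustrative host):
Copy code
curl -X DELETE "http://localhost:9000/tables/my-topic?type=realtime"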

    John Peter S

    09/06/2022, 3:52 PM
Hi team, I started a Presto server v0.234 locally with a catalog pointing to my local Pinot controller:
Copy code
connector.name=pinot
pinot.controller-urls=localhost:9000
Then I connected to the Presto server through the Presto CLI. I am able to run `show tables` and `describe` the tables, but when I try to get data out, like `select count(*) from table`, it times out with the following error on the Presto server:
    Copy code
    2022-09-06T21:16:18.552+0530	ERROR	remote-task-callback-6	com.facebook.presto.execution.StageExecutionStateMachine	Stage execution 20220906_154613_00010_877h3.1.0 failed
    java.io.UncheckedIOException: java.net.SocketTimeoutException: Connect Timeout
    	at com.facebook.airlift.http.client.ResponseHandlerUtils.propagate(ResponseHandlerUtils.java:21)
    	at com.facebook.airlift.http.client.StringResponseHandler.handleException(StringResponseHandler.java:51)
    	at com.facebook.airlift.http.client.StringResponseHandler.handleException(StringResponseHandler.java:34)
    	at com.facebook.airlift.http.client.jetty.JettyHttpClient.execute(JettyHttpClient.java:512)
    	at com.facebook.presto.pinot.PinotClusterInfoFetcher.doHttpActionWithHeaders(PinotClusterInfoFetcher.java:146)
    	at com.facebook.presto.pinot.PinotBrokerPageSource.lambda$issuePqlAndPopulate$0(PinotBrokerPageSource.java:267)
    	at com.facebook.presto.pinot.PinotUtils.doWithRetries(PinotUtils.java:40)
    	at com.facebook.presto.pinot.PinotBrokerPageSource.issuePqlAndPopulate(PinotBrokerPageSource.java:253)
    	at com.facebook.presto.pinot.PinotBrokerPageSource.getNextPage(PinotBrokerPageSource.java:227)
    	at com.facebook.presto.operator.TableScanOperator.getOutput(TableScanOperator.java:251)
    	at com.facebook.presto.operator.Driver.processInternal(Driver.java:381)
    	at com.facebook.presto.operator.Driver.lambda$processFor$8(Driver.java:283)
    	at com.facebook.presto.operator.Driver.tryWithLock(Driver.java:677)
    	at com.facebook.presto.operator.Driver.processFor(Driver.java:276)
    	at com.facebook.presto.execution.SqlTaskExecution$DriverSplitRunner.processFor(SqlTaskExecution.java:1077)
    	at com.facebook.presto.execution.executor.PrioritizedSplitRunner.process(PrioritizedSplitRunner.java:162)
    	at com.facebook.presto.execution.executor.TaskExecutor$TaskRunner.run(TaskExecutor.java:545)
    	at com.facebook.presto.$gen.Presto_0_234_1f00527____20220906_154419_1.run(Unknown Source)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: java.net.SocketTimeoutException: Connect Timeout
    	at org.eclipse.jetty.io.ManagedSelector$Connect.run(ManagedSelector.java:768)
    	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
    	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    	... 3 more
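One thing worth checking in setups like this: Presto discovers broker/server addresses from the Pinot controller, so if Pinot registered its instances under names Presto cannot resolve, the connection times out. Pinot can be told to register under the hostname (a real Pinot cluster config; whether it is the cause of this particular timeout is an assumption):
Copy code
# in each Pinot component's config, before startup
pinot.set.instance.id.to.hostname=true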

    Tanmesh Mishra

    09/06/2022, 8:26 PM
Would really appreciate it if someone could approve the workflow for this PR.

    Marc Kriguer

    09/07/2022, 8:05 PM
A few related questions about Pinot ingesting data from a Google Cloud bucket. Background: we were able to read in ~3 years of data from our production system, which Pinot stored in 9,800 .avro files (each about 400 MB, so about 4 TB in total). That adds up to 2.5 billion transactions, and we were interested in measuring how the system would perform with 100 billion transactions present. Over this 3-day weekend, I created 39 copies of the .avro files (each with a different multiple of 3 years as a time shift), so that all 39 copies had unique values for the timestamp (there is just one dimension column, that timestamp, for the table in question). Over the weekend, my loop created new (gzipped) Avro files with a shifted timestamp for each record and uploaded the 39 * 9,800 .avro.gz files into the Google Cloud bucket that the minion processes ingest from. We're only seeing a very small number of error messages, and yet very little of the generated data is present: Pinot is only seeing ~4.4 billion transactions (not even one full copy, much less 39 full copies), and it's not clear how to track how far along the ingestion process is (inside Pinot). Is there a way to see a list of still-pending .avro files (i.e. Pinot has seen that a new file was uploaded, but it hasn't started to import the contents yet)? (It may just be that I don't yet know where to look.)
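On tracking progress: the controller's task endpoints can list minion task states for the ingestion task type (a sketch, assuming the file-ingestion task type is SegmentGenerationAndPushTask; exact paths are listed in the controller Swagger UI at :9000/help):
Copy code
curl http://localhost:9000/tasks/SegmentGenerationAndPushTask/taskstates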

    Marc Kriguer

    09/07/2022, 8:24 PM
Q2: The pinot-minion logs contain messages like the following:
    Copy code
2022-09-07T16:48:11.311344994Z 76 START: CallbackHandler 0, INVOKE /pinot-quickstart/INSTANCES/Minion_pinot-minion-8.pinot-minion-headless.pinot-quickstart.svc.cluster.local_9514/MESSAGES listener: org.apache.helix.messaging.handling.HelixTaskExecutor@6c9febc5 type: CALLBACK
2022-09-07T16:48:11.311376624Z CallbackHandler 0 subscribing changes listener to path: /pinot-quickstart/INSTANCES/Minion_pinot-minion-8.pinot-minion-headless.pinot-quickstart.svc.cluster.local_9514/MESSAGES, callback type: CALLBACK, event types: [NodeChildrenChanged], listener: org.apache.helix.messaging.handling.HelixTaskExecutor@6c9febc5, watchChild: false
2022-09-07T16:48:11.311654554Z CallbackHandler 0, Subscribing to path: /pinot-quickstart/INSTANCES/Minion_pinot-minion-8.pinot-minion-headless.pinot-quickstart.svc.cluster.local_9514/MESSAGES took: 0
2022-09-07T16:48:11.311991754Z No Messages to process
2022-09-07T16:48:11.312021174Z 76 END:INVOKE CallbackHandler 0, /pinot-quickstart/INSTANCES/Minion_pinot-minion-8.pinot-minion-headless.pinot-quickstart.svc.cluster.local_9514/MESSAGES listener: org.apache.helix.messaging.handling.HelixTaskExecutor@6c9febc5 type: CALLBACK Took: 0ms
    Does that "No Messages to process" mean that it's not seeing any records in the uploaded file?

    Marc Kriguer

    09/07/2022, 8:25 PM
Q3: If that is the case, could it be because the files were uploaded as .avro.gz (instead of .avro)?

    Marc Kriguer

    09/07/2022, 8:50 PM
    (Or should I just examine older pinot-minion log messages?)

    Mayank

    09/07/2022, 10:54 PM
    @Haitao Zhang can you help with these minion questions ^^

    Marc Kriguer

    09/07/2022, 11:21 PM
It took me a little time to figure out how to "scroll back" to see log messages from over the weekend; they were just info level, no warnings or errors, but otherwise very much like the above: they appeared to skip over the .avro.gz files entirely. And yet (I thought) we configured pinot-minion to accept either .avro or .avro.gz files:
    Copy code
    "ingestionConfig": {
          "batchIngestionConfig": {
            "batchConfigMaps": [
              {
                "inputDirURI": "<gs://pinot-ingestion/transaction>",
                "includeFileNamePattern": "glob:**/*.avro*",
                "excludeFileNamePattern": "glob:**/*.tmp",
                "inputFormat": "avro"
              }
            ],
            "segmentIngestionType": "APPEND",
            "segmentIngestionFrequency": "DAILY"
          }
        },

    Eaugene Thomas

    09/08/2022, 8:32 AM
Was following https://docs.pinot.apache.org/operators/tutorials/configuring-tls-ssl#http-https-multi-ingress-secure-egress, but I'm getting this error:
    Copy code
    > pinot-admin.sh StartController -zkAddress localhost:2181 -controllerPort 9000 -controllerProtocol https
    [0.002s][warning][gc] -Xloggc is deprecated. Will use -Xlog:gc:gc-pinot-controller.log instead.
    WARNING: An illegal reflective access operation has occurred
    WARNING: Illegal reflective access by org.codehaus.groovy.reflection.CachedClass (file:/Users/teaugene/pinot/pinot-tools/target/pinot-tools-pkg/lib/groovy-all-2.4.21.jar) to method java.lang.Object.finalize()
    WARNING: Please consider reporting this to the maintainers of org.codehaus.groovy.reflection.CachedClass
    WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
    WARNING: All illegal access operations will be denied in a future release
    2022/09/08 13:54:17.253 ERROR [PinotAdministrator] [main] Exception caught: 
    picocli.CommandLine$UnmatchedArgumentException: Unknown options: '-controllerProtocol', 'https'
            at picocli.CommandLine$Interpreter.validateConstraints(CommandLine.java:13143) ~[picocli-4.6.1.jar:task ':jar' property 'archiveVersion']
            at picocli.CommandLine$Interpreter.parse(CommandLine.java:13095) ~[picocli-4.6.1.jar:task ':jar' property 'archiveVersion']
            at picocli.CommandLine$Interpreter.processSubcommand(CommandLine.java:13343) ~[picocli-4.6.1.jar:task ':jar' property 'archiveVersion']
            at picocli.CommandLine$Interpreter.processArguments(CommandLine.java:13260) ~[picocli-4.6.1.jar:task ':jar' property 'archiveVersion']
            at picocli.CommandLine$Interpreter.parse(CommandLine.java:13072) ~[picocli-4.6.1.jar:task ':jar' property 'archiveVersion']
            at picocli.CommandLine$Interpreter.parse(CommandLine.java:13041) ~[picocli-4.6.1.jar:task ':jar' property 'archiveVersion']
            at picocli.CommandLine$Interpreter.parse(CommandLine.java:12942) ~[picocli-4.6.1.jar:task ':jar' property 'archiveVersion']
            at picocli.CommandLine.parseArgs(CommandLine.java:1478) ~[picocli-4.6.1.jar:task ':jar' property 'archiveVersion']
            at org.apache.pinot.tools.admin.PinotAdministrator.execute(PinotAdministrator.java:149) [pinot-tools-0.10.10-SNAPSHOT.jar:0.10.10-SNAPSHOT-9e97a5d4cc8ef2c8835825574bece4a7819c975f]
            at org.apache.pinot.tools.admin.PinotAdministrator.main(PinotAdministrator.java:192) [pinot-tools-0.10.10-SNAPSHOT.jar:0.10.10-SNAPSHOT-9e97a5d4cc8ef2c8835825574bece4a7819c975f]
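For what it's worth, StartController has no -controllerProtocol option in this build; on the referenced docs page the protocol is configured through controller properties loaded from a config file (a sketch; the port value is illustrative):
Copy code
pinot-admin.sh StartController -zkAddress localhost:2181 -configFileName controller.conf

# controller.conf
controller.access.protocols=https
controller.access.protocols.https.port=9443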

    Awadesh Kumar

    09/08/2022, 9:43 AM
Hi, I am trying to put the `autopurge` config for ZooKeeper in the pinot-release.yaml file, but it doesn't seem to be working. Even after adding purgeInterval = 1 and snapRetainCount = 5, it's always `autopurge.snapRetainCount=3 autopurge.purgeInterval=0`. Can anyone please help? Below is the .yaml I am using for ZooKeeper:
    Copy code
zookeeper:
  ## If true, install the Zookeeper chart alongside Pinot
  ## ref: <https://github.com/kubernetes/charts/tree/master/incubator/zookeeper>
  enabled: true
  urlOverride: "my-zookeeper:2181/pinot"
  port: 2181
  replicaCount: 3
  autopurge:
    purgeInterval: 1
    snapRetainCount: 5
  env:
    ## The JVM heap size to allocate to Zookeeper
    ZK_HEAP_SIZE: "256M"
    #ZOO_MY_ID: 1
  persistence:
    enabled: true
  image:
    PullPolicy: "IfNotPresent"
  resources:
    requests:
      cpu: 200m
      memory: 256Mi
    limits:
      cpu: 500m
      memory: 1Gi
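A possible explanation (an assumption worth verifying against the chart's values.yaml): some ZooKeeper charts/images read these settings from environment variables rather than an autopurge block, so something like the following may be what actually takes effect:
Copy code
zookeeper:
  env:
    ZK_HEAP_SIZE: "256M"
    ZK_PURGE_INTERVAL: "1"      # hypothetical key; confirm in the chart's config script
    ZK_SNAP_RETAIN_COUNT: "5"   # hypothetical key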

    John Peter S

    09/08/2022, 12:28 PM
Hi, I am facing the following exception while uploading segments to an offline table:
    Copy code
    Exception while uploading segment: Caught exception while updating ideal state for resource: SegmentMembershipTenant1_OFFLINE" while sending request: <http://localhost:9000/v2/segments?tableName=SegmentMembershipTenant1&tableName=SegmentMembershipTenant1&tableType=OFFLINE> to controller: 015461a2f6ba

    Priyank Bagrecha

    09/08/2022, 5:00 PM
    From https://docs.pinot.apache.org/operators/operating-pinot/upgrading-pinot-cluster
    Copy code
    We recommend that you upgrade Pinot components in the following order (if you need to roll back a release, do it in the reverse order).
    Controller
    Broker
    Server
    Minion
We are using the community-provided Helm chart to deploy and upgrade. I noticed that when I tried to upgrade via `helm upgrade pinot pinot/pinot`, it upgraded the controller, broker, server, and minion all at the same time. How do folks guarantee the ordering? Is it recommended to have a separate values YAML per component? We currently use a single file for everything.
    Copy code
    $kubectl get pods -n pinot
    NAME                                 READY   STATUS            RESTARTS   AGE
    pinot-broker-0                       0/2     PodInitializing   0          6s
    pinot-broker-4                       0/2     Terminating       0          6h13m
    pinot-controller-0                   0/2     PodInitializing   0          31s
    pinot-minion-0                       0/2     PodInitializing   0          20s
    pinot-server-0                       0/2     Terminating       0          7h54m
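One way to regain control of the ordering (a sketch, not specific to this chart): set each statefulset's updateStrategy to OnDelete so `helm upgrade` only updates the pod templates, then delete pods yourself in the documented order, waiting for each one to come back:
Copy code
# controller first, then broker, server, minion (reverse order to roll back)
kubectl -n pinot delete pod pinot-controller-0
kubectl -n pinot wait --for=condition=ready pod/pinot-controller-0 --timeout=300s
kubectl -n pinot delete pod pinot-broker-0
kubectl -n pinot wait --for=condition=ready pod/pinot-broker-0 --timeout=300s
# ...repeat for the pinot-server-* and pinot-minion-* pods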

    suraj sheshadri

    09/08/2022, 9:08 PM
We are facing a few issues below and would need some advice:
1. I am seeing this "error": "Failed to find segment: offlinebookingnarrow_poc_OFFLINE_7016 in table: offlinebookingnarrow_poc_OFFLINE" when trying to load offline data into Pinot, for a few segments. Any suggestions on how to fix it, or what might be causing it?
2. When we load large amounts of data into Pinot, the query performance of existing tables degrades badly during the load. Any suggestions on how we can keep query performance steady regardless of what other processes run in the background on the Pinot side?
3. I am using the delete-segments API to delete all segments of a table: curl -X DELETE "http://controllerurl/segments/offlinebookingnarrow_poc?type=OFFLINE&retention=0d" -H "accept: application/json". We are seeing that although it deletes the segment metadata from the table, the underlying data is not deleted from disk, causing the disk to fill up (see the sketch below). @Priyank Bagrecha please add any more details based on your observations. Is there a way to ensure the underlying data also gets deleted?
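On point 3: deleted segments are typically moved to a Deleted_Segments area under the controller's data dir and cleaned up later by the retention manager, whose retention period is a controller config (the key is real; the value here is illustrative):
Copy code
# controller config
controller.deleted.segments.retentionInDays=1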

    John Peter S

    09/09/2022, 6:14 PM
All the broker and server instances are randomly hitting the following issue:
    Copy code
java.net.UnknownHostException: krux-zookeeper
krux-pinot-server2  |  at java.net.InetAddress$CachedAddresses.get(InetAddress.java:797) ~[?:?]
krux-pinot-server2  |  at java.net.InetAddress.getAllByName0(InetAddress.java:1519) ~[?:?]
krux-pinot-server2  |  at java.net.InetAddress.getAllByName(InetAddress.java:1378) ~[?:?]
krux-pinot-server2  |  at java.net.InetAddress.getAllByName(InetAddress.java:1306) ~[?:?]
krux-pinot-server2  |  at org.apache.zookeeper.client.StaticHostProvider$1.getAllByName(StaticHostProvider.java:92) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
krux-pinot-server2  |  at org.apache.zookeeper.client.StaticHostProvider.resolve(StaticHostProvider.java:147) [pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
krux-pinot-server2  |  at org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:375) [pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
krux-pinot-server2  |  at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) [pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
krux-pinot-server2  | 2022/09/09 14:50:43.169 ERROR [StaticHostProvider] [Start a Pinot [SERVER]-SendThread(krux-zookeeper:2181)] Unable to resolve address: krux-zookeeper:2181
krux-pinot-server2  | java.net.UnknownHostException: krux-zookeeper (same stack trace as above)
krux-pinot-server2  | 2022/09/09 14:50:45.049 ERROR [StaticHostProvider] [Start a Pinot [SERVER]-SendThread(krux-zookeeper:2181)] Unable to resolve address: krux-zookeeper:2181
krux-pinot-server2  | java.net.UnknownHostException: krux-zookeeper (same stack trace as above)
krux-pinot-server2  | 2022/09/09 14:50:46.955 ERROR [StaticHostProvider] [Start a Pinot [SERVER]-SendThread(krux-zookeeper:2181)] Unable to resolve address: krux-zookeeper:2181
krux-pinot-server2  | java.net.UnknownHostException: krux-zookeeper: Name or service not known
krux-pinot-server2  |  at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method) ~[?:?]
krux-pinot-server2  |  at java.net.InetAddress$PlatformNameService.lookupAllHostAddr(InetAddress.java:929) ~[?:?]
krux-pinot-server2  |  at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1529) ~[?:?]
krux-pinot-server2  |  at java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:848) ~[?:?]
krux-pinot-server2  |  at java.net.InetAddress.getAllByName0(InetAddress.java:1519) ~[?:?]
krux-pinot-server2  |  at java.net.InetAddress.getAllByName(InetAddress.java:1378) ~[?:?]
krux-pinot-server2  |  at java.net.InetAddress.getAllByName(InetAddress.java:1306) ~[?:?]
After a restart it comes up for some time and then restarts again. I did a ping from the individual instances and the ping works fine.

    John Peter S

    09/12/2022, 8:12 AM
When I use the query engine normally, queries work fine. But when I try the V2 query engine I get the following error:
    Copy code
    [
      {
        "message": "QueryExecutionError:\njava.lang.RuntimeException: Received error query execution result block: {230=ServerTableMissing:\nFailed to find table: SegmentMembership on server: Server_172.20.0.11_8098}\n\tat org.apache.pinot.query.service.QueryDispatcher.reduceMailboxReceive(QueryDispatcher.java:133)\n\tat org.apache.pinot.query.service.QueryDispatcher.submitAndReduce(QueryDispatcher.java:73)\n\tat org.apache.pinot.broker.requesthandler.MultiStageBrokerRequestHandler.handleRequest(MultiStageBrokerRequestHandler.java:156)",
        "errorCode": 200
      }
    ]
But I see the table is in a good state, with its segments present on all the configured servers, including the server mentioned in the log.

    Padma Malladi

    09/12/2022, 4:47 PM
What is the proposed approach for changing the data type of a column in the schema during production runtime?
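For context, the usual flow is to update the schema in place and then reload the table's segments (standard controller endpoints; host, file, and names are illustrative, and only backward-compatible changes are safe on existing segments):
Copy code
curl -X PUT -H "Content-Type: application/json" -d @schema.json \
  http://localhost:9000/schemas/mySchema
curl -X POST http://localhost:9000/segments/myTable/reload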

    Neeraja Sridharan

    09/12/2022, 9:29 PM
Hello team 👋 Is there a recommended approach for table renaming and table swapping in Pinot? _Context_: We created our offline Pinot tables without partition-based segment pruning enabled, and are now on the path to implementing it (Murmur-based partitioning). Trying to identify the optimal migration path from non-partitioned to partitioned tables; see the sketch below. Appreciate any help regarding this 🙇‍♀️
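For reference, the partitioning itself is declared in the table config along these lines (real config keys; the column name and partition count are illustrative):
Copy code
"tableIndexConfig": {
  "segmentPartitionConfig": {
    "columnPartitionMap": {
      "memberId": { "functionName": "Murmur", "numPartitions": 32 }
    }
  }
},
"routing": {
  "segmentPrunerTypes": ["partition"]
}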

    Deena Dhayalan

    09/13/2022, 1:18 PM
I have tried it like this: https://docs.pinot.apache.org/developers/advanced/v2-multi-stage-query-engine, but I get the same error. Can anyone help me fix this?

    Huaqiang He

    09/14/2022, 8:10 AM
Hi team, can I change a column's partialUpsertStrategy, e.g. from IGNORE to OVERWRITE? I made the change but didn't see the effect of OVERWRITE; the result still behaves like IGNORE.
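For reference, the strategy is part of the table config's upsert block (real keys; the column name is illustrative). One caveat worth verifying: table-config changes generally apply only to newly consumed records, so previously merged results may still reflect the old strategy:
Copy code
"upsertConfig": {
  "mode": "PARTIAL",
  "partialUpsertStrategies": {
    "someColumn": "OVERWRITE"
  }
}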

    Awadesh Kumar

    09/14/2022, 9:23 AM
    Hi Team, While executing a query on the query console -
    select * from packages_test limit 10
    , it's giving different results on different execution i.e. 1 record and 10 records alternatively. Could anyone please help with this issue? Thanks
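Alternating results like this often mean replicas are answering differently; comparing the table's ideal state with its external view can confirm whether every replica is healthy (standard controller endpoints; host illustrative):
Copy code
curl http://localhost:9000/tables/packages_test/idealstate
curl http://localhost:9000/tables/packages_test/externalview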

    Jatin Kumar

    09/14/2022, 10:44 AM
Hello, is there any config in Pinot that can be used to allow `__` in a table name? Right now `__` cannot be saved in a table name. What was the motivation for not allowing `__`?
Copy code
Preconditions.checkArgument(!tableName.contains(TABLE_NAME_FORBIDDEN_SUBSTRING),
        "'tableName' cannot contain double underscore ('__')");

    James Kelleher

    09/14/2022, 6:05 PM
    Hello, I’m having some trouble creating a table in an EKS cluster when I use environment variable overrides. Here’s an abridged version of the YAML I’m attempting to apply:
    Copy code
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: test-dmp-with-json-config-map
      namespace: pinot
    data:
      test_dmp_realtime_table_config.json: |-
        {
          ...
            "streamConfigs": {
              ...
              "stream.kafka.broker.list": "${CONFLUENT_BOOTSTRAP_SERVER}",
              "sasl.mechanism": "PLAIN",
              "sasl.jaas.config": "${CONFLUENT_JAAS_CONFIG}",
              "security.protocol": "SASL_SSL",
              ...
            }
          },
          ...
        }
    
      test_dmp_realtime_schema.json: |-
        {
          ...
        }
    ---
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: test-dmp-with-json-job
      namespace: pinot
    spec:
      template:
        spec:
          containers:
            - name: pinot-add-test-dmp-json
              image: apachepinot/pinot:latest
              args: [ "AddTable", "-schemaFile", "/var/pinot/test-dmp-with-json/test_dmp_realtime_schema.json", "-tableConfigFile", "/var/pinot/test-dmp-with-json/test_dmp_realtime_table_config.json", "-controllerHost", "pinot-controller", "-controllerPort", "9000", "-exec" ]
              env:
                - name: JAVA_OPTS
                  value: "-Xms4G -Xmx4G -Dpinot.admin.system.exit=true"
                - name: CONFLUENT_BOOTSTRAP_SERVER
                  valueFrom:
                    secretKeyRef:
                      name: confluent-credentials
                      key: confluent-bootstrap-server-sandbox
                      optional: false
                - name: CONFLUENT_JAAS_CONFIG
                  valueFrom:
                    secretKeyRef:
                      name: confluent-credentials
                      key: confluent-jaas-config-sandbox
                      optional: false
              volumeMounts:
                - name: test-dmp-with-json
                  mountPath: /var/pinot/test-dmp-with-json
          restartPolicy: OnFailure
          volumes:
            - name: test-dmp-with-json
              configMap:
                name: test-dmp-with-json-config-map
      backoffLimit: 100
    The error I’m getting in the created Pod is
    {"code":500,"error":"Unable to apply environment variables on json config class [org.apache.pinot.spi.config.table.TableConfig]."}
I’m confident it has nothing to do with pulling the environment variables from a Secret, since I also tried hardcoding the credentials. Does anyone know what’s going wrong? Am I not putting the environment variables in the correct place?
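A possible cause (an assumption based on the error text): the ${...} substitution is applied by the controller when it parses the table config, so the variables would need to exist in the controller's environment, not just in this Job's pod. In Helm terms, something like the following (the extraEnv key layout is hypothetical; check the chart's values):
Copy code
controller:
  extraEnv:
    - name: CONFLUENT_BOOTSTRAP_SERVER
      valueFrom:
        secretKeyRef:
          name: confluent-credentials
          key: confluent-bootstrap-server-sandbox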

    abhinav wagle

    09/14/2022, 9:57 PM
Hello, any tips on how to use the text-based index through Trino: https://docs.pinot.apache.org/basics/indexing/text-search-support? Do we need to register the TEXT_MATCH function for Trino to understand it?
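One avenue worth trying (assuming the Trino Pinot connector's dynamic-table passthrough, where a quoted Pinot query is used as the table name so TEXT_MATCH is evaluated inside Pinot rather than Trino; catalog, table, and column names are illustrative):
Copy code
SELECT * FROM pinot.default."SELECT text_col FROM myTable WHERE text_match(text_col, 'java') LIMIT 10"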

    Slackbot

    09/15/2022, 6:23 AM
    This message was deleted.