# troubleshooting
  • Nick Bowles

    02/26/2021, 8:53 PM
    This is during a
    SegmentGenerationAndPushTask
  • Phúc Huỳnh

    03/01/2021, 5:31 AM
    message has been deleted
  • Phúc Huỳnh

    03/01/2021, 5:34 AM
    logs
    Untitled
  • Josh Highley

    03/01/2021, 6:53 PM
    KafkaStreamLevelStreamConfig ("highlevel") copies stream properties 'kafka.consumer.prop.*' to the Kafka consumer properties but KafkaPartitionLevelStreamConfig ("lowlevel") does not. Is there a reason for this?
  • Josh Highley

    03/01/2021, 6:55 PM
    I believe it's the reason I can't connect to a Kafka server that uses SASL_SSL: the needed properties aren't being passed from the lowlevel realtime table config to the KafkaConsumer
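For context, this is roughly what the realtime table's streamConfigs would look like if the lowlevel consumer forwarded `stream.kafka.consumer.prop.*` entries the way the highlevel one does (a sketch; topic, broker, and credential values are hypothetical, and per the message above these props may not actually reach the KafkaConsumer on the lowlevel path):

```json
"streamConfigs": {
  "streamType": "kafka",
  "stream.kafka.consumer.type": "lowlevel",
  "stream.kafka.topic.name": "my-topic",
  "stream.kafka.broker.list": "broker:9093",
  "stream.kafka.consumer.prop.security.protocol": "SASL_SSL",
  "stream.kafka.consumer.prop.sasl.mechanism": "PLAIN",
  "stream.kafka.consumer.prop.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username='user' password='secret';"
}
```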
  • Ming Liang

    03/04/2021, 12:29 AM
    Hey, it seems there is a bug in the most recent Pinot code. This kind of query throws an exception:
  • Ming Liang

    03/04/2021, 12:29 AM
    But it previously worked well:
  • Xiang Fu

    03/04/2021, 12:30 AM
    the previous version of Pinot doesn't check that, so it would always return empty results
  • Alexander Vivas

    03/09/2021, 3:03 PM
    Guys! I wonder if you have already implemented an aggregation function that takes a column and concatenates its values into a single string. My use case: I want to know, per minute, which countries saw our streaming events, but I need the result as a flat string instead of several rows. Is that possible right now? I tried this, but it didn't work:
    CONCAT(DISTINCT(COUNTRY_CODE))
    I also tried this:
    groovy('{"returnType":"STRING","isSingleValue":true}', 'arg0.toList().join(",")', DISTINCT(COUNTRY_CODE))
    But it didn't work either
  • Alexander Vivas

    03/09/2021, 4:02 PM
    It seems
    DISTINCT
    can't be used to pass multiple values into UDFs. Is there any way to do so? For example, in a grouped query: we have grouped all events per minute and now want to do something like that
  • ayush sharma

    03/09/2021, 8:39 PM
    Hi all, I use the helm and kubernetes approach to start Pinot on minikube. I don't know why my pinot-broker takes so much time and multiple restarts before it gets into the READY state. Is there any config that I am missing here? For example,
    $ kubectl -n my-pinot-kube get all
    NAME                     READY   STATUS    RESTARTS   AGE
    pod/pinot-broker-0       0/1     Running   3          6m52s
    pod/pinot-controller-0   1/1     Running   1          6m51s
    pod/pinot-server-0       1/1     Running   1          6m51s
    pod/pinot-zookeeper-0    0/1     Running   2          6m51s
    Here is the log when the pinot-broker crashed and restarted.
    $ kubectl -n my-pinot-kube logs pinot-broker-0 -f
    2021/03/09 20:33:27.866 INFO [HelixBrokerStarter] [Start a Pinot [BROKER]] Starting Pinot broker
    2021/03/09 20:33:27.879 INFO [HelixBrokerStarter] [Start a Pinot [BROKER]] Connecting spectator Helix manager
    2021/03/09 20:34:02.569 INFO [HelixBrokerStarter] [Start a Pinot [BROKER]] Setting up broker request handler
    2021/03/09 20:34:36.312 WARN [ZKHelixManager] [ZkClient-EventThread-14-pinot-zookeeper:2181] KeeperState:Disconnected, SessionId: 100012444580000, instance: Broker_pinot-broker-0.pinot-broker-headless.my-pinot-kube.svc.cluster.local_8099, type: SPECTATOR
  • Daniel Lavoie

    03/09/2021, 8:41 PM
    Broker, server and controller will crash and restart until zookeeper is ready
    👍 1
  • Daniel Lavoie

    03/09/2021, 8:41 PM
    that’s “normal”
    😞 1
  • Xiang Fu

    03/09/2021, 10:29 PM
    One thing we may try is the helm pre-install hook https://helm.sh/docs/topics/charts_hooks/
  • Xiang Fu

    03/09/2021, 10:29 PM
    to split the installation so zk installs first, then the other components
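Another common workaround for the crash-loop described above (a sketch, not from the thread): give each Pinot component an initContainer that blocks until ZooKeeper accepts connections, so the broker/controller/server containers only start once zk is up. The service name `pinot-zookeeper` and image tag are assumptions matching the pod names shown in the `kubectl get all` output.

```yaml
# Sketch: added to the broker/controller/server pod spec so the main
# container starts only after ZooKeeper is reachable.
initContainers:
  - name: wait-for-zookeeper
    image: busybox:1.32
    command:
      - sh
      - -c
      # Poll ZooKeeper's client port until it accepts TCP connections.
      - until nc -z pinot-zookeeper 2181; do echo waiting for zk; sleep 2; done
```

This avoids relying on restart backoff, though the restarts Daniel describes below are harmless.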
  • Alexander Vivas

    03/11/2021, 3:54 PM
    Guys, is it possible to have realtime tables in pinot streaming data from two different kafka clusters at the same time?
  • Alexander Vivas

    03/11/2021, 3:56 PM
    I mean, one table per kafka cluster
  • Alexander Vivas

    03/11/2021, 3:56 PM
    Not consuming from two different sources into the same table
  • Daniel Lavoie

    03/11/2021, 3:56 PM
    Yes, kafka configuration is defined per table
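Concretely, since the stream settings live inside each realtime table config, two tables can point at two different Kafka clusters. A sketch of one table's config, with hypothetical broker and schema-registry addresses; the second table would repeat this shape with the other cluster's `broker.list` and registry URL:

```json
{
  "tableName": "events_cluster_a_REALTIME",
  "tableType": "REALTIME",
  "tableIndexConfig": {
    "streamConfigs": {
      "streamType": "kafka",
      "stream.kafka.consumer.type": "lowlevel",
      "stream.kafka.topic.name": "events",
      "stream.kafka.broker.list": "kafka-a:9092",
      "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.inputformat.avro.confluent.KafkaConfluentSchemaRegistryAvroMessageDecoder",
      "stream.kafka.decoder.prop.schema.registry.rest.url": "http://schema-registry-a:8081"
    }
  }
}
```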
  • Alexander Vivas

    03/11/2021, 3:57 PM
    Does it use a cached schema registry? For some reason our second table is not able to reach the second kafka cluster's schema registry
  • Daniel Lavoie

    03/11/2021, 3:58 PM
    Good question to which I don’t have the answer 😕
  • Daniel Lavoie

    03/11/2021, 4:00 PM
    What error are you observing?
  • ayush sharma

    03/11/2021, 4:02 PM
    I am trying to set up Presto using the starburst-presto docker image and connect it to an existing Pinot cluster. I start Presto with:
    docker run  \
      --network pinot-demo \
      --name=presto-starburst \
      -p 8000:8080 \
      -d starburstdata/presto:350-e.3
    Then I tried adding the following pinot.properties file at each of these locations, one by one:
    /etc/presto/catalog/
    data/presto/etc/catalog/
    /usr/lib/presto/etc/catalog/
    # pinot.properties
    connector.name=pinot
    pinot.controller-urls=pinot-controller:9000
    pinot.controller-rest-service=pinot-controller:9000
    
    pinot.limit-large-for-segment=1
    pinot.allow-multiple-aggregations=true
    pinot.use-date-trunc=true
    pinot.infer-date-type-in-schema=true
    pinot.infer-timestamp-type-in-schema=true
    I get the following errors each time:
    6 errors
    io.airlift.bootstrap.ApplicationConfigurationException: Configuration errors:
    
    1) Error: Configuration property 'pinot.allow-multiple-aggregations' was not used
    
    2) Error: Configuration property 'pinot.controller-rest-service' was not used
    
    3) Error: Configuration property 'pinot.infer-date-type-in-schema' was not used
    
    4) Error: Configuration property 'pinot.infer-timestamp-type-in-schema' was not used
    
    5) Error: Configuration property 'pinot.limit-large-for-segment' was not used
    
    6) Error: Configuration property 'pinot.use-date-trunc' was not used
    
    6 errors
        at io.airlift.bootstrap.Bootstrap.initialize(Bootstrap.java:239)
        at io.prestosql.pinot.PinotConnectorFactory.create(PinotConnectorFactory.java:72)
        at io.prestosql.connector.ConnectorManager.createConnector(ConnectorManager.java:354)
        at io.prestosql.connector.ConnectorManager.createCatalog(ConnectorManager.java:211)
        at io.prestosql.connector.ConnectorManager.createCatalog(ConnectorManager.java:203)
        at io.prestosql.connector.ConnectorManager.createCatalog(ConnectorManager.java:189)
        at io.prestosql.metadata.StaticCatalogStore.loadCatalog(StaticCatalogStore.java:88)
        at io.prestosql.metadata.StaticCatalogStore.loadCatalogs(StaticCatalogStore.java:68)
        at io.prestosql.server.Server.doStart(Server.java:119)
        at io.prestosql.server.Server.lambda$start$0(Server.java:73)
        at io.prestosql.$gen.Presto_350_e_3____20210311_155257_1.run(Unknown Source)
        at io.prestosql.server.Server.start(Server.java:73)
        at com.starburstdata.presto.StarburstPresto.main(StarburstPresto.java:48)
    Any suggestions on what I could be doing wrong?
  • Kishore G

    03/11/2021, 4:05 PM
    @Elon might be able to help you
  • Kishore G

    03/11/2021, 4:05 PM
    Some of those configs are old and probably specific to prestodb, not trino
  • Elon

    03/11/2021, 4:11 PM
    Hi, yes, for information on the starburst pinot connector I would ask in the trino #C011C9JHN7R slack. Those properties are not in the trino pinot connector.
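For reference, a minimal catalog file for the Trino/Starburst Pinot connector keeps only properties that connector recognizes (a sketch; the controller address is taken from the docker setup above, and any extra tuning keys would need to be checked against the connector's own documentation):

```properties
# pinot.properties — minimal Trino/Starburst Pinot catalog
connector.name=pinot
pinot.controller-urls=pinot-controller:9000
```

Removing the unrecognized `pinot.*` keys should clear the "Configuration property ... was not used" startup errors shown above.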
  • Daniel Lavoie

    03/11/2021, 4:17 PM
    Unless the decoder is shared by all tables, each one should have its own cached schema registry client
  • Alexander Vivas

    03/12/2021, 2:00 PM
    Guys, we have an error in our controller instances; we get this message, which we think might be preventing the kafka consumer from working properly:
    Got unexpected instance state map: {Server_mls-pinot-server-1.mls-pinot-server-headless.production.svc.cluster.local_8098=ONLINE, Server_mls-pinot-server-2.mls-pinot-server-headless.production.svc.cluster.local_8098=ONLINE} for segment: dpt_video_event_captured_v2__0__22__20210306T1745Z
    Would you please tell me what can cause this issue?
  • Alexander Vivas

    03/12/2021, 2:11 PM
    This is what follows that log entry
  • Alexander Vivas

    03/12/2021, 2:19 PM
    Ah... It seems I'm in some kind of weird scenario not currently being handled by the validation manager