# troubleshooting

    Dan DC

    03/02/2022, 4:09 PM
    Hello, I've run into an issue where someone scaled up our Pinot servers, left the new nodes in place for a while, and then removed them from the cluster. The API to remove the instances was never invoked, and neither was table rebalancing. Some segments were assigned to the new nodes, and after those nodes were removed the tables went into a bad state. Currently, queries are not reading the segments in the bad state. The table external view only refers to active nodes, but some segments in the ideal state still point at the removed nodes, and the cluster still lists the removed nodes when the instances API is called. I've tried a few things but can't get the tables back into a good state: I've rebalanced the table using different options, I've disabled the removed nodes in Pinot and rebalanced the tables again, and I've rebalanced the servers, but none of these have worked so far. I wonder if someone could let me know the steps to fix the issue.
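    For reference, the controller REST endpoints involved in cleanly retiring nodes can be sketched as a dry run. This is a sketch of the usual sequence (untag, rebalance, drop), not a confirmed fix for this situation; the controller address, instance name, and table name below are hypothetical placeholders.

    ```shell
    # Dry-run sketch: prints the controller REST calls instead of executing them,
    # so they can be reviewed first. Host, instance, and table names are hypothetical.
    CONTROLLER="localhost:9000"
    DEAD_INSTANCE="Server_removed-node_8098"
    TABLE="myTable"

    # 1. Remove all tags from the dead instance so no new segments get assigned to it.
    echo "PUT http://${CONTROLLER}/instances/${DEAD_INSTANCE}/updateTags?tags="
    # 2. Rebalance the table so the ideal state stops referencing the removed node.
    echo "POST http://${CONTROLLER}/tables/${TABLE}/rebalance?type=OFFLINE&reassignInstances=true&downtime=true"
    # 3. Once no ideal state references the instance, drop it from the cluster.
    echo "DELETE http://${CONTROLLER}/instances/${DEAD_INSTANCE}"
    ```

    Dropping an instance typically fails while any ideal state still references it, which is why the rebalance comes before the delete.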

    Somu

    03/02/2022, 4:18 PM
    Hi all. I am facing the below issue while creating a schema or table via the CLI from the pods. I have deployed Pinot on Kubernetes using Helm charts. Kindly help me fix this issue.

    Shivam Sajwan

    03/04/2022, 5:24 AM
    Hi everyone, I am storing 0 as a value in a column, but the Pinot UI shows -- instead of 0. And yet, when I fetch the data using WHERE value = 0, the rows are returned. Any idea on this?

    Tony Requist

    03/05/2022, 1:13 AM
    We are experimenting with ingestion transformations, specifically having a filterConfig. It works as expected, but there seems to be no way to change the filterConfig. Running
    pinot-admin.sh AddTable
    with a changed filterConfig updates the table description shown in the UI, but is not reflected in the data actually ingested; we had to drop and recreate the table, which is workable for testing but not later. Is there a way to get Pinot to actually use the new filterConfig?
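    For context, the filterConfig in question lives under ingestionConfig in the table config; records for which the filter function evaluates to true are skipped at ingestion time. A minimal sketch (the column name millisSinceEpoch and the cutoff value are hypothetical):

    ```json
    {
      "ingestionConfig": {
        "filterConfig": {
          "filterFunction": "Groovy({millisSinceEpoch < 1640995200000}, millisSinceEpoch)"
        }
      }
    }
    ```

    Because the filter runs at ingestion time, already-ingested rows keep whatever the old filter allowed, which matches the behavior described above: updating the config only affects records ingested after the change.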

    Elon

    03/05/2022, 2:05 AM
    If a table is taking a long time to delete (due to server crashes; this is staging), is it safe to delete it from ZK and remove the segment files from disk and deep store? Should we restart the controllers, brokers, and servers that the segment was on?

    Mathieu Druart

    03/05/2022, 9:46 PM
    Hi! We are trying to use a lookup join to get a multi-valued String column from a small dimension table, but we get this exception during the request:
    "message": "QueryExecutionError:\njava.lang.ClassCastException: class [Ljava.lang.Object; cannot be cast to class [Ljava.lang.String; ([Ljava.lang.Object; and [Ljava.lang.String; are in module java.base of loader 'bootstrap')\n\tat org.apache.pinot.core.operator.transform.function.LookupTransformFunction.transformToStringValuesMV(LookupTransformFunction.java:328)\n\tat org.apache.pinot.core.operator.docvalsets.TransformBlockValSet.getStringValuesMV(TransformBlockValSet.java:125)\n\tat org.apache.pinot.core.common.RowBasedBlockValueFetcher.createFetcher(RowBasedBlockValueFetcher.java:81)\n\tat org.apache.pinot.core.common.RowBasedBlockValueFetcher.<init>(RowBasedBlockValueFetcher.java:32)",
    If you have any idea, thank you!
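    For reference, the lookUp transform function takes the dimension table name, the column to fetch, and then join-key name/value pairs. A sketch with hypothetical table and column names:

    ```sql
    -- Hypothetical names: fetch dimTable.labels (the dimension column)
    -- for each fact row, joining on dimTable.id = factTable.dimId.
    SELECT factCol,
           lookUp('dimTable', 'labels', 'id', dimId) AS labels
    FROM factTable
    LIMIT 10
    ```

    The stack trace above fails inside transformToStringValuesMV, so the problem appears specific to multi-valued String dimension columns; a single-valued lookup of the same shape takes a different code path.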

    francoisa

    03/07/2022, 10:02 AM
    Hi, I'm struggling to get a JSON column stored as a STRING so I can apply a JSON index on it. I've tried several things and I keep getting a null value in the column 😞 Schema extract:
    {
      "name": "fulldata",
      "dataType": "STRING",
      "maxLength": 2147483647
    }
    Table config extract:
    {
      "columnName": "fulldata",
      "transformFunction": "JSONFORMAT(meta)"
    }
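    For comparison, the two extracts above would sit in the table config roughly like this, with the JSON index declared separately under tableIndexConfig. This is a sketch assuming the source field in the ingested record is named meta; note the transform's destination column must differ from its source field:

    ```json
    {
      "tableIndexConfig": {
        "jsonIndexColumns": ["fulldata"]
      },
      "ingestionConfig": {
        "transformConfigs": [
          {
            "columnName": "fulldata",
            "transformFunction": "jsonFormat(meta)"
          }
        ]
      }
    }
    ```

    One possible cause of a null fulldata is the source field (meta) being absent from the incoming records: when the transform produces nothing, the column falls back to its default null value.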

    Sevvy Yusuf

    03/07/2022, 3:09 PM
    Hi team 👋 I'm writing a custom Trino connector to read from our Pinot cluster and I'm running into an issue when trying to use
    LASTWITHTIME
    . The first time I run the query (
    SELECT * FROM catalog.schema."SELECT LASTWITHTIME(dataColumnName, timeColumnName, 'STRING') FROM tableName WHERE column = 'someColumnValue' LIMIT 10";
    ) I get the following error:
    java.lang.ExceptionInInitializerError
    	at org.apache.pinot.common.function.TransformFunctionType.getTransformFunctionType(TransformFunctionType.java:106)
    Subsequent runs of the query produce the following error:
    java.lang.NoClassDefFoundError: Could not initialize class org.apache.pinot.common.function.FunctionRegistry
    	at org.apache.pinot.common.function.TransformFunctionType.getTransformFunctionType(TransformFunctionType.java:106)
    Other queries, including dynamic queries, work okay. I am using Pinot version 0.9.3. Has anyone come across this before? Any help would be appreciated, many thanks in advance!

    Kishore G

    03/07/2022, 3:31 PM
    I think the LASTWITHTIME UDF was added recently and is not part of 0.9.3. I might be wrong.

    Dan DC

    03/08/2022, 2:22 PM
    How can I repair a segment's external view? The segment is allocated to 2 servers out of 3 and is offline on both; the file exists in deep storage and the ideal state looks alright. I've tried reset and reload, and I've also restarted the controllers and servers, but the state won't change.

    francoisa

    03/08/2022, 5:06 PM
    Facing another issue on the way to moving realtime segments to offline segments. The Minion throws:
    java.lang.IllegalArgumentException: null
    	at shaded.com.google.common.base.Preconditions.checkArgument(Preconditions.java:108) ~[pinot-all-0.9.3-jar-with-dependencies.jar:0.9.3-e23f213cf0d16b1e9e086174d734a4db868542cb]
    	at org.apache.pinot.segment.spi.creator.name.SimpleSegmentNameGenerator.generateSegmentName(SimpleSegmentNameGenerator.java:53) ~[pinot-all-0.9.3-jar-with-dependencies.jar:0.9.3-e23f213cf0d16b1e9e086174d734a4db868542cb]
    	at org.apache.pinot.segment.local.segment.creator.impl.SegmentIndexCreationDriverImpl.handlePostCreation(SegmentIndexCreationDriverImpl.java:268) ~[pinot-all-0.9.3-jar-with-dependencies.jar:0.9.3-e23f213cf0d16b1e9e086174d734a4db868542cb]
    	at org.apache.pinot.segment.local.segment.creator.impl.SegmentIndexCreationDriverImpl.build(SegmentIndexCreationDriverImpl.java:258) ~[pinot-all-0.9.3-jar-with-dependencies.jar:0.9.3-e23f213cf0d16b1e9e086174d734a4db868542cb]
    	at org.apache.pinot.core.segment.processing.framework.SegmentProcessorFramework.process(SegmentProcessorFramework.java:150) ~[pinot-all-0.9.3-jar-with-dependencies.jar:0.9.3-e23f213cf0d16b1e9e086174d734a4db868542cb]
    	at org.apache.pinot.plugin.minion.tasks.realtimetoofflinesegments.RealtimeToOfflineSegmentsTaskExecutor.convert(RealtimeToOfflineSegmentsTaskExecutor.java:164) ~[pinot-all-0.9.3-jar-with-dependencies.jar:0.9.3-e23f213cf0d16b1e9e086174d734a4db868542cb]
    	at org.apache.pinot.plugin.minion.tasks.BaseMultipleSegmentsConversionExecutor.executeTask(BaseMultipleSegmentsConversionExecutor.java:135) ~[pinot-all-0.9.3-jar-with-dependencies.jar:0.9.3-e23f213cf0d16b1e9e086174d734a4db868542cb]
    	at org.apache.pinot.plugin.minion.tasks.BaseMultipleSegmentsConversionExecutor.executeTask(BaseMultipleSegmentsConversionExecutor.java:58) ~[pinot-all-0.9.3-jar-with-dependencies.jar:0.9.3-e23f213cf0d16b1e9e086174d734a4db868542cb]
    	at org.apache.pinot.minion.taskfactory.TaskFactoryRegistry$1.runInternal(TaskFactoryRegistry.java:111) [pinot-all-0.9.3-jar-with-dependencies.jar:0.9.3-e23f213cf0d16b1e9e086174d734a4db868542cb]
    	at org.apache.pinot.minion.taskfactory.TaskFactoryRegistry$1.run(TaskFactoryRegistry.java:88) [pinot-all-0.9.3-jar-with-dependencies.jar:0.9.3-e23f213cf0d16b1e9e086174d734a4db868542cb]
    	at org.apache.helix.task.TaskRunner.run(TaskRunner.java:71) [pinot-all-0.9.3-jar-with-dependencies.jar:0.9.3-e23f213cf0d16b1e9e086174d734a4db868542cb]
    	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
    	at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
    	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) [?:?]
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
    	at java.lang.Thread.run(Thread.java:831) [?:?]
    Seems to be a SegmentNameGenerator issue, but I can't get it working 😕
    "task": {
      "taskTypeConfigsMap": {
        "RealtimeToOfflineSegmentsTask": {
          "bucketTimePeriod": "6m",
          "bufferTimePeriod": "1h",
          "roundBucketTimePeriod": "10m",
          "mergeType": "concat",
          "maxNumRecordsPerSegment": "1000",
          "schedule": "* 0/10 * * * ?"
        }
      }
    },
    Task config above. FYI, all my rows have the same timestamp value (for testing purposes only).

    Weixiang Sun

    03/08/2022, 8:52 PM
    I made a change to the table configuration for a realtime table, but it does not take effect. Do I need to reload all the segments?
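    For reference: config changes on a realtime table generally apply to new consuming segments, while index-related changes need a reload to be applied to already-committed segments. A dry-run sketch of the reload call; the controller host and table name are hypothetical:

    ```shell
    # Dry-run: build and print the reload request rather than executing it.
    CONTROLLER="localhost:9000"
    TABLE="myTable_REALTIME"
    RELOAD_URL="http://${CONTROLLER}/segments/${TABLE}/reload"
    # To run for real: curl -X POST "$RELOAD_URL"
    echo "POST ${RELOAD_URL}"
    ```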

    Luis Fernandez

    03/08/2022, 9:45 PM
    Hey my friends! I made a change to my ZooKeeper cluster that required the entire cluster (3 nodes) to be restarted. For some reason, after it was restarted for this change (we moved to SSDs), we started getting this:
    2022-03-08 21:36:28,798 [myid:1] - INFO  [NIOWorkerThread-1:ZooKeeperServer@1032] - Refusing session request for client /10.12.36.35:34854 as it has seen zxid 0x100000709 our last zxid is 0x0 client must try another server
    everywhere in the Pinot cluster. The same change was applied to our dev environment without any problem. Do you all know what may have caused this and how we could recover from it?

    Yash Agarwal

    03/09/2022, 11:16 AM
    I have 20 data nodes in my cluster, but all queries are only using 19 of them. All 20 nodes are enabled and have segments assigned. This is happening for all tables; what can I do to troubleshoot?

    Awadesh Kumar

    03/09/2022, 1:00 PM
    Hi all, I deleted all the segments from a Pinot table using the below endpoint:
    http://{base_url}/segments/trips?type=REALTIME&retention=0d
    Now the Pinot table has stopped receiving data from the Kafka topic. We didn't change anything in the table configuration. Any possible reason for this?

    Dan DC

    03/10/2022, 12:58 PM
    Hello, I'm upgrading my cluster from 0.8.0 to 0.9.3. I have hybrid tables with upsert that are incompatible with 0.9.3: there is a constraint that a realtime upsert table cannot have a RealtimeToOfflineSegmentsTask configured. What are the options for migrating? Realtime tables can't be backfilled easily/quickly.

    Shailee Mehta

    03/10/2022, 4:04 PM
    Hello there, I am setting up Apache Pinot for the first time. I have set up the ZooKeeper, broker, controller, and server. Now, when I try to run the AddTable container process using Docker, it returns a null pointer exception. Can someone help me debug this issue? Command:
    docker run \
        --network=pinot-demo_default \
        --name pinot-batch-table-creation \
        -v /home/shailee/projects/DS/data/lineorder_offline.json:/lineorder_offline.json \
        -v /home/shailee/projects/DS/data/lineorder.json:/lineorder.json \
        apachepinot/pinot:latest AddTable \
        -schemaFile /lineorder_offline.json \
        -tableConfigFile  /lineorder.json \
        -controllerHost manual-pinot-controller \
        -controllerPort 9000 \
        -exec
    Error in the controller:
    ERROR [WebApplicationExceptionMapper] [grizzly-http-server-4] Server error: 
    java.lang.NullPointerException: null
    	at java.util.Objects.requireNonNull(Objects.java:221) ~[?:?]
    	at java.util.Optional.<init>(Optional.java:107) ~[?:?]
    	at java.util.Optional.of(Optional.java:120) ~[?:?]
    	at org.apache.pinot.controller.api.access.AccessControlUtils.validatePermission(AccessControlUtils.java:48) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    	at org.apache.pinot.controller.api.resources.PinotSchemaRestletResource.addSchema(PinotSchemaRestletResource.java:194) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    	at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
    	at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]
    	at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
    	at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
    	at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:52) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    	at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:124) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    	at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:167) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    	at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$TypeOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:219) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    	at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:79) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    	at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:469) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    	at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:391) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    	at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:80) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    	at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:253) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    	at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    	at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    	at org.glassfish.jersey.internal.Errors.process(Errors.java:292) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    	at org.glassfish.jersey.internal.Errors.process(Errors.java:274) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    	at org.glassfish.jersey.internal.Errors.process(Errors.java:244) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    	at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    	at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:232) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    	at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:679) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    	at org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpContainer.service(GrizzlyHttpContainer.java:353) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    	at org.glassfish.grizzly.http.server.HttpHandler$1.run(HttpHandler.java:200) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    	at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:569) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    	at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:549) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    	at java.lang.Thread.run(Thread.java:829) [?:?]

    Luis Fernandez

    03/10/2022, 7:50 PM
    Hey friends, we are starting to issue queries in our production cluster and we are seeing the following:
    Failed to find time boundary info for hybrid table
    Does anyone know if this is bad? Is there more info about it?

    Tony Requist

    03/10/2022, 11:55 PM
    I have a table showing BAD because a handful of segments are only on one of two servers. I am trying to "rebalance servers" to fix it, and I see
    "status": "IN_PROGRESS"
    but nothing in the controller logs other than:
    INFO [CustomRebalancer] [HelixController-pipeline-default-pinot-(3cd60663_DEFAULT)] Computing BestPossibleMapping for node_reboot_events_REALTIME
    and:
    WARN [SegmentStatusChecker] [pool-10-thread-4] Table node_reboot_events_REALTIME has 1 replicas, below replication threshold :2
    What status should I expect to see?

    srishti bhargava

    03/11/2022, 6:03 AM
    I am trying to access this page given in the setup information (https://docs.pinot.apache.org/basics/getting-started/advanced-pinot-setup), but it looks like this page no longer exists. Is there another page I can refer to?

    Prashant Pandey

    03/11/2022, 1:33 PM
    Hi team. I am facing a peculiar issue right now with one of our realtime servers. This realtime server consumes from multiple tables, as can be seen from its node data:
    {
      "id": "backend_entity_view_REALTIME",
      "simpleFields": {
        "BUCKET_SIZE": "0",
        "SESSION_ID": "30162be05be0043",
        "STATE_MODEL_DEF": "SegmentOnlineOfflineStateModel",
        "STATE_MODEL_FACTORY_NAME": "DEFAULT"
      },
      "mapFields": {
        "backend_entity_view__6__1770__20220311T1126Z": {
          "CURRENT_STATE": "CONSUMING",
          "END_TIME": "1646997978968",
          "INFO": "",
          "PREVIOUS_STATE": "OFFLINE",
          "START_TIME": "1646997978753",
          "TRIGGERED_BY": "*"
        }
      },
      "listFields": {}
    }
    
    {
      "id": "service_call_view_REALTIME",
      "simpleFields": {
        "BUCKET_SIZE": "0",
        "SESSION_ID": "30162be05be0043",
        "STATE_MODEL_DEF": "SegmentOnlineOfflineStateModel",
        "STATE_MODEL_FACTORY_NAME": "DEFAULT"
      },
      "mapFields": {
        "service_call_view__4__1268__20220311T1227Z": {
          "CURRENT_STATE": "ONLINE",
          "END_TIME": "1647004152026",
          "INFO": "",
          "PREVIOUS_STATE": "CONSUMING",
          "START_TIME": "1647004133127",
          "TRIGGERED_BY": "*"
        },
        "service_call_view__4__1269__20220311T1308Z": {
          "CURRENT_STATE": "CONSUMING",
          "END_TIME": "1647004133319",
          "INFO": "",
          "PREVIOUS_STATE": "OFFLINE",
          "START_TIME": "1647004133127",
          "TRIGGERED_BY": "*"
        }
      },
      "listFields": {}
    }
    
    {
      "id": "span_event_view_1_REALTIME",
      "simpleFields": {
        "BUCKET_SIZE": "0",
        "SESSION_ID": "30162be05be0043",
        "STATE_MODEL_DEF": "SegmentOnlineOfflineStateModel",
        "STATE_MODEL_FACTORY_NAME": "DEFAULT"
      },
      "mapFields": {
        "span_event_view_1__1__9751__20220309T1444Z": {
          "CURRENT_STATE": "OFFLINE",
          "END_TIME": "1646837055817",
          "INFO": "",
          "PREVIOUS_STATE": "CONSUMING",
          "START_TIME": "1646837055782",
          "TRIGGERED_BY": "*"
        },
        "span_event_view_1__1__9865__20220311T1302Z": {
          "CURRENT_STATE": "ONLINE",
          "END_TIME": "1647004903102",
          "INFO": "",
          "PREVIOUS_STATE": "CONSUMING",
          "START_TIME": "1647004896155",
          "TRIGGERED_BY": "*"
        },
        "span_event_view_1__13__9635__20220311T1303Z": {
          "CURRENT_STATE": "CONSUMING",
          "END_TIME": "1647003820644",
          "INFO": "",
          "PREVIOUS_STATE": "OFFLINE",
          "START_TIME": "1647003820427",
          "TRIGGERED_BY": "*"
        },
        "span_event_view_1__1__9866__20220311T1321Z": {
          "CURRENT_STATE": "CONSUMING",
          "END_TIME": "1647004896393",
          "INFO": "",
          "PREVIOUS_STATE": "OFFLINE",
          "START_TIME": "1647004896155",
          "TRIGGERED_BY": "*"
        }
      },
      "listFields": {}
    }
    The server is consuming from all partitions of
    span_event_view_1_REALTIME
    just fine, but the lag in just this partition (partition 6) of
    backend_entity_view_REALTIME
    is continually increasing. I checked the controller logs and see a whole lot of:
    2022/03/11 13:20:53.141 WARN [ConsumerConfig] [grizzly-http-server-0] The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
    2022/03/11 13:20:53.397 WARN [TopStateHandoffReportStage] [HelixController-pipeline-default-pinot-prod-(0ff7d49b_DEFAULT)] Event 0ff7d49b_DEFAULT : Cannot confirm top state missing start time. Use the current system time as the start time.
    2022/03/11 13:21:36.012 WARN [TopStateHandoffReportStage] [HelixController-pipeline-default-pinot-prod-(d95795b6_DEFAULT)] Event d95795b6_DEFAULT : Cannot confirm top state missing start time. Use the current system time as the start time.
    2022/03/11 13:21:59.914 WARN [ZkBaseDataAccessor] [HelixController-pipeline-default-pinot-prod-(6831a128_DEFAULT)] Fail to read record for paths: {/pinot-prod/INSTANCES/Broker_broker-1.broker-headless.pinot.svc.cluster.local_8099/MESSAGES/b43eba4e-58fb-4fae-bb8e-72bcd97ed0ea=-101}
    2022/03/11 13:22:00.411 WARN [TopStateHandoffReportStage] [HelixController-pipeline-default-pinot-prod-(ed7c9add_DEFAULT)] Event ed7c9add_DEFAULT : Cannot confirm top state missing start time. Use the current system time as the start time.
    2022/03/11 13:22:00.899 WARN [TaskGarbageCollectionStage] [TaskJobPurgeWorker-pinot-prod] ResourceControllerDataProvider or HelixManager is null for event 7ed61473_TASK(CurrentStateChange) in cluster pinot-prod. Skip TaskGarbageCollectionStage.
    2022/03/11 13:22:12.082 ERROR [ZkBaseDataAccessor] [grizzly-http-server-2] paths is null or empty
    2022/03/11 13:23:09.694 WARN [ZkBaseDataAccessor] [HelixController-pipeline-default-pinot-prod-(6e6a3d4e_DEFAULT)] Fail to read record for paths: {/pinot-prod/INSTANCES/Broker_broker-1.broker-headless.pinot.svc.cluster.local_8099/MESSAGES/3f8e92c9-7eb4-49ca-91d1-caa9e868e071=-101}
    2022/03/11 13:23:09.694 WARN [ZkBaseDataAccessor] [HelixController-pipeline-task-pinot-prod-(6e6a3d4e_TASK)] Fail to read record for paths: {/pinot-prod/INSTANCES/Broker_broker-1.broker-headless.pinot.svc.cluster.local_8099/MESSAGES/3f8e92c9-7eb4-49ca-91d1-caa9e868e071=-101}
    2022/03/11 13:23:10.248 WARN [TopStateHandoffReportStage] [HelixController-pipeline-default-pinot-prod-(9f5bd9ba_DEFAULT)] Event 9f5bd9ba_DEFAULT : Cannot confirm top state missing start time. Use the current system time as the start time.
    2022/03/11 13:23:20.224 WARN [ZkBaseDataAccessor] [HelixController-pipeline-task-pinot-prod-(28af3294_TASK)] Fail to read record for paths: {/pinot-prod/INSTANCES/Broker_broker-1.broker-headless.pinot.svc.cluster.local_8099/MESSAGES/0ebed9b5-51b6-41fe-b5df-a8d69c1b717b=-101}
    2022/03/11 13:23:20.224 WARN [ZkBaseDataAccessor] [HelixController-pipeline-default-pinot-prod-(28af3294_DEFAULT)] Fail to read record for paths: {/pinot-prod/INSTANCES/Broker_broker-1.broker-headless.pinot.svc.cluster.local_8099/MESSAGES/0ebed9b5-51b6-41fe-b5df-a8d69c1b717b=-101}
    2022/03/11 13:23:20.905 WARN [TopStateHandoffReportStage] [HelixController-pipeline-default-pinot-prod-(b4f04a02_DEFAULT)] Event b4f04a02_DEFAULT : Cannot confirm top state missing start time. Use the current system time as the start time.
    2022/03/11 13:25:05.373 WARN [ZkBaseDataAccessor] [HelixController-pipeline-default-pinot-prod-(364d2339_DEFAULT)] Fail to read record for paths: {/pinot-prod/INSTANCES/Broker_broker-0.broker-headless.pinot.svc.cluster.local_8099/MESSAGES/aae2fb99-0322-4568-b410-88fddd305fd9=-101}
    2022/03/11 13:25:05.373 WARN [ZkBaseDataAccessor] [HelixController-pipeline-task-pinot-prod-(364d2339_TASK)] Fail to read record for paths: {/pinot-prod/INSTANCES/Broker_broker-0.broker-headless.pinot.svc.cluster.local_8099/MESSAGES/aae2fb99-0322-4568-b410-88fddd305fd9=-101}
    The server has only one conspicuous warning:
    2022/03/11 12:57:12.755 ERROR [ServerSegmentCompletionProtocolHandler] [span_event_view_1__1__9864__20220311T1243Z] Could not send request <http://controller-0.controller-headless.pinot.svc.cluster.local:9000/segmentConsumed?reason=rowLimit&streamPartitionMsgOffset=28119323382&instance=Server_server-realtime-10.server-realtime-headless.pinot.svc.cluster.local_8098&offset=-1&name=span_event_view_1__1__9864__20220311T1243Z&rowCount=9138843&memoryUsedBytes=3188798201>
    java.net.SocketTimeoutException: Read timed out
    	at java.net.SocketInputStream.socketRead0(Native Method) ~[?:?]
    	at java.net.SocketInputStream.socketRead(SocketInputStream.java:115) ~[?:?]
    	at java.net.SocketInputStream.read(SocketInputStream.java:168) ~[?:?]
    	at java.net.SocketInputStream.read(SocketInputStream.java:140) ~[?:?]
    	at shaded.org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137) ~[pinot-all-0.9.1-jar-with-dependencies.jar:0.9.1-f8ec6f6f8eead03488d3f4d0b9501fc3c4232961]
    	at shaded.org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:153) ~[pinot-all-0.9.1-jar-with-dependencies.jar:0.9.1-f8ec6f6f8eead03488d3f4d0b9501fc3c4232961]
    	at shaded.org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:282) ~[pinot-all-0.9.1-jar-with-dependencies.jar:0.9.1-f8ec6f6f8eead03488d3f4d0b9501fc3c4232961]
    	at shaded.org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:138) ~[pinot-all-0.9.1-jar-with-dependencies.jar:0.9.1-f8ec6f6f8eead03488d3f4d0b9501fc3c4232961]
    	at shaded.org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56) ~[pinot-all-0.9.1-jar-with-dependencies.jar:0.9.1-f8ec6f6f8eead03488d3f4d0b9501fc3c4232961]
    	at shaded.org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259) ~[pinot-all-0.9.1-jar-with-dependencies.jar:0.9.1-f8ec6f6f8eead03488d3f4d0b9501fc3c4232961]
    	at shaded.org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163) ~[pinot-all-0.9.1-jar-with-dependencies.jar:0.9.1-f8ec6f6f8eead03488d3f4d0b9501fc3c4232961]
    	at shaded.org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:157) ~[pinot-all-0.9.1-jar-with-dependencies.jar:0.9.1-f8ec6f6f8eead03488d3f4d0b9501fc3c4232961]
    	at shaded.org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273) ~[pinot-all-0.9.1-jar-with-dependencies.jar:0.9.1-f8ec6f6f8eead03488d3f4d0b9501fc3c4232961]
    	at shaded.org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125) ~[pinot-all-0.9.1-jar-with-dependencies.jar:0.9.1-
    	at java.lang.Thread.run(Thread.java:829) [?:?]
    2022/03/11 12:57:12.756 ERROR [LLRealtimeSegmentDataManager_span_event_view_1__1__9864__20220311T1243Z] [span_event_view_1__1__9864__20220311T1243Z] Holding after response from Controller: {"offset":-1,"streamPartitionMsgOffset":null,"buildTimeSec":-1,"isSplitCommitType":false,"status":"NOT_SENT"}
    The server’s resource usage is well under limits. Any idea what might be going on here?

    Mayank

    03/11/2022, 2:41 PM
    The last warning is for partition 1, so likely not related to partition 6 lag

    Aaron Weiss

    03/11/2022, 3:59 PM
    Hey, just started playing with Trino as I need to do subqueries/joins on Pinot tables. I'm having trouble with array fields (singleValueField: false) in Pinot when querying through Trino. From reading through the connector documentation, it seems to support arrays. Here is my Pinot query that works (service is a String array field):
    select service, count(*) from immutable_unified_events group by service limit 10
    I've tried this query in Trino using both basic and passthrough syntax, but get the same error either way:
    class java.lang.String cannot be cast to class java.util.List
    Trino standard query:
    select service, count(*) from pinot.default.immutable_unified_events group by service limit 10;
    Trino passthrough query:
    select * from pinot.default."select service, count(*) from immutable_unified_events group by service limit 10";

    Weixiang Sun

    03/11/2022, 8:43 PM
    Does anyone see the following exception when querying against an upsert table? I do not see the problem with an offline table.
    Caused by: java.lang.IllegalArgumentException: The datetime zone id 'America/Los_Angeles' is not recognised
            at org.joda.time.DateTimeZone.forID(DateTimeZone.java:247) ~[pinot-all-0.8.0-jar-with-dependencies.jar:0.8.0-d53965c35d75bff2fbe92706129cac9ca563aac3]
            at org.apache.pinot.common.function.scalar.DateTimeFunctions.year(DateTimeFunctions.java:335) ~[pinot-all-0.8.0-jar-with-dependencies.jar:0.8.0-d53965c35d75bff2fbe92706129cac9ca563aac3]
            at jdk.internal.reflect.GeneratedMethodAccessor1636.invoke(Unknown Source) ~[?:?]
            at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
            at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
            at org.apache.pinot.common.function.FunctionInvoker.invoke(FunctionInvoker.java:128) ~[pinot-all-0.8.0-jar-with-dependencies.jar:0.8.0-d53965c35d75bff2fbe92706129cac9ca563aac3]
            ... 19 more

    srishti bhargava

    03/12/2022, 8:27 PM
    I am not able to access the Pinot Console

    Diana Arnos

    03/14/2022, 9:55 AM
    Hi there! Does anyone know what the exception
    java.lang.RuntimeException: Caught exception while running CombinePlanNode.
    at org.apache.pinot.core.plan.CombinePlanNode.run(CombinePlanNode.java:146)
    means? Full exception in the thread.

    Bordin Suwannatri

    03/15/2022, 3:34 PM
    Hi, can anyone show me an example config for a realtime table with Kafka SASL_SSL?
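    Not an authoritative answer, but the usual pattern is to put the Kafka client security properties directly into the table's streamConfigs alongside the standard consumer settings. A sketch; the broker address, topic name, and credentials are placeholders:

    ```json
    "streamConfigs": {
      "streamType": "kafka",
      "stream.kafka.topic.name": "myTopic",
      "stream.kafka.broker.list": "kafka-broker:9093",
      "stream.kafka.consumer.type": "lowlevel",
      "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
      "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder",
      "security.protocol": "SASL_SSL",
      "sasl.mechanism": "PLAIN",
      "sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"user\" password=\"secret\";"
    }
    ```

    The SASL mechanism and JAAS login module depend on how the Kafka cluster is secured (e.g. SCRAM instead of PLAIN), so those two lines should be adapted accordingly.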

    francoisa

    03/16/2022, 2:01 PM
    Hi. I've found an interesting thing, but I'm not sure it's good practice 😕 I'm still in a POC and my Kafka crashed, so my new message offsets were reset to 0, but Pinot kept looking for the older offsets. I searched and found that it is not possible to reset an offset on the Pinot side. I tried several things unsuccessfully before doing the following:
    1. Disable the realtime table
    2. Find the consuming segments in ZooKeeper
    3. Set the offset values to 0 for the segments from step 2
    4. Enable the table
    Consumption resumed as I expected, without data loss and with only a small downtime. Is that good practice, or at least a workaround until new development around this topic, as read in the design docs? 😉

    Stuart Millholland

    03/16/2022, 4:53 PM
    So I'm playing with the Dynamic Tables section of the Trino Pinot connector here. I'm finding that in order to filter on a column, it has to be in the SELECT portion within the double quotes. I'm wondering if the documentation needs to be adjusted there, or I could also be doing something wrong.

    Stuart Millholland

    03/16/2022, 5:03 PM
    Another question on the Trino Pinot connector. We have a schema definition, and here are a couple of sample fields: