# troubleshooting
  • Lee Wei Hern Jason (01/18/2023, 7:16 AM)
    Hi Team, currently I am getting exceptions when a server tries to download a segment from its peers while testing peer-server downloads. I followed the doc and set
    "peerSegmentDownloadScheme": "http"
    in my table config.
    What I did: I set the replication factor to 2, then deleted one segment from the deep store and from one server. When I reset the segment, it fails to download from a peer. Logs: Solved: servers need to talk to other servers via the admin port for peer download to work
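    For reference, a minimal sketch of the table-config section involved (field names per the Pinot docs; the rest of the table config is omitted):

```json
{
  "segmentsConfig": {
    "replication": "2",
    "peerSegmentDownloadScheme": "http"
  }
}
```

    As noted in the resolution, the scheme only works if servers can reach each other on the admin port.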
  • Shubham Kumar (01/18/2023, 10:22 AM)
    Hi Team, we are exploring filterConfig for filtering records while ingesting data from Kafka. Here is the filterConfig used:
    "filterConfig": {
            "filterFunction": "event_name != 'PL_Basic_Page_Lands' OR source != 'MAS'"
          }
    Schema for the table:
    {
      "schemaName": "dummy_table_schema",
      "dimensionFieldSpecs": [
        {
          "name": "app_name",
          "dataType": "STRING"
        },
        {
          "name": "device_id",
          "dataType": "STRING"
        },
        {
          "name": "advertising_id",
          "dataType": "STRING"
        },
        {
          "name": "manufacturer",
          "dataType": "STRING"
        },
        {
          "name": "os",
          "dataType": "STRING"
        },
        {
          "name": "event_name",
          "dataType": "STRING"
        },
        {
          "name": "source",
          "dataType": "STRING"
        },
        {
          "name": "customer_id",
          "dataType": "STRING"
        },
        {
          "name": "session",
          "dataType": "STRING",
          "defaultNullValue": "default_session"
        },
        {
          "name": "network_type",
          "dataType": "STRING"
        },
        {
          "name": "location_longitude",
          "dataType": "DOUBLE",
          "defaultNullValue": 0
        },
        {
          "name": "location_latitude",
          "dataType": "DOUBLE",
          "defaultNullValue": 0
        }
      ],
      "dateTimeFieldSpecs": [
        {
          "name": "timestamp",
          "dataType": "LONG",
          "format": "1:MILLISECONDS:EPOCH",
          "granularity": "15:MINUTES"
        },
        {
          "name": "event_ts",
          "dataType": "LONG",
          "format": "1:MILLISECONDS:EPOCH",
          "granularity": "1:SECONDS"
        },
        {
          "name": "event_ts_hr",
          "dataType": "INT",
          "format": "EPOCH|HOURS",
          "granularity": "1:HOURS"
        },
        {
          "name": "event_ts_mins",
          "dataType": "LONG",
          "format": "EPOCH|MINUTES",
          "granularity": "1:MINUTES"
        },
        {
          "name": "event_ts_days",
          "dataType": "INT",
          "format": "EPOCH|DAYS",
          "granularity": "1:DAYS"
        },
        {
          "name": "event_date",
          "dataType": "STRING",
          "format": "SIMPLE_DATE_FORMAT|yyyy-MM-dd|IST",
          "granularity": "1:DAYS"
        }
      ]
    }
    We are running into this error:
    Caught exception while transforming the record: {
    "nullValueFields" : [ ],
    "fieldToValueMap" : {
    "app" : null,
    "device_id" : "null",
    "os" : "def_os",
    "session" : "null",
    "location_latitude" : 0.0,
    "advertising_id" : "null",
    "source" : "Litmus",
    "manufacturer" : "def",
    "network" : null,
    "event_ts_mins" : 27900609,
    "app_name" : "def_name",
    "event_ts" : 1674036563440,
    "event_date" : "2023-01-18",
    "event_ts_hr" : 465010,
    "event_name" : "litmus-experiment-event",
    "location" : null,
    "event_ts_days" : 19375,
    "customer_id" : "dce52f80******6c13",
    "network_type" : "def_type",
    "device" : null,
    "user" : {
    "customer_id" : "dce52f80*******6c13"
    },
    "location_longitude" : 0.0,
    "events" : [ {
    "event_name" : "litmus-experiment-event",
    "attributes" : {
    "result" : "true",
    "experimentId" : "6ffd3********cad24",
    "experimentName" : "journey-revamp-experiment"
    },
    "timestamp" : 1674036563440
    } ],
    "timestamp" : null
    }
    }
    java.lang.RuntimeException: Caught exception while executing function: notEquals(event_name,'PL_Basic_Page_Lands')
    at org.apache.pinot.segment.local.function.InbuiltFunctionEvaluator$FunctionExecutionNode.execute(InbuiltFunctionEvaluator.java:230) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    at org.apache.pinot.segment.local.function.InbuiltFunctionEvaluator$OrExecutionNode.execute(InbuiltFunctionEvaluator.java:148) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    at org.apache.pinot.segment.local.function.InbuiltFunctionEvaluator.evaluate(InbuiltFunctionEvaluator.java:105) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    at org.apache.pinot.segment.local.recordtransformer.FilterTransformer.transform(FilterTransformer.java:46) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    at org.apache.pinot.segment.local.recordtransformer.CompositeTransformer.transform(CompositeTransformer.java:83) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    at org.apache.pinot.segment.local.segment.creator.TransformPipeline.processPlainRow(TransformPipeline.java:97) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    at org.apache.pinot.segment.local.segment.creator.TransformPipeline.processRow(TransformPipeline.java:92) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    at org.apache.pinot.core.data.manager.realtime.LLRealtimeSegmentDataManager.processStreamEvents(LLRealtimeSegmentDataManager.java:556) [pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    at org.apache.pinot.core.data.manager.realtime.LLRealtimeSegmentDataManager.consumeLoop(LLRealtimeSegmentDataManager.java:430) [pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    at org.apache.pinot.core.data.manager.realtime.LLRealtimeSegmentDataManager$PartitionConsumer.run(LLRealtimeSegmentDataManager.java:623) [pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    at java.lang.Thread.run(Thread.java:829) [?:?]
    Caused by: java.lang.NumberFormatException: For input string: "litmus-experiment-event"
    at jdk.internal.math.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:2054) ~[?:?]
    at jdk.internal.math.FloatingDecimal.parseDouble(FloatingDecimal.java:110) ~[?:?]
    at java.lang.Double.parseDouble(Double.java:543) ~[?:?]
    at org.apache.pinot.common.utils.PinotDataType$11.toDouble(PinotDataType.java:621) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    at org.apache.pinot.common.utils.PinotDataType$8.convert(PinotDataType.java:477) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    at org.apache.pinot.common.utils.PinotDataType$8.convert(PinotDataType.java:425) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    at org.apache.pinot.common.function.FunctionInvoker.convertTypes(FunctionInvoker.java:110) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    at org.apache.pinot.segment.local.function.InbuiltFunctionEvaluator$FunctionExecutionNode.execute(InbuiltFunctionEvaluator.java:227) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    ... 10 more
    event_name
    is a STRING field, so why is it being parsed to a double, as the server logs suggest?
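    One workaround sometimes suggested for this class of coercion error (not confirmed in this thread) is to express the filter as a Groovy function, which sidesteps the inbuilt evaluator's numeric type conversion; this keeps the same logic and column names as the filterConfig above:

```json
"filterConfig": {
  "filterFunction": "Groovy({event_name != 'PL_Basic_Page_Lands' || source != 'MAS'}, event_name, source)"
}
```

    Records for which the Groovy expression evaluates to true are dropped during ingestion.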
  • Pratik Bhadane (01/18/2023, 10:35 AM)
    Hello Team, how can we write a SUM window function in Pinot? I have tried with the V2 engine but am getting a timeout error message
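    For context, a SUM window aggregation on the multi-stage (V2) engine would typically look like the sketch below (table name, partition column, and `some_metric` are placeholders, not taken from the thread):

```sql
SELECT customer_id,
       event_ts,
       SUM(some_metric) OVER (PARTITION BY customer_id ORDER BY event_ts) AS running_total
FROM myTable
```

    Window functions require the multi-stage query engine to be enabled on the broker/query side.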
  • Hans Brandl (01/18/2023, 1:49 PM)
    Hi everyone, I'm new to Pinot and still learning... I created a table from a Kafka topic. Querying the table works just fine if I do a "SELECT * FROM ...", but if I do an aggregation with "SELECT SUM(field_name) ..." or just "SELECT field_name FROM ..." I get the following error: "There are 11 invalid segment/s. This usually means that they were created with an older schema. Please reload the table in order to refresh these segments to the new schema." Reloading the segments didn't help either. Any ideas? Solved: had to write column names case-sensitively ...😀
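    To illustrate the fix: Pinot column identifiers are case-sensitive, so the query must match the schema's casing exactly (`fieldName` below is a hypothetical schema column):

```sql
-- fails if the schema defines the column as fieldName:
SELECT SUM(fieldname) FROM myTable;
-- works, matching the schema's casing:
SELECT SUM(fieldName) FROM myTable;
```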
  • Ashwin Raja (01/18/2023, 6:27 PM)
    Howdy! We're running Pinot v0.11. I have a 1.5 TB table composed of 2000 segments; 3 of them are failing to be loaded by servers (they're in the ERROR state) with the stacktrace in this thread
  • Ethan Schnitzer (01/18/2023, 6:48 PM)
    Hey all - new to Pinot, trying to run the quickstart with
    ./bin/pinot-admin.sh QuickStart -type batch
    and getting this error message:
    Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make protected final java.lang.Class java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int) throws java.lang.ClassFormatError accessible: module java.base does not "opens java.lang" to unnamed module @1796cf6c
            at java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:354) ~[?:?]
            at java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297) ~[?:?]
            at java.lang.reflect.Method.checkCanSetAccessible(Method.java:199) ~[?:?]
            at java.lang.reflect.Method.setAccessible(Method.java:193) ~[?:?]
            at com.sun.xml.bind.v2.runtime.reflect.opt.Injector$1.run(Injector.java:177) ~[pinot-parquet-0.11.0-shaded.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
            at com.sun.xml.bind.v2.runtime.reflect.opt.Injector$1.run(Injector.java:174) ~[pinot-parquet-0.11.0-shaded.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
            at java.security.AccessController.doPrivileged(AccessController.java:318) ~[?:?]
            at com.sun.xml.bind.v2.runtime.reflect.opt.Injector.<clinit>(Injector.java:172) ~[pinot-parquet-0.11.0-shaded.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
            ... 70 more
    After further digging online/SO, the initial suggestion was that my JDK is out of date. However, I'm running
    java version "17.0.1" 2021-10-19 LTS
    on my M1. Will keep digging online, but if anyone has hit this issue on an M1 please let me know! Thanks. 11:55am MST UPDATE: Looks like Pinot doesn't support JDK 17+, is that the case?
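    For what it's worth, this `InaccessibleObjectException` is the JDK 16+ strong-encapsulation error rather than an out-of-date JDK. Two commonly suggested workarounds (untested here; the JDK path is a placeholder) are:

```shell
# Option 1: run on JDK 11, which Pinot 0.11 supports out of the box
export JAVA_HOME=/path/to/jdk-11   # placeholder path
./bin/pinot-admin.sh QuickStart -type batch

# Option 2: on JDK 17, open java.lang to unnamed modules so reflection works
export JAVA_OPTS="--add-opens=java.base/java.lang=ALL-UNNAMED"
./bin/pinot-admin.sh QuickStart -type batch
```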
  • Lee Wei Hern Jason (01/19/2023, 3:07 AM)
    Hi Team: we saw a surge in error segments (1 replica down and 1 replica online) in our cluster. When we ran the reset API on the table, it caused a lot of segments to go into a bad state (2 replicas down). I checked the local servers and the segments are available on them. Does anyone know the root cause of this? Bad segment e.g.
    transportSurgeMetric__2__314__20230116T1249Z
    Logs when running the reset API for 1 segment:
    [SegmentStatusChecker] [pool-8-thread-4] Segment transportSurgeMetric__2__314__20230116T1249Z of table transportSurgeMetric_REALTIME has no online replicas
    [PinotHelixResourceManager] [grizzly-http-server-1] Skipping reset for segment: transportSurgeMetric__2__314__20230116T1249Z of table: transportSurgeMetric_REALTIME on instance: Server_ip-10-109-237-71.ap-southeast-1.compute.internal_8098
    [PinotHelixResourceManager] [grizzly-http-server-1] Skipping reset for segment: transportSurgeMetric__2__314__20230116T1249Z of table: transportSurgeMetric_REALTIME on instance: Server_ip-10-109-228-95.ap-southeast-1.compute.internal_8098
    pinot-admin.sh[29578]: Failed to find servers hosting segment: transportSurgeMetric__2__314__20230116T1249Z for table: transportSurgeMetric_REALTIME (all ONLINE/CONSUMING instances: [] are disabled, but find enabled OFFLINE instance: Server_ip-10-109-228-95.ap-southeast-1.compute.internal_8098 from OFFLINE instances: [Server_ip-10-109-228-95.ap-southeast-1.compute.internal_8098, Server_ip-10-109-237-71.ap-southeast-1.compute.internal_8098], not counting the segment as unavailable)
    [PinotHelixResourceManager] [grizzly-http-server-1] Reset partitions [transportSurgeMetric__2__314__20230116T1249Z] for resource transportSurgeMetric_REALTIME on instance Server_ip-10-109-237-71.ap-southeast-1.compute.internal_8098 in cluster prd-mimic-pinot.
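    For reference, the per-segment reset call in question is the controller REST endpoint, roughly (controller host/port is a placeholder):

```shell
curl -X POST "http://<controller-host>:9000/segments/transportSurgeMetric_REALTIME/transportSurgeMetric__2__314__20230116T1249Z/reset"
```

    Per the log above, the reset is skipped on instances where the segment has no ONLINE/CONSUMING replica.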
  • Ashwin Raja (01/19/2023, 4:09 AM)
    Howdy! We just upgraded our docker images from 0.11.0 to 0.13.0-SNAPSHOT-21c6532564-20230119, and now all of our minions (which run SegmentGenerationAndPush) are failing with this stack trace:
    Caused by: java.net.URISyntaxException: Expected scheme-specific part at index 3: s3:
    	at java.net.URI$Parser.fail(URI.java:2913) ~[?:?]
    	at java.net.URI$Parser.failExpecting(URI.java:2919) ~[?:?]
    	at java.net.URI$Parser.parse(URI.java:3119) ~[?:?]
    	at java.net.URI.<init>(URI.java:685) ~[?:?]
    	at java.net.URI.<init>(URI.java:786) ~[?:?]
    	at org.apache.pinot.plugin.filesystem.S3PinotFS.getBase(S3PinotFS.java:235) ~[pinot-s3-0.13.0-SNAPSHOT-shaded.jar:0.13.0-SNAPSHOT-21c6532564cb0fea682cf362a4d4eec37ef831e4]
    	at org.apache.pinot.plugin.filesystem.S3PinotFS.normalizeToDirectoryPrefix(S3PinotFS.java:205) ~[pinot-s3-0.13.0-SNAPSHOT-shaded.jar:0.13.0-SNAPSHOT-21c6532564cb0fea682cf362a4d4eec37ef831e4]
    	at org.apache.pinot.plugin.filesystem.S3PinotFS.isDirectory(S3PinotFS.java:565) ~[pinot-s3-0.13.0-SNAPSHOT-shaded.jar:0.13.0-SNAPSHOT-21c6532564cb0fea682cf362a4d4eec37ef831e4]
    	at org.apache.pinot.plugin.filesystem.S3PinotFS.exists(S3PinotFS.java:437) ~[pinot-s3-0.13.0-SNAPSHOT-shaded.jar:0.13.0-SNAPSHOT-21c6532564cb0fea682cf362a4d4eec37ef831e4]
    	at org.apache.pinot.segment.local.utils.SegmentPushUtils.sendSegmentUriAndMetadata(SegmentPushUtils.java:295) ~[pinot-all-0.13.0-SNAPSHOT-jar-with-dependencies.jar:0.13.0-SNAPSHOT-21c6532564cb0fea682cf362a4d4eec37ef831e4]
    	at org.apache.pinot.segment.local.utils.SegmentPushUtils.sendSegmentUriAndMetadata(SegmentPushUtils.java:139) ~[pinot-all-0.13.0-SNAPSHOT-jar-with-dependencies.jar:0.13.0-SNAPSHOT-21c6532564cb0fea682cf362a4d4eec37ef831e4]
    	at org.apache.pinot.plugin.minion.tasks.segmentgenerationandpush.SegmentGenerationAndPushTaskExecutor.pushSegment(SegmentGenerationAndPushTaskExecutor.java:205) ~[pinot-all-0.13.0-SNAPSHOT-jar-with-dependencies.jar:0.13.0-SNAPSHOT-21c6532564cb0fea682cf362a4d4eec37ef831e4]
    	at org.apache.pinot.plugin.minion.tasks.segmentgenerationandpush.SegmentGenerationAndPushTaskExecutor.generateAndPushSegment(SegmentGenerationAndPushTaskExecutor.java:156) ~[pinot-all-0.13.0-SNAPSHOT-jar-with-dependencies.jar:0.13.0-SNAPSHOT-21c6532564cb0fea682cf362a4d4eec37ef831e4]
    	at org.apache.pinot.plugin.minion.tasks.segmentgenerationandpush.SegmentGenerationAndPushTaskExecutor.executeTask(SegmentGenerationAndPushTaskExecutor.java:126) ~[pinot-all-0.13.0-SNAPSHOT-jar-with-dependencies.jar:0.13.0-SNAPSHOT-21c6532564cb0fea682cf362a4d4eec37ef831e4]
    	... 9 more
    more details on our config in thread
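    The `Expected scheme-specific part at index 3: s3:` error suggests the filesystem code received a URI that is literally just `s3:` with no bucket or path. In a SegmentGenerationAndPush task setup that URI typically comes from fields like the ones sketched below (key names per the Pinot batch-ingestion docs; bucket and paths are placeholders, and the actual config is in the thread):

```json
"task": {
  "taskTypeConfigsMap": {
    "SegmentGenerationAndPushTask": {
      "inputDirURI": "s3://my-bucket/input/",
      "inputFormat": "json"
    }
  }
}
```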
  • Hans Brandl (01/19/2023, 9:54 AM)
    Hi out there, I have a Kafka-Pinot configuration from which I get two tables: a fact table and a lookup table. According to the documentation it is only possible to use the lookup table as an offline table. Is there a way to still join two realtime tables, or alternatively to turn a realtime table into an offline table? Thanks in advance
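    On the second question: one documented route for moving realtime data into an offline table is the managed RealtimeToOfflineSegmentsTask, configured on the realtime table's task config roughly like this (values are placeholders; the task also requires minions and a paired OFFLINE table):

```json
"task": {
  "taskTypeConfigsMap": {
    "RealtimeToOfflineSegmentsTask": {
      "bucketTimePeriod": "1d",
      "bufferTimePeriod": "1d"
    }
  }
}
```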
  • Rohit Anilkumar (01/19/2023, 11:26 AM)
    I'm getting this error when I'm starting the controller. Not sure what went wrong. @Xiang Fu
    INFO: [HttpServer] Started.
    11:25:01.818 [grizzly-http-server-1] ERROR org.apache.pinot.controller.api.resources.WebApplicationExceptionMapper - Server error:
    java.lang.NullPointerException: null
            at org.apache.pinot.controller.api.resources.PinotTenantRestletResource.getTablesServedFromTenant(PinotTenantRestletResource.java:249) ~[pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-58c1dc9b7d8e28ac7207fa7565b1ed76f7a0c2ad]
            at org.apache.pinot.controller.api.resources.PinotTenantRestletResource.getTablesOnTenant(PinotTenantRestletResource.java:240) ~[pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-58c1dc9b7d8e28ac7207fa7565b1ed76f7a0c2ad]
            at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
            at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]
            at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
            at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
            at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:52) ~[pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-58c1dc9b7d8e28ac7207fa7565b1ed76f7a0c2ad]
            at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:124) ~[pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-58c1dc9b7d8e28ac7207fa7565b1ed76f7a0c2ad]
            at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:167) ~[pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-58c1dc9b7d8e28ac7207fa7565b1ed76f7a0c2ad]
            at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$TypeOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:219) ~[pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-58c1dc9b7d8e28ac7207fa7565b1ed76f7a0c2ad]
            at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:79) ~[pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-58c1dc9b7d8e28ac7207fa7565b1ed76f7a0c2ad]
            at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:475) ~[pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-58c1dc9b7d8e28ac7207fa7565b1ed76f7a0c2ad]
            at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:397) ~[pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-58c1dc9b7d8e28ac7207fa7565b1ed76f7a0c2ad]
            at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:81) ~[pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-58c1dc9b7d8e28ac7207fa7565b1ed76f7a0c2ad]
            at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:255) [pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-58c1dc9b7d8e28ac7207fa7565b1ed76f7a0c2ad]
            at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248) [pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-58c1dc9b7d8e28ac7207fa7565b1ed76f7a0c2ad]
            at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244) [pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-58c1dc9b7d8e28ac7207fa7565b1ed76f7a0c2ad]
            at org.glassfish.jersey.internal.Errors.process(Errors.java:292) [pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-58c1dc9b7d8e28ac7207fa7565b1ed76f7a0c2ad]
            at org.glassfish.jersey.internal.Errors.process(Errors.java:274) [pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-58c1dc9b7d8e28ac7207fa7565b1ed76f7a0c2ad]
            at org.glassfish.jersey.internal.Errors.process(Errors.java:244) [pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-58c1dc9b7d8e28ac7207fa7565b1ed76f7a0c2ad]
            at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265) [pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-58c1dc9b7d8e28ac7207fa7565b1ed76f7a0c2ad]
            at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:234) [pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-58c1dc9b7d8e28ac7207fa7565b1ed76f7a0c2ad]
            at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:684) [pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-58c1dc9b7d8e28ac7207fa7565b1ed76f7a0c2ad]
            at org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpContainer.service(GrizzlyHttpContainer.java:356) [pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-58c1dc9b7d8e28ac7207fa7565b1ed76f7a0c2ad]
            at org.glassfish.grizzly.http.server.HttpHandler$1.run(HttpHandler.java:200) [pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-58c1dc9b7d8e28ac7207fa7565b1ed76f7a0c2ad]
            at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:569) [pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-58c1dc9b7d8e28ac7207fa7565b1ed76f7a0c2ad]
            at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:549) [pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-58c1dc9b7d8e28ac7207fa7565b1ed76f7a0c2ad]
            at java.lang.Thread.run(Thread.java:829) [?:?]
  • Amit Kumar (01/19/2023, 10:30 PM)
    👋 Hello, team! I am trying to set up a Pinot cluster on GKE, but I am getting the errors below after creating a realtime table. My realtime table also appears in Bad status on the controller dashboard. Can I get some pointers to debug and solve this further?
    2023/01/19 22:20:14.245 INFO [StartServiceManagerCommand] [main] Started Pinot [CONTROLLER] instance [Controller_apache-pinot-controller-0.apache-pinot-controller.platform-goals.svc.protect.dev_9000] at 15.812s since launch
    2023/01/19 22:20:45.592 ERROR [ZNRecordSerializer] [grizzly-http-server-0] Exception during deserialization of bytes: ""
    org.apache.pinot.shaded.com.fasterxml.jackson.databind.exc.InvalidFormatException: Cannot coerce empty String ("") to `org.apache.helix.zookeeper.datamodel.ZNRecord` value (but could if coercion was enabled using `CoercionConfig`)
     at [Source: (ByteArrayInputStream); line: 1, column: 1]
    	at org.apache.pinot.shaded.com.fasterxml.jackson.databind.exc.InvalidFormatException.from(InvalidFormatException.java:67) ~[pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-54046e1547a139a04d62bdf1c365eb6f0166adad]
    	at org.apache.pinot.shaded.com.fasterxml.jackson.databind.DeserializationContext.reportBadCoercion(DeserializationContext.java:1666) ~[pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-54046e1547a139a04d62bdf1c365eb6f0166adad]
    	at org.apache.pinot.shaded.com.fasterxml.jackson.databind.deser.std.StdDeserializer._checkCoercionFail(StdDeserializer.java:1432) ~[pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-54046e1547a139a04d62bdf1c365eb6f0166adad]
    	at org.apache.pinot.shaded.com.fasterxml.jackson.databind.deser.std.StdDeserializer._deserializeFromEmptyString(StdDeserializer.java:325) ~[pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-54046e1547a139a04d62bdf1c365eb6f0166adad]
    	at org.apache.pinot.shaded.com.fasterxml.jackson.databind.deser.std.StdDeserializer._deserializeFromString(StdDeserializer.java:270) ~[pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-54046e1547a139a04d62bdf1c365eb6f0166adad]
    	at org.apache.pinot.shaded.com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeFromString(BeanDeserializerBase.java:1495) ~[pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-54046e1547a139a04d62bdf1c365eb6f0166adad]
    	at org.apache.pinot.shaded.com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeOther(BeanDeserializer.java:207) ~[pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-54046e1547a139a04d62bdf1c365eb6f0166adad]
    	at org.apache.pinot.shaded.com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:197) ~[pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-54046e1547a139a04d62bdf1c365eb6f0166adad]
    	at org.apache.pinot.shaded.com.fasterxml.jackson.databind.deser.DefaultDeserializationContext.readRootValue(DefaultDeserializationContext.java:322) ~[pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-54046e1547a139a04d62bdf1c365eb6f0166adad]
    	at org.apache.pinot.shaded.com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4593) ~[pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-54046e1547a139a04d62bdf1c365eb6f0166adad]
    	at org.apache.pinot.shaded.com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3585) ~[pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-54046e1547a139a04d62bdf1c365eb6f0166adad]
    	at org.apache.helix.zookeeper.datamodel.serializer.ZNRecordSerializer.deserialize(ZNRecordSerializer.java:139) ~[pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-54046e1547a139a04d62bdf1c365eb6f0166adad]
    	at org.apache.helix.zookeeper.datamodel.serializer.ChainedPathZkSerializer.deserialize(ChainedPathZkSerializer.java:93) ~[pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-54046e1547a139a04d62bdf1c365eb6f0166adad]
    	at org.apache.helix.zookeeper.zkclient.ZkClient.deserialize(ZkClient.java:2108) ~[pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-54046e1547a139a04d62bdf1c365eb6f0166adad]
    	at org.apache.helix.zookeeper.zkclient.ZkClient.readData(ZkClient.java:2147) ~[pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-54046e1547a139a04d62bdf1c365eb6f0166adad]
    	at org.apache.helix.zookeeper.zkclient.ZkClient.readData(ZkClient.java:2131) ~[pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-54046e1547a139a04d62bdf1c365eb6f0166adad]
    	at org.apache.helix.manager.zk.ZkBaseDataAccessor.get(ZkBaseDataAccessor.java:495) ~[pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-54046e1547a139a04d62bdf1c365eb6f0166adad]
    	at org.apache.pinot.controller.helix.core.PinotHelixResourceManager.readZKData(PinotHelixResourceManager.java:1635) ~[pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-54046e1547a139a04d62bdf1c365eb6f0166adad]
    	at org.apache.pinot.controller.api.resources.ZookeeperResource.getData(ZookeeperResource.java:108) ~[pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-54046e1547a139a04d62bdf1c365eb6f0166adad]
    	at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
    	at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]
    	at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
    	at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
    	at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:52) ~[pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-54046e1547a139a04d62bdf1c365eb6f0166adad]
    	at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:124) [pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-54046e1547a139a04d62bdf1c365eb6f0166adad]
    	at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:167) [pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-54046e1547a139a04d62bdf1c365eb6f0166adad]
    	at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$TypeOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:219) [pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-54046e1547a139a04d62bdf1c365eb6f0166adad]
    	at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:79) [pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-54046e1547a139a04d62bdf1c365eb6f0166adad]
    	at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:475) [pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-54046e1547a139a04d62bdf1c365eb6f0166adad]
    	at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:397) [pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-54046e1547a139a04d62bdf1c365eb6f0166adad]
    	at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:81) [pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-54046e1547a139a04d62bdf1c365eb6f0166adad]
    	at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:255) [pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-54046e1547a139a04d62bdf1c365eb6f0166adad]
    	at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248) [pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-54046e1547a139a04d62bdf1c365eb6f0166adad]
    	at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244) [pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-54046e1547a139a04d62bdf1c365eb6f0166adad]
    	at org.glassfish.jersey.internal.Errors.process(Errors.java:292) [pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-54046e1547a139a04d62bdf1c365eb6f0166adad]
    	at org.glassfish.jersey.internal.Errors.process(Errors.java:274) [pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-54046e1547a139a04d62bdf1c365eb6f0166adad]
    	at org.glassfish.jersey.internal.Errors.process(Errors.java:244) [pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-54046e1547a139a04d62bdf1c365eb6f0166adad]
    	at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265) [pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-54046e1547a139a04d62bdf1c365eb6f0166adad]
    	at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:234) [pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-54046e1547a139a04d62bdf1c365eb6f0166adad]
    	at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:684) [pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-54046e1547a139a04d62bdf1c365eb6f0166adad]
    	at org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpContainer.service(GrizzlyHttpContainer.java:356) [pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-54046e1547a139a04d62bdf1c365eb6f0166adad]
    	at org.glassfish.grizzly.http.server.HttpHandler$1.run(HttpHandler.java:200) [pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-54046e1547a139a04d62bdf1c365eb6f0166adad]
    	at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:569) [pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-54046e1547a139a04d62bdf1c365eb6f0166adad]
    	at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:549) [pinot-all-0.12.0-SNAPSHOT-jar-with-dependencies.jar:0.12.0-SNAPSHOT-54046e1547a139a04d62bdf1c365eb6f0166adad]
    	at java.lang.Thread.run(Thread.java:829) [?:?]
    Copy code
    2023/01/19 22:24:08.349 ERROR [MessageDispatchStage] [HelixController-pipeline-default-pinot-lite-quickstart-(29014488_DEFAULT)] Event 29014488_DEFAULT : Failed to send message: /pinot-lite-quickstart/INSTANCES/Controller_apache-pinot-controller-0.apache-pinot-controller.platform-goals.svc.protect.dev_9000/MESSAGES/3ec9ec9b-501c-4c49-9494-7888ddc63850
    
    2023/01/19 22:24:41.564 ERROR [CompletionServiceHelper] [grizzly-http-server-0] Server: Server_apache-pinot-server-1.apache-pinot-server.platform-goals.svc.protect.dev_8098 returned error: 404
  • r

    Rohit Anilkumar

    01/20/2023, 4:44 AM
    Hello team, I uploaded 200GB of data and it was working perfectly fine, but I had to reload the data. I deleted all the data and re-ingested it, and now all the segments are in a BAD state. When I look at the segment info, I can't find anything wrong with it. I tried /debug/tables/{tableName} and I can't see any errors there either. Can someone help me out?
  • s

    Shubham Kumar

    01/20/2023, 8:50 AM
    Hi team, we are integrating our production Pinot cluster with Tableau, using a JDBC connection. From Tableau Desktop we are able to connect using this JDBC URL:
    jdbc:pinot://controller-host?brokers=broker-host&scheme=https&useSSL=false
    We then create a workbook with a Pinot table and publish it to the Tableau server. Publishing completes, but when we try to open the workbook on the Tableau server we get this error:
    Copy code
    'org.asynchttpclient.DefaultAsyncHttpClientConfig$Builder org.asynchttpclient.DefaultAsyncHttpClientConfig$Builder.setSslContext(org.apache.pinot.shaded.io.netty.handler.ssl.SslContext)'
    Generic JDBC connection error
    'org.asynchttpclient.DefaultAsyncHttpClientConfig$Builder org.asynchttpclient.DefaultAsyncHttpClientConfig$Builder.setSslContext(org.apache.pinot.shaded.io.netty.handler.ssl.SslContext)'
    Can somebody please help with this issue? Tableau log stack trace:
    Copy code
    ERROR com.tableau.connect.util.GrpcServiceHelper - Failed in constructProtocol.
    java.lang.NoSuchMethodError: 'org.asynchttpclient.DefaultAsyncHttpClientConfig$Builder org.asynchttpclient.DefaultAsyncHttpClientConfig$Builder.setSslContext(org.apache.pinot.shaded.io.netty.handler.ssl.SslContext)'
            at org.apache.pinot.client.JsonAsyncHttpPinotClientTransport.<init>(JsonAsyncHttpPinotClientTransport.java:76) ~[?:?]
            at org.apache.pinot.client.JsonAsyncHttpPinotClientTransportFactory.buildTransport(JsonAsyncHttpPinotClientTransportFactory.java:51) ~[?:?]
            at org.apache.pinot.client.PinotDriver.connect(PinotDriver.java:115) ~[?:?]
            at com.tableausoftware.jdbc.JDBCDriverManager.getConnection(JDBCDriverManager.java:271) ~[jdbcserver.jar:20221.0.26]
            at com.tableausoftware.jdbc.JDBCProtocolImpl.getConnection(JDBCProtocolImpl.java:325) ~[jdbcserver.jar:20221.0.26]
            at com.tableausoftware.jdbc.JDBCProtocolImpl.<init>(JDBCProtocolImpl.java:118) ~[jdbcserver.jar:20221.0.26]
            at com.tableau.connect.service.ProtocolPool.constructProtocol(ProtocolPool.java:48) ~[jdbcserver.jar:20221.0.26]
            at com.tableau.connect.service.ProtocolService.constructProtocol(ProtocolService.java:59) ~[jdbcserver.jar:20221.0.26]
            at com.tableau.connect.grpc.GrpcProtocolService.lambda$constructProtocol$0(GrpcProtocolService.java:63) ~[jdbcserver.jar:20221.0.26]
            at com.tableau.connect.grpc.GrpcProtocolService.wrap(GrpcProtocolService.java:289) ~[jdbcserver.jar:20221.0.26]
            at com.tableau.connect.grpc.GrpcProtocolService.constructProtocol(GrpcProtocolService.java:62) ~[jdbcserver.jar:20221.0.26]
            at com.tableau.connect.generated.ProtocolServiceGrpc$MethodHandlers.invoke(ProtocolServiceGrpc.java:1492) ~[jdbcserver.jar:20221.0.26]
            at io.grpc.stub.ServerCalls$UnaryServerCallHandler$UnaryServerCallListener.onHalfClose(ServerCalls.java:180) ~[jdbcserver.jar:20221.0.26]
            at io.grpc.PartialForwardingServerCallListener.onHalfClose(PartialForwardingServerCallListener.java:35) ~[jdbcserver.jar:20221.0.26]
            at io.grpc.ForwardingServerCallListener.onHalfClose(ForwardingServerCallListener.java:23) ~[jdbcserver.jar:20221.0.26]
            at io.grpc.ForwardingServerCallListener$SimpleForwardingServerCallListener.onHalfClose(ForwardingServerCallListener.java:40) ~[jdbcserver.jar:20221.0.26]
            at io.grpc.Contexts$ContextualizedServerCallListener.onHalfClose(Contexts.java:86) ~[jdbcserver.jar:20221.0.26]
            at io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.halfClosed(ServerCallImpl.java:331) ~[jdbcserver.jar:20221.0.26]
            at io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1HalfClosed.runInContext(ServerImpl.java:814) ~[jdbcserver.jar:20221.0.26]
            at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37) ~[jdbcserver.jar:20221.0.26]
            at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123) ~[jdbcserver.jar:20221.0.26]
            at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) ~[?:?]
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) ~[?:?]
            at java.lang.Thread.run(Unknown Source) [?:?]
  • d

    Dhwanil Ditani

    01/20/2023, 10:27 AM
    Hi Team, Just have a quick question. What are the possible reasons for segments going bad in a realtime table?
  • r

    Rajan Garg

    01/20/2023, 10:57 AM
    I am getting this error after building the Pinot docker image and running it:
    Copy code
    All illegal access operations will be denied in a future release
    2023/01/19 12:25:41.149 INFO [PinotAdministrator] [main] Usage: pinot-admin.sh <subCommand>
    Here is the Dockerfile link: https://github.com/apache/pinot/blob/master/docker/images/pinot/Dockerfile This error is caused by line 66. Can somebody please help me with this?
  • s

    suraj sheshadri

    01/22/2023, 4:56 AM
    Hello, if we want to do a less-than comparison to a number, like in the query below, can you please suggest how to do it? Currently it isn't allowed. If the Pinot table column is a JSON struct array and we need to filter anything less than 112 as below, what is the suggested way? JSON_MATCH is not allowing comparison operators; only the equals symbol is supported, as seen in the snapshot. The person column has JSON array data like: [{"element":{"number":22,"name":"YYY"}},{"element":{"number":122,"name":"XXX"}}]
    Copy code
    SELECT ... 
    FROM mytable 
    WHERE JSON_MATCH(person, '"$[*].element.name"=''XXX'' AND "$[*].element.number" <= 112')
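For clarity, the predicate being expressed here — match any array element whose name is 'XXX' and whose number is at most 112 — looks like this in plain Python over the sample document (an illustration of the intended semantics only, not Pinot syntax):

```python
import json

# Sample document from the question, written as valid JSON
person = json.loads(
    '[{"element": {"number": 22, "name": "YYY"}},'
    ' {"element": {"number": 122, "name": "XXX"}}]'
)

def matches(doc, name, max_number):
    """True if any array element has the given name and number <= max_number."""
    return any(
        e["element"]["name"] == name and e["element"]["number"] <= max_number
        for e in doc
    )

print(matches(person, "XXX", 112))  # False: the XXX element's number is 122
print(matches(person, "YYY", 112))  # True: 22 <= 112
```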
  • r

    robert zych

    01/22/2023, 2:52 PM
    I'm trying to help another user validate whether a consistency issue with upsert he's experiencing in an older version has been resolved in the latest version on master, but in order to reproduce the issue I need to run 2 servers. The problem I'm having is that server-2 isn't consuming from the topic after I increase replicasPerPartition to 2 and produce new records to the topic. I'm running the Controller, Broker, and Servers locally via IntelliJ.
  • t

    Tommaso Peresson

    01/23/2023, 3:23 PM
    Hello, I have a question: is it possible to have a custom mapping for JSON ingestion? I'll try to be more specific. Is it possible to ingest a key named
    a
    from a JSON document into a column named
    b
    of a compatible type?
    Copy code
    from:
    
    {"a": "value"}
    
    to: 
    
    |   b   |
    ---------
    | value |
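One way this rename is commonly handled in Pinot is an ingestion transform in the table config. A sketch under assumptions (the source field `a` arrives via the JSON decoder; a Groovy transform copies it into the schema column `b` — the column names are taken from the question):

```json
{
  "ingestionConfig": {
    "transformConfigs": [
      {
        "columnName": "b",
        "transformFunction": "Groovy({a}, a)"
      }
    ]
  }
}
```

The schema then declares only `b`; the decoder still reads `a` from the payload, and the transform writes its value into `b`.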
  • a

    arun udaiyar

    01/23/2023, 5:47 PM
    Hi All, I need some help. I have deployed the Pinot helm chart on a GKE cluster, but while accessing the GCS bucket I am getting the below error:
    Copy code
    Could not instantiate file system for class org.apache.pinot.plugin.filesystem.GCSPinotFS with scheme gcs
    java.lang.ClassNotFoundException: org.apache.pinot.plugin.filesystem.GCSPinotFS
            at java.net.URLClassLoader.findClass(URLClassLoader.java:476) ~[?:?]
            at java.lang.ClassLoader.loadClass(ClassLoader.java:589) ~[?:?]
            at org.apache.pinot.spi.plugin.PluginClassLoader.loadClass(PluginClassLoader.java:104) ~[pinot-all-0.11.0-jar-with-dependencies.jar:0.11.0-1b4d6b6b0a27422c1552ea1a936ad145056f7033]
    Can someone verify my configs below for the GCS bucket?
    Copy code
    dir: gcs://pinot-workload-bucket/pinot-rokumesh-staging/controller-data
          pinot.controller.storage.factory.class.gcs=org.apache.pinot.plugin.filesystem.GCSPinotFS
          pinot.controller.storage.factory.gcs.region=us-east-1
          pinot.controller.segment.fetcher.protocols=file,http,gcs
          pinot.controller.storage.factory.gcs.disableAcl=false
          pinot.controller.segment.fetcher.gcs.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher
          pinot.server.storage.factory.class.gcs=org.apache.pinot.plugin.filesystem.GCSPinotFS
          pinot.server.storage.factory.gcs.region=us-east-1
          pinot.server.segment.fetcher.protocols=file,http,gcs
          pinot.controller.storage.factory.gcs.disableAcl=false
          pinot.server.segment.fetcher.gcs.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher
          pinot.server.segment.fetcher.gcs.retry.count=10
          pinot.server.instance.segment.store.uri=gcs://pinot-workload-bucket/pinot-rokumesh-staging/controller-data
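A `ClassNotFoundException` for `GCSPinotFS` often points at the plugin jar not being on the plugin classpath rather than at the bucket settings themselves. One thing worth checking (a hedged sketch; the paths shown are the defaults in the official Pinot image) is that the JVM options for the controller and server include the GCS plugin:

```
# JVM options (e.g. via jvmOpts in the helm values); illustrative defaults
-Dplugins.dir=/opt/pinot/plugins
-Dplugins.include=pinot-gcs
```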
  • g

    GerardJ

    01/24/2023, 3:41 PM
    I am trying to ingest data from a Pulsar AVRO-encoded topic into Pinot. Following the example given here (https://dev.startree.ai/docs/pinot/recipes/pulsar), which uses a JSON-encoded topic, I can successfully query data using the Pinot Data Explorer. I changed the example table specification to use AVRO encoding by replacing
    Copy code
    "stream.pulsar.decoder.class.name": "org.apache.pinot.plugin.inputformat.json.JSONMessageDecoder"
    with
    Copy code
    "stream.pulsar.decoder.class.name": "org.apache.pinot.plugin.inputformat.avro.SimpleAvroMessageDecoder"
    I also changed the Python code to use AVRO encoding:
    Copy code
    import pulsar
    import time
    import random
    import uuid
    from pulsar.schema import Record, Integer, String, AvroSchema
    
    class message(Record):
        ts = Integer(required=True)
        uuid = String(required=True)
        count = Integer(required=True)
    
    message_schema = AvroSchema(message)
    
    client = pulsar.Client('pulsar://localhost:6650')
    producer = client.create_producer(topic='eventsavro', schema=message_schema)
    
    while True:
        my_message = message()
        my_message.ts = int(time.time() * 1000.0)
        my_message.uuid = str(uuid.uuid4()).replace("-", "")
        my_message.count = random.randint(0, 1000)
        producer.send(my_message)
    I can confirm that messages are successfully streaming into the AVRO-encoded Pulsar topic. However, when I query for data using the Pinot Data Explorer, no data is retrieved, and I get this error message: "message": "null:\n1 segments unavailable: [eventsavro__0__0__20230124T1351Z]", "errorCode": 305 The Pinot broker log has messages like the following: "No server found for request 1: select * from events limit 10" Any help would be greatly appreciated!
  • a

    austin macciola

    01/24/2023, 8:50 PM
    Hello Everyone 👋 I have an issue with my Pinot instance • We have 1 REALTIME table that I can post the schema for here if needed • It was running fine for a while • Then I think we ran into JVM heap memory limits • We increased them, but since then I can never get the segments for this table to ◦ load ◦ return data ◦ get out of the
    bad
    error state • even if I delete the table and recreate it from scratch. I can provide screenshots if anyone is willing to help me debug
  • p

    Padma Malladi

    01/24/2023, 9:15 PM
    Hi All, we are facing a situation where we are required to use TLS for connections, and hence enabled TLS on the broker. But there seems to be no way of providing the https protocol for the broker IP in the ConnectionFactory API
  • m

    Michael Roman Wengle

    01/25/2023, 7:32 AM
    Pinot API calls for ZK ExternalState are not always up-to-date. Sometimes there is a mismatch between the controller and the ExternalState. Segment information might be missing in the ExternalState, or the state does not match (e.g. 'Online' but the segment is in 'Error' state). The command below shows both segments up, but in reality, the segment is down on 'Server_ip-x-x-x-137.ap-southeast-1.compute.internal_8098':
    Copy code
    $ curl -s -X GET "http://ct.stg-pinot.coban.stg-myteksi.com:9000/tables/abcMetric_REALTIME/idealstate?tableType=REALTIME" -H "accept: application/json" -H "..." | jq | grep -A3 abcMetric_03307_20230116T0833Z
      "abcMetric_03307_20230116T0833Z": {
        "Server_ip-y-y-y-38.ap-southeast-1.compute.internal_8098": "ONLINE",
        "Server_ip-x-x-x-137.ap-southeast-1.compute.internal_8098": "ONLINE"
      }
    Does anyone have an idea of what might cause the discrepancy?
  • g

    GerardJ

    01/25/2023, 3:07 PM
    I am trying to ingest data from a non-persistent Apache Pulsar topic into Pinot. Queries return no data, and the Server log includes messages like this: "Failed getLastMessageId command". If I use the Pulsar API to get the last message ID, I get this reply: "code: 405 reason: GetLastMessageId on a non-persistent topic is not allowed", which suggests that calling GetLastMessageId on a non-persistent topic is not supported. Is it possible to configure Pinot ingestion from a non-persistent Apache Pulsar topic? If so, how? Thanks for your help.
  • e

    Ehsan Irshad

    01/26/2023, 9:05 AM
    Hi Team, do we intentionally delete older versions from downloads.apache.org? We only want to install version
    0.11.0
    but it seems to be gone. Does this mean the version is no longer supported? cc: @Lee Wei Hern Jason
  • d

    Dhar Rawal

    01/26/2023, 8:15 PM
    Does anybody know the maximum allowed length for multi-valued string columns, and how it can be changed? We have a situation where ingesting a large value leads to a buffer out-of-bounds exception in Pinot. The dictionary is indexed properly, but it fails in the forward index: Pinot consumes from Kafka, indexes the row, and fails with an IndexOutOfBoundsException, which makes me think it's probably a bug rather than a config issue. I found a configurable limit for the maximum string value length, but nothing for multi-valued columns.
    Copy code
    java.lang.IndexOutOfBoundsException: null
            at java.nio.Buffer.checkIndex(Buffer.java:693) ~[?:?]
            at java.nio.DirectByteBuffer.putInt(DirectByteBuffer.java:791) ~[?:?]
            at org.apache.pinot.segment.spi.memory.PinotByteBuffer.putInt(PinotByteBuffer.java:148) ~[pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
            at org.apache.pinot.segment.local.io.writer.impl.FixedByteSingleValueMultiColWriter.setInt(FixedByteSingleValueMultiColWriter.java:85) ~[pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
            at org.apache.pinot.segment.local.realtime.impl.forward.FixedByteMVMutableForwardIndex.setDictIdMV(FixedByteMVMutableForwardIndex.java:377) ~[pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
            at org.apache.pinot.segment.local.indexsegment.mutable.MutableSegmentImpl.addNewRow(MutableSegmentImpl.java:774) ~[pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
            at org.apache.pinot.segment.local.indexsegment.mutable.MutableSegmentImpl.index(MutableSegmentImpl.java:513) ~[pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
            at org.apache.pinot.core.data.manager.realtime.LLRealtimeSegmentDataManager.processStreamEvents(LLRealtimeSegmentDataManager.java:581) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
            at org.apache.pinot.core.data.manager.realtime.LLRealtimeSegmentDataManager.consumeLoop(LLRealtimeSegmentDataManager.java:434) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
            at org.apache.pinot.core.data.manager.realtime.LLRealtimeSegmentDataManager$PartitionConsumer.run(LLRealtimeSegmentDataManager.java:629) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
            at java.lang.Thread.run(Thread.java:829) [?:?]
  • j

    Jackie

    01/27/2023, 3:52 AM
    The forward index has a 2GB limit. For an MV column, if the total number of values * numBitsPerValue (after dictionary encoding) is larger than 2GB, it can cause an int overflow
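A back-of-the-envelope check of that limit (plain Python; the size formula follows the description above and is an approximation, not Pinot's exact accounting):

```python
# One dictionary id per value, packed at numBitsPerValue bits, so the
# forward index buffer needs roughly totalValues * numBitsPerValue / 8 bytes.

INT_MAX = 2**31 - 1  # signed 32-bit offsets give the ~2GB ceiling

def mv_forward_index_bytes(total_values: int, num_bits_per_value: int) -> int:
    """Approximate bytes needed to pack total_values dictionary ids."""
    return total_values * num_bits_per_value // 8

def overflows(total_values: int, num_bits_per_value: int) -> bool:
    """True when the packed buffer would exceed a signed-int offset."""
    return mv_forward_index_bytes(total_values, num_bits_per_value) > INT_MAX

print(overflows(10**9, 20))  # True: ~2.5 GB of packed ids
print(overflows(10**6, 20))  # False: ~2.5 MB
```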
  • j

    Jackie

    01/27/2023, 3:52 AM
    A workaround is to reduce the number of docs in the segment
  • e

    Ehsan Irshad

    01/27/2023, 7:27 AM
    Hi Team, is the config below correct? The scalar function should work at ingestion, right? Table config:
    Copy code
    "transformConfigs": [
            {
              "columnName": "ingestTime",
              "transformFunction": "now()"
            },
    Schema
    Copy code
    {
      "name": "ingestTime",
      "dataType": "TIMESTAMP",
      "format": "1:MILLISECONDS:TIMESTAMP",
      "granularity": "1:NANOSECONDS"
    }
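For context on the units involved: `now()` yields the current epoch time in milliseconds, which matches what a `TIMESTAMP` column stores. A quick plain-Python sketch of the round trip (illustrative only, not Pinot code):

```python
import time
from datetime import datetime, timezone

# What now() produces at ingestion time: epoch milliseconds
ingest_time_ms = int(time.time() * 1000)

# Rendering the stored millis back as a UTC datetime
dt = datetime.fromtimestamp(ingest_time_ms / 1000, tz=timezone.utc)
print(dt.isoformat())

# The round trip preserves the value to millisecond precision
assert abs(int(dt.timestamp() * 1000) - ingest_time_ms) <= 1
```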
    ✔️ 1