# troubleshooting

    Pratik Bhadane

    06/17/2025, 12:42 PM
    Hello Team, we are currently using Apache Pinot on AWS EKS and are in the process of deploying a multi-tenant setup. As part of this, we've added 2 servers and 2 brokers and tagged them appropriately to reflect a new tenant. We were able to successfully:
    1. Create a Pinot table assigned to the new tenant
    2. See all table segments in GOOD status
    3. View the new tenant's brokers and servers correctly listed in the Pinot Web UI after tagging the tenant
    However, we're encountering an issue while querying the table. The query fails with the following error:
    {"requestId":"33806233000000000","brokerId":"Broker_pinot-sr-broker-0.pinot-sr-broker.pinot.svc.cluster.local_8099","exceptions":[{"errorCode":410,"message":"BrokerResourceMissingError"}],"numServersQueried":0,"numServersResponded":0,"numSegmentsQueried":0,"numSegmentsProcessed":0,"numSegmentsMatched":0,"numConsumingSegmentsQueried":0,"numConsumingSegmentsProcessed":0,"numConsumingSegmentsMatched":0,"numDocsScanned":0,"numEntriesScannedInFilter":0...}
    In the Controller UI we are getting the below error message:
    Error Code: 450
    InternalError: java.net.UnknownHostException: pinot-sr-broker-1.pinot-sr-broker.pinot.svc.cluster.local
    at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:229)
    at java.base/java.net.Socket.connect(Socket.java:609)
    at java.base/sun.net.NetworkClient.doConnect(NetworkClient.java:182)
    Attaching the deployment files used:
    deployment-broker-sr.yaml, deployment-server-sr.yaml
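    A hedged aside on error 410 (BrokerResourceMissingError): it generally means the table is not mapped to any broker in the brokerResource, which can happen when the broker resource is not rebuilt after re-tagging. A minimal check/repair sketch against the controller REST API (controller host, tenant name and table name below are placeholders):
    # List the instances currently tagged for the tenant (placeholder tenant "NewTenant")
    curl -s "http://pinot-controller:9000/tenants/NewTenant"
    # Rebuild the broker resource for the table from the current Helix tags (placeholder table "myTable")
    curl -s -X POST "http://pinot-controller:9000/tables/myTable/rebuildBrokerResourceFromHelixTags"
    Separately, the UnknownHostException for pinot-sr-broker-1 indicates that hostname is not resolvable from the controller, which is worth checking in the attached broker deployment/headless service.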

    Rajat

    06/19/2025, 8:05 AM
    Hi team, is the size shown here the size of each replica, or the actual size of the data?

    francoisa

    06/23/2025, 9:19 AM
    Hi 😉 I'm still on an old Pinot version, 0.12 (migration is planned, but I need a bit more robustness first). The first thing I'm looking at is S3 as deep store; I've followed the doc here: https://docs.pinot.apache.org/release-0.12.0/users/tutorials/use-s3-as-deep-store-for-pinot From Swagger the controller seems able to download the segment when I hit the download API, but on the server side I see lots of:
    Failed to download segment absencesreport__1__5__20240527T2122Z from deep store: Download segment absencesreport__1__5__20240527T2122Z from deepstore uri s3://bucketName/segments/absencesreport_REALTIME/absencesreport__1__5__20240527T2122Z failed. Caught exception in state transition from OFFLINE -> ONLINE for resource: absencesreport_REALTIME, partition: absencesreport__1__5__20240527T2122Z
    Any ideas? I'm getting logs like:
    software.amazon.awssdk.services.s3.model.S3Exception: The authorization header is malformed; the region is wrong; expecting 'eu-west-1'. (Service: S3, Status Code: 400, Request ID: 184BA1F70B628FA6, Extended Request ID: 82b9e6b1548ad0837abe6ff674d1d3e982a2038442a1059f595d95962627f827)
    Here is my server conf for the S3 part:
    # Pinot Server Data Directory
    pinot.server.instance.dataDir=/var/lib/pinot_data/server/index
    # Pinot Server Temporary Segment Tar Directory
    pinot.server.instance.segmentTarDir=/var/lib/pinot_data/server/segmentTar
    #S3
    pinot.server.storage.factory.class.s3=org.apache.pinot.plugin.filesystem.S3PinotFS
    pinot.server.storage.factory.s3.region=us-west-1
    pinot.server.segment.fetcher.protocols=file,http,s3
    pinot.server.storage.factory.s3.bucket.name=bucketName
    pinot.server.storage.factory.s3.endpoint=URL_OF_MY_S3_ENDOINT
    pinot.server.segment.fetcher.s3.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher
    pinot.server.segment.fetcher.s3.pathStyleAccess=true
    Any ideas welcome 🙂
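    A hedged observation on the config above: the S3Exception says the expected region is 'eu-west-1', while the server conf sets us-west-1. Assuming the bucket really lives in eu-west-1, the region property would need to match, e.g.:
    # assumption: the bucket is in eu-west-1, as the S3Exception suggests
    pinot.server.storage.factory.s3.region=eu-west-1
    # if the controller accesses the same bucket, its region property should match as well
    pinot.controller.storage.factory.s3.region=eu-west-1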

    Kiril Kalchev

    06/23/2025, 11:37 AM
    I have a highly aggregated real-time table that I'm using to query and chart statistics. Although I've added around 10 billion events, they're aggregated (upserted) into about 500,000 rows. Despite this, the table currently takes up around 200 GB of storage. However, if I export the entire table using
    SELECT * FROM table
    and then re-import it using a simple tool, the size drops to just 15 MB. I only need the aggregated data — I don’t need per-event details. Is there a way to merge the old segments and significantly reduce table size and improve query speed using Pinot tasks?
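    For reference, the Pinot task usually pointed at for this is MergeRollupTask; a rough sketch of a rollup-type config (the column name is a placeholder, and whether the task supports an upsert-enabled realtime table should be verified for your Pinot version):
    "taskTypeConfigsMap": {
      "MergeRollupTask": {
        "1day.mergeType": "rollup",
        "1day.bucketTimePeriod": "1d",
        "1day.bufferTimePeriod": "1d",
        "someMetricColumn.aggregationType": "sum"
      }
    }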

    Yeshwanth

    06/24/2025, 9:13 AM
    Hi Team, I'm on Pinot 1.3 and trying out multi-topic ingestion into a single Pinot table. I've configured my table as shown below:
    "streamIngestionConfig": {
            "streamConfigMaps": [
              {
                "streamType": "kafka",
                "stream.kafka.topic.name": "flattened_spans2",
                "stream.kafka.broker.list": "kafka:9092",
                "stream.kafka.consumer.type": "lowlevel",
                "stream.kafka.consumer.prop.auto.offset.reset": "smallest",
                "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
                "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder",
                "realtime.segment.flush.threshold.rows": "0",
                "realtime.segment.flush.threshold.time": "30m",
                "realtime.segment.flush.threshold.segment.size": "300M"
              },
              {
                "streamType": "kafka",
                "stream.kafka.topic.name": "flattened_spans3",
                "stream.kafka.broker.list": "kafka.pinot-0-nfr-setup.svc.cluster.local:9092",
                "stream.kafka.consumer.type": "lowlevel",
                "stream.kafka.consumer.prop.auto.offset.reset": "smallest",
                "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
                "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder",
                "realtime.segment.flush.threshold.rows": "0",
                "realtime.segment.flush.threshold.time": "30m",
                "realtime.segment.flush.threshold.segment.size": "300M"
              }
            ]
          }
    But I am running into this issue:
    First kafka
    2025/06/20 13:21:17.528 INFO [KafkaConsumer] [otel_spans__1__0__20250620T1321Z] [Consumer clientId=otel_spans_REALTIME-flattened_spans2-1, groupId=null] Seeking to offset 0 for partition flattened_spans2-1
    
    Second kafka
    2025/06/20 13:22:08.659 INFO [KafkaConsumer] [otel_spans__10001__0__20250620T1321Z] [Consumer clientId=otel_spans_REALTIME-flattened_spans3-1, groupId=null] Seeking to offset 0 for partition flattened_spans3-10001
    2025/06/20 13:22:08.659 INFO [KafkaConsumer] [otel_spans__10000__0__20250620T1321Z] [Consumer clientId=otel_spans_REALTIME-flattened_spans3-0, groupId=null] Seeking to offset 0 for partition flattened_spans3-10000
    The flattened_spans3 topic has only partitions 1-3, but the Pinot server is seeking out partition number 10000 for some reason. Can someone please guide me on where I'm going wrong with my config?

    baarath

    06/25/2025, 7:15 AM
    Hi Team, the Pinot server went down; when I checked, it had failed with the error in the attached screenshot. Is it because of a memory issue? Will I lose data if I restart the server with the following command?
    bin/pinot-admin.sh StartServer -configFileName conf/pinot-server.conf

    Aman Satya

    06/25/2025, 8:54 AM
    Hi team, I'm trying to run a MergeRollupTask on the sales_OFFLINE table, but it fails with a StringIndexOutOfBoundsException. It looks like the error comes from this line: MergeRollupTaskUtils.getLevelToConfigMap(). Here is the config that I am using:
    "taskTypeConfigsMap": {
      "MergeRollupTask": {
        "mergeType": "rollup",
        "bucketTimePeriod": "1d",
        "bufferTimePeriod": "3d",
        "revenue.aggregationType": "sum",
        "quantity.aggregationType": "sum"
      }
    }
    And here's the relevant part of the error:
    java.lang.StringIndexOutOfBoundsException: begin 0, end -1, length 9
    at ...MergeRollupTaskUtils.getLevelToConfigMap(MergeRollupTaskUtils.java:64)
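    A hedged reading of the exception: "begin 0, end -1, length 9" matches the 9-character key mergeType, which suggests getLevelToConfigMap expects the merge configs to carry a bucket-period level prefix, as in the documented examples. A sketch of the same config with a 1-day level prefix added:
    "taskTypeConfigsMap": {
      "MergeRollupTask": {
        "1day.mergeType": "rollup",
        "1day.bucketTimePeriod": "1d",
        "1day.bufferTimePeriod": "3d",
        "revenue.aggregationType": "sum",
        "quantity.aggregationType": "sum"
      }
    }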

    mathew

    06/26/2025, 8:15 AM
    Hi Team, does Pinot support ADLS Gen 2 (wasbs), or does it only support abfss? I am writing parquet files to the Azure container using the wasbs method, then I use this ingestionConfig to ingest them into Pinot using minions:
    "ingestionConfig": {
      "batchIngestionConfig": {
        "segmentIngestionType": "APPEND",
        "segmentIngestionFrequency": "DAILY",
        "consistentDataPush": False,
        "batchConfigMaps": [
          {
            "input.fs.className": "org.apache.pinot.plugin.filesystem.ADLSGen2PinotFS",
            "input.fs.prop.authenticationType": "ACCESS_KEY",
            "input.fs.prop.accountName": "wzanalyticsdatastoreprod",
            "input.fs.prop.accessKey": "xxxxxxxxxxx",
            "input.fs.prop.fileSystemName": tenant_id,
            "inputDirURI": f"wasbs://{tenant_id}@wzanalyticsdatastoreprod.blob.core.windows.net/pinot",
            "includeFileNamePattern": "glob:**/*.parquet",
            "excludeFileNamePattern": "glob:**/*.tmp",
            "inputFormat": "parquet"
          }
        ]
      }
    }
    But I think Pinot is not able to look into the specified blob. I can't use abfss in this container because it does not support BlobStorageEvents or SoftDelete. In my DEV container I was writing the parquet files with the abfss method, and that is still working. Is something wrong in my ingestionConfig when using wasbs? Can someone please help!

    Jan

    06/26/2025, 10:34 AM
    Hi team, I'm trying to download segments from one Pinot table and use them in a different table that has the same schema but a different retention configuration. Currently, I'm encountering an issue where the metadata doesn't match because the tables have different names.

    Isaac Ñuflo

    06/30/2025, 2:58 PM
    Hi team, first time here. I have an issue when trying to update a table via API. The update is not being applied.

    Luis Pessoa

    06/30/2025, 7:11 PM
    hi guys.. has anyone faced this recurring message in your logs? We have been seeing it for some time in our pre-prod envs despite following the configuration settings described in the documentation:
    The configuration 'stream.kafka.isolation.level' was supplied but isn't a known config.

    Idlan Amran

    07/01/2025, 8:46 AM
    https://docs.pinot.apache.org/manage-data/data-import/batch-ingestion/dim-table I'm looking into dimension tables currently and noticed that they require primaryKeyColumns. Based on my experience with upsert tables, primary key metadata/TTL is stored on heap, and as the number of records and primary keys grows, memory/RAM usage grows too. Should I expect the same situation with a dimension table? And can a dimension table be a realtime table rather than offline, so I can push the data through Kafka? Our app architecture is kind of complex right now. We need a table that stores product activity logs, a kind of product tracking, for example stock increase, price increment, etc. In some cases ingestion is duplicated: the same record can be pushed to Kafka more than once on the same day, causing duplicates. By right we do not need the full product data, we just need the changes (like the examples I shared) and the IDs of the changes, so we can check historically what was changed for a particular product. I tested upsert since it is the closest to my use case, but the memory usage was very high and our Pinot EC2 server was going down from time to time because of out-of-memory errors from the upsert table. I would really appreciate it if any of you could share configs that worked for you, or anything I can do to tune my config and improve our ingestion into Pinot.
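    For context, the linked page sits under batch ingestion, where a dimension table is an OFFLINE table flagged with isDimTable plus a schema declaring primaryKeyColumns; a minimal sketch of such a table config (table name, quota and replication are made-up placeholders, and the heap behaviour relative to upsert tables is worth measuring on your own data):
    {
      "tableName": "productChanges",
      "tableType": "OFFLINE",
      "isDimTable": true,
      "segmentsConfig": { "replication": "1", "schemaName": "productChanges" },
      "tenants": {},
      "tableIndexConfig": { "loadMode": "MMAP" },
      "quota": { "storage": "200M" },
      "metadata": {}
    }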

    Vipin Rohilla

    07/02/2025, 8:13 AM
    Hi all, I have run into an issue with Pinot Minion and kerberized HDFS, where Pinot Minion upsert tasks fail with the below error:
    UI:
    org.apache.pinot.spi.utils.retry.AttemptsExceededException: Operation failed after 3 attempts
    	at org.apache.pinot.spi.utils.retry.BaseRetryPolicy.attempt(BaseRetryPolicy.java:65)
    	at org.apache.pinot.common.utils.fetcher.BaseSegmentFetcher.fetchSegmentToLocal(BaseSegmentFetcher.java:74)
    	at org.apache.pinot.common.utils.fetcher.SegmentFetcherFactory.fetchSegmentToLocal(SegmentFetcherFactory.java:124)
    	at org.apache.pinot.common.utils.fetcher.SegmentFetcherFactory.fetchSegmentToLocal(SegmentFetcherFactory.java:132)
    	at org.apache.pinot.common.utils.fetcher.SegmentFetcherFactory.fetchAndDecryptSegmentToLocal(SegmentFetcherFactory.java:165)
    	at org.apache.pinot.plugin.minion.tasks.BaseTaskExecutor.downloadSegmentToLocal(BaseTaskExecutor.java:121)
    	at org.apache.pinot.plugin.minion.tasks.BaseSingleSegmentConversionExecutor.executeTask(BaseSingleSegmentConversionExecutor.java:105)
    	at org.apache.pinot.plugin.minion.tasks.BaseSingleSegmentConversionExecutor.executeTask(BaseSingleSegmentConv
    
    Minion log shows:
    2025/07/02 13:23:21.309 WARN [PinotFSSegmentFetcher] [TaskStateModelFactory-task_thread-3] Caught exception while fetching segment from: <hdfs://xxxxxxx/controller_data/xxxxxxx/xxxxxxx__4__648__20250528T1621Z> to: /tmp/PinotMinion/data/UpsertCompactionTask/tmp-9727a6d3-cc2d-44d0-9666-34939abbc356/tarredSegment
    org.apache.hadoop.security.AccessControlException: SIMPLE authentication is not enabled.  Available:[TOKEN, KERBEROS]
            at jdk.internal.reflect.GeneratedConstructorAccessor43.newInstance(Unknown Source) ~[?:?]
            at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:?]
            at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490) ~[?:?]
            at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121) ~[pinot-parquet-1.2.0-shaded.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
            at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88) ~[pinot-parquet-1.2.0-shaded.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
            at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1741) ~[pinot-orc-1.2.0-shaded.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
            at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1829) ~[pinot-orc-1.2.0-shaded.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
            at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1826) ~[pinot-orc-1.2.0-shaded.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
            at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) ~[pinot-parquet-1.2.0-shaded.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
            at org.apache.hadoop.hdfs.DistributedFil
    Pinot Server, Broker, and Controller are all able to read/write segments from HDFS using the configured keytab. I recently added 3 Pinot Minions on new nodes, configured with the same keytab, principal, and Hadoop config path. However, when a Minion runs tasks like UpsertCompaction, it fails with the above error. The Minion runs under the pinot user (systemd), kinit is successful, and the Kerberos ticket is visible via klist. Relevant config:
    pinot.minion.segment.fetcher.hdfs.hadoop.kerberos.principal=xxxxxx@xxxxxx
    pinot.minion.segment.fetcher.hdfs.hadoop.kerberos.keytab=/etc/security/keytabs/pinot.keytab
    pinot.minion.storage.factory.hdfs.hadoop.conf.path=/usr/hdp/xxxxx/hadoop/conf
    Is there anything else Pinot Minion needs to perform Kerberos login internally? Does it require JAAS config explicitly even with keytab/principal settings?

    Rajat

    07/08/2025, 5:57 AM
    Hi team, in one of my clusters the ingestion rate decreases some time after rebalancing, and after a while it drops to zero. What could be the reason? I am running realtime tables only, and this issue is happening a lot.

    Adil Shaikh

    07/08/2025, 6:08 AM
    Hi Team, is there any way to find duplicate segments?

    Etisha jain

    07/08/2025, 9:35 AM
    Hi all, can anyone help me with this error? I'm not able to query a table in Pinot:
    Error Code: 305
    null:
    18 segments unavailable, sampling 10: [iptv_events_test__8__0__20250708T0323Z, iptv_events_test__10__0__20250708T0323Z, iptv_events_test__11__0__20250708T0323Z, iptv_events_test__9__0__20250708T0323Z, iptv_events_test__6__0__20250708T0323Z, iptv_events_test__7__0__20250708T0323Z, iptv_events_test__0__0__20250708T0323Z, iptv_events_test__5__0__20250708T0323Z, iptv_events_test__1__0__20250708T0323Z, iptv_events_test__16__0__20250708T0323Z]N

    Etisha jain

    07/08/2025, 9:35 AM
    Screenshot 2025-07-08 at 13.27.22.png

    Etisha jain

    07/08/2025, 9:35 AM
    Let me know if we can connect on this

    charlie

    07/08/2025, 11:33 PM
    Hi pinot folks! I have a question related to SegmentTarPush data ingestion jobs. I have segments stored in GCS that I want to upload to my offline table. There was an issue with my table config early on that prevented my RealtimeToOfflineSegmentsTask from working properly. Now the segments have been removed from my realtime table (retention date passed), but haven't been uploaded to the offline table. Because I have a GCS deep store configured, those segments that were removed from my realtime table still exist in GCS. So I want to run the pinot-admin.sh LaunchDataIngestionJob command to upload those segments to my offline table. I have written the following job spec:
    executionFrameworkSpec:
      name: 'standalone'
      segmentGenerationJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentGenerationJobRunner'
      segmentTarPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentTarPushJobRunner'
      segmentUriPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentUriPushJobRunner'
      segmentMetadataPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentMetadataPushJobRunner'
    jobType: SegmentTarPush
    inputDirURI: 'gs:://my_bucket/my_table_folder'
    outputDirURI: 'gs:://my_bucket/my_table_folder'
    overwriteOutput: true
    pinotFSSpecs:
      - scheme: gs
        className: org.apache.pinot.plugin.filesystem.GcsPinotFS
    tableSpec:
      tableName: 'my_table_OFFLINE'
    pinotClusterSpecs:
      - controllerURI: '<http://my_controller_url.com>'
    pushJobSpec:
      pushAttempts: 2
      pushRetryIntervalMillis: 1000
    Am I using the right job type for what I'm trying to achieve? Is what I'm trying to do possible?
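    A hedged note on the spec above: since the segment tars already exist in GCS, a URI- or metadata-based push avoids re-generating or re-uploading anything; also, the gs scheme normally takes a single colon (gs://, not gs:://). A sketch of the relevant lines, assuming the rest of the spec stays the same:
    jobType: SegmentMetadataPush   # or SegmentUriPush; both push segments that are already built
    inputDirURI: 'gs://my_bucket/my_table_folder'
    outputDirURI: 'gs://my_bucket/my_table_folder'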

    Vipin Rohilla

    07/10/2025, 8:29 AM
    Hi Team, We have a 15TB realtime table with upserts enabled, running on 32 Pinot servers. Each server has 10 disks (total 35TB storage) and 1TB RAM. The table uses native upserts. Whenever we make schema changes (e.g., adding a column), Pinot requires a table reload. During this reload, we see a massive spike in heap usage — the servers fail to start with the usual 120GB heap, and we’re forced to temporarily increase it to 600GB. Once the segments load, heap usage drops back down. Has anyone else encountered this? Any suggestions for managing reloads more sustainably?

    Rishika

    07/11/2025, 5:29 PM
    Hello, I'm new to Apache Pinot. I'm trying to run it locally and ingest data from a Kafka topic into Pinot. I set up the schema and table config, but when I run LaunchDataIngestionJob, it fails.
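    One hedged thing to double-check: LaunchDataIngestionJob is for batch files, whereas Kafka ingestion starts automatically once a REALTIME table with streamConfigs is created. A minimal streamConfigs sketch (topic and broker are placeholders), similar to the examples elsewhere in this channel:
    "streamConfigs": {
      "streamType": "kafka",
      "stream.kafka.topic.name": "myTopic",
      "stream.kafka.broker.list": "localhost:9092",
      "stream.kafka.consumer.type": "lowlevel",
      "stream.kafka.consumer.prop.auto.offset.reset": "smallest",
      "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
      "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder"
    }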

    Luis P Fernandes

    07/14/2025, 3:50 PM
    Hi Guys, we are trying to set up cold storage on our Pinot cluster backed by S3. To set up hot/cold storage for Pinot and use S3 for cold storage, the configurations for server.conf, controller.conf, and broker.conf are attached at the end, as well as the schema and table config used. We observed that the controller uses the S3 configuration as expected (controller.data.dir=s3://storage/controller) and uses the identified bucket as storage. But the server created a local folder with the following path: /s3:/cold-storage/server/tiered_REALTIME. Any comments or help on how we can fix this would be appreciated, since we are unable to get the segments moving to S3.
    Server:
    pinot.zk.server=localhost:2191
    server.helix.cluster.name=PinotCluster
    pinot.server.netty.port=18098
    pinot.server.netty.host=localhost
    pinot.server.storage.factory.class.s3=org.apache.pinot.plugin.filesystem.S3PinotFS
    pinot.server.storage.factory.s3.endpoint=http://localhost:9000
    pinot.server.storage.factory.s3.accessKey=minioadmin
    pinot.server.storage.factory.s3.secretKey=minioadmin
    pinot.server.storage.factory.s3.region=us-east-1
    pinot.server.storage.factory.s3.enableS3A=false
    pinot.server.segment.fetcher.s3.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher
    pinot.server.segment.fetcher.protocols=file,http,s3
    pinot.server.instance.tierConfigs.tierNames=hotTier,coldTier
    pinot.server.instance.segment.directory.loader=tierBased
    pinot.server.instance.dataDir=/Shared/pinot_data/server
    pinot.server.instance.tierConfigs.hotTier.dataDir=s3://hot-storage/server
    pinot.server.instance.tierConfigs.coldTier.dataDir=s3://cold-storage/server
    Controller:
    pinot.zk.server=localhost:2191
    controller.helix.cluster.name=PinotCluster
    controller.port=19000
    controller.host=localhost
    controller.tls.client.auth=false
    controller.segment.relocator.frequencyPeriod=60s
    controller.segmentRelocator.initialDelayInSeconds=10
    controller.segmentRelocator.enableLocalTierMigration=true
    controller.enable.split.commit=true
    pinot.controller.storage.factory.class.s3=org.apache.pinot.plugin.filesystem.S3PinotFS
    pinot.controller.storage.factory.s3.endpoint=http://localhost:9000
    pinot.controller.storage.factory.s3.accessKey=minioadmin
    pinot.controller.storage.factory.s3.secretKey=minioadmin
    pinot.controller.storage.factory.s3.region=us-east-1
    pinot.controller.segment.fetcher.s3.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher
    pinot.controller.segment.fetcher.protocols=file,http,s3
    controller.data.dir=s3://storage/controller
    controller.local.temp.dir=/Shared/pinot_data/controller/temp
    Broker:
    pinot.zk.server=localhost:2191
    broker.helix.cluster.name=PinotCluster
    broker.helix.port=18099
    pinot.broker.hostname=localhost
    pinot.broker.client.queryPort=18099
    table_config:
    {
      "tableName": "tiered",
      "tableType": "REALTIME",
      "segmentsConfig": {
        "minimizeDataMovement": false,
        "timeColumnName": "timestamp",
        "timeType": "MILLISECONDS",
        "replicasPerPartition": "1",
        "schemaName": "tiered",
        "replication": "2"
      },
      "tenants": {
        "broker": "DefaultTenant",
        "server": "DefaultTenant",
        "tagOverrideConfig": {}
      },
      "tableIndexConfig": {
        "autoGeneratedInvertedIndex": false,
        "createInvertedIndexDuringSegmentGeneration": false,
        "loadMode": "MMAP",
        "streamConfigs": {
          "streamType": "kafka",
          "stream.kafka.topic.name": "tiered",
          "stream.kafka.broker.list": "localhost:19092",
          "stream.kafka.consumer.type": "lowlevel",
          "stream.kafka.consumer.prop.auto.offset.reset": "smallest",
          "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
          "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder",
          "realtime.segment.flush.threshold.rows": "0",
          "realtime.segment.flush.threshold.segment.rows": "0",
          "realtime.segment.flush.threshold.time": "1m",
          "realtime.segment.flush.threshold.segment.size": "100M"
        },
        "enableDefaultStarTree": false,
        "enableDynamicStarTreeCreation": false,
        "aggregateMetrics": false,
        "nullHandlingEnabled": false,
        "columnMajorSegmentBuilderEnabled": true,
        "optimizeDictionary": false,
        "optimizeDictionaryForMetrics": false,
        "optimizeDictionaryType": false,
        "noDictionarySizeRatioThreshold": 0.85,
        "rangeIndexVersion": 2,
        "invertedIndexColumns": [],
        "noDictionaryColumns": [],
        "bloomFilterColumns": [],
        "onHeapDictionaryColumns": [],
        "rangeIndexColumns": [],
        "sortedColumn": [],
        "varLengthDictionaryColumns": []
      },
      "quota": {},
      "query": {},
      "ingestionConfig": {
        "continueOnError": false,
        "rowTimeValueCheck": false,
        "segmentTimeValueCheck": true
      },
      "tierConfigs": [
        {
          "name": "hotTier",
          "segmentSelectorType": "time",
          "segmentAge": "1m",
          "storageType": "pinot_server",
          "serverTag": "DefaultTenant_OFFLINE"
        },
        {
          "name": "coldTier",
          "segmentSelectorType": "time",
          "segmentAge": "10m",
          "storageType": "pinot_server",
          "serverTag": "DefaultTenant_OFFLINE"
        }
      ]
    }
    Table_Schema:
    {
      "schemaName": "tiered",
      "enableColumnBasedNullHandling": true,
      "dimensionFieldSpecs": [
        { "name": "product_name", "dataType": "STRING", "notNull": true }
      ],
      "metricFieldSpecs": [
        { "name": "price", "dataType": "LONG", "notNull": false }
      ],
      "dateTimeFieldSpecs": [
        { "name": "timestamp", "dataType": "TIMESTAMP", "format": "1MILLISECONDSEPOCH", "granularity": "1:MILLISECONDS" }
      ]
    }

    Felipe

    07/16/2025, 9:48 AM
    Hi all, I'm seeing this message in some instances of my servers:
    [PerQueryCPUMemAccountantFactory$PerQueryCPUMemResourceUsageAccountant] [CPUMemThreadAccountant] Heap used bytes 6301800816 exceeds critical level 6184752768
    Are there any configuration options I can use to increase the heap size, or should this not be happening at all?
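    For what it's worth, the heap ceiling in that message is the JVM heap, set via JVM flags rather than a pinot.server property; a minimal sketch, assuming the server is started with bin/pinot-admin.sh (which picks up JAVA_OPTS) and that 16g fits the host:
    # example only; size the heap to the machine
    export JAVA_OPTS="-Xms16g -Xmx16g -XX:+UseG1GC"
    bin/pinot-admin.sh StartServer -configFileName conf/pinot-server.conf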

    Felipe

    07/16/2025, 9:49 AM
    ah, found it 😄

    Monika reddy

    07/16/2025, 5:25 PM
    Hello @Mayank @Kishore G, I wrote a simple Java class to connect to a Pinot cluster locally. I am able to GET the table config and call the POST APIs to pause and resume tables; however, while updating the table config using the PUT API I get a java.util.concurrent.TimeoutException. Has anyone seen this behaviour? I have also raised a StarTree ticket.
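    A minimal Java sketch of the same PUT with explicit client and request timeouts, in case the default client gives up before the controller finishes validating the config (controller URL, table name, and the 120-second values are placeholders; the table-config update endpoint assumed here is PUT /tables/{tableName}):
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.time.Duration;

    public class UpdateTableConfig {
      public static void main(String[] args) throws Exception {
        // Placeholder controller URL and table name
        String controller = "http://localhost:9000";
        String table = "myTable";
        String tableConfigJson = Files.readString(Path.of("table-config.json"));

        HttpClient client = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(120))   // generous connect timeout
            .build();

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(controller + "/tables/" + table))
            .timeout(Duration.ofSeconds(120))          // request timeout for the PUT
            .header("Content-Type", "application/json")
            .PUT(HttpRequest.BodyPublishers.ofString(tableConfigJson))
            .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
      }
    }
    If the call still times out with long limits, the bottleneck is likely on the controller side rather than in the client.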

    Kiril Kalchev

    07/16/2025, 7:41 PM
    Hello guys, we are using Pinot 1.1 and we are currently investigating an issue that we have hit for the 3rd time in the last 2 weeks. We are getting a lot of these error messages:
    INFO 2025-07-16T18:31:17.674888458Z [resource.labels.containerName: server] 2025/07/16 18:31:17.672 ERROR [KafkaPartitionLevelConnectionHandler] [auctionsStatsRedis__5__0__20250619T0844Z] Caught exception while creating Kafka consumer, giving up
    ERROR [RealtimeSegmentDataManager_auctionsNew__6__4__20250614T0632Z] [auctionsNew__6__4__20250614T0632Z] Exception while in work
    [NetworkClient] [auctionsStatsRedis__1__22__20250716T1027Z] [Consumer clientId=auctionsStatsRedis_REALTIME-auctionsStatsRedis-1, groupId=null] Error connecting to node events-prod-cluster-kafka-0.events-prod-cluster-kafka-brokers.kafka-prod.svc:9092 (id: 0 rack: null)
    2025/07/16 18:31:37.710 ERROR [ServerSegmentCompletionProtocolHandler] [customKeys_2025_07_5d6254cd_c8e8_423d_b196_73f016e023cb__7__0__20250625T1419Z] Could not send request http://pinot-prod-controller-1.pinot-prod-controller-headless.pinot.svc.cluster.local:9000/segmentStoppedConsuming?reason=org.apache.pinot.shaded.org.apache.kafka.common.KafkaException&streamPartitionMsgOffset=0&instance=Server_pinot-prod-server-2.pinot-prod-server-headless.pinot.svc.cluster.local_8098&offset=-1&name=customKeys_2025_07_5d6254cd_c8e8_423d_b196_73f016e023cb__7__0__20250625T1419Z
    After that, some tables are missing segments. Right now all our currently running tables have lost all their segments and can't be queried. Do you have any ideas what is going on and why?

    Yeshwanth

    07/17/2025, 7:30 AM
    Hey Guys, seeing this error during Pinot server and broker startup:
    Error occurred during initialization of VM
    agent library failed to init: instrument
    Error opening zip file or JAR manifest missing : /opt/pinot/etc/jmx_prometheus_javaagent/jmx_prometheus_javaagent.jar
    I can see a similar issue was reported here: https://github.com/apache/pinot/issues/16283 I don't think the fix was applied to this tag -> https://hub.docker.com/layers/apachepinot/pinot/1.3.0/images/sha256-27d64d558cd8a90efdf2c15d92dfd713b173120606942fd6faef9b19d20ec2dd Can someone please look into this?
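    A hedged workaround until the image is fixed: check what the exporter jar is actually called inside the container and point the -javaagent flag at that exact file (the directory comes from the error message; the versioned jar name, port and config path below are assumptions to verify with the ls):
    # see what the exporter jar is really named in this image
    ls /opt/pinot/etc/jmx_prometheus_javaagent/
    # then reference the file that actually exists, e.g. a versioned jar (adjust name, port and config path)
    export JAVA_OPTS="-javaagent:/opt/pinot/etc/jmx_prometheus_javaagent/jmx_prometheus_javaagent-0.19.0.jar=8008:/opt/pinot/etc/jmx_prometheus_javaagent/configs/pinot.yml ${JAVA_OPTS}"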

    Ricardo Machado

    07/17/2025, 3:41 PM
    Hi, we are trying to read a table from Pinot into Spark using the pinot-spark-connector (version 1.3.0), and we get an error when the number of columns to fetch is large (roughly around 175-180 columns in our tests). The column count at which the error appears varies from table to table. Caused by: org.apache.pinot.connector.spark.common.HttpStatusCodeException: Got error status code '400' with reason 'Bad Request' Stack trace:
    An error occurred while calling o4276.count. : org.apache.pinot.connector.spark.common.PinotException: An error occurred while getting routing table for query, '<REDACTED' at org.apache.pinot.connector.spark.common.PinotClusterClient$.getRoutingTableForQuery(PinotClusterClient.scala:208) at org.apache.pinot.connector.spark.common.PinotClusterClient$.getRoutingTable(PinotClusterClient.scala:153) at org.apache.pinot.connector.spark.v3.datasource.PinotScan.planInputPartitions(PinotScan.scala:57) at org.apache.spark.sql.execution.datasources.v2.BatchScanExec.inputPartitions$lzycompute(BatchScanExec.scala:63) at org.apache.spark.sql.execution.datasources.v2.BatchScanExec.inputPartitions(BatchScanExec.scala:63) at org.apache.spark.sql.execution.datasources.v2.DataSourceV2ScanExecBase.supportsColumnar(DataSourceV2ScanExecBase.scala:179) at org.apache.spark.sql.execution.datasources.v2.DataSourceV2ScanExecBase.supportsColumnar$(DataSourceV2ScanExecBase.scala:175) at org.apache.spark.sql.execution.datasources.v2.BatchScanExec.supportsColumnar(BatchScanExec.scala:39) at org.apache.spark.sql.execution.datasources.v2.DataSourceV2Strategy.apply(DataSourceV2Strategy.scala:184) at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$1(QueryPlanner.scala:63) at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:491) at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93) at org.apache.spark.sql.execution.SparkStrategies.plan(SparkStrategies.scala:74) at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$3(QueryPlanner.scala:78) at scala.collection.TraversableOnce$folder$1.apply(TraversableOnce.scala:196) at scala.collection.TraversableOnce$folder$1.apply(TraversableOnce.scala:194) at scala.collection.Iterator.foreach(Iterator.scala:943) at scala.collection.Iterator.foreach$(Iterator.scala:943) at scala.collection.AbstractIterator.foreach(Iterator.scala:1431) at scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:199) at scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:192) at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1431) at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$2(QueryPlanner.scala:75) at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492) at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93) at org.apache.spark.sql.execution.SparkStrategies.plan(SparkStrategies.scala:74) at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$3(QueryPlanner.scala:78) at scala.collection.TraversableOnce$folder$1.apply(TraversableOnce.scala:196) at scala.collection.TraversableOnce$folder$1.apply(TraversableOnce.scala:194) at scala.collection.Iterator.foreach(Iterator.scala:943) at scala.collection.Iterator.foreach$(Iterator.scala:943) at scala.collection.AbstractIterator.foreach(Iterator.scala:1431) at scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:199) at scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:192) at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1431) at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$2(QueryPlanner.scala:75) at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492) at 
org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93) at org.apache.spark.sql.execution.SparkStrategies.plan(SparkStrategies.scala:74) at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$3(QueryPlanner.scala:78) at scala.collection.TraversableOnce$folder$1.apply(TraversableOnce.scala:196) at scala.collection.TraversableOnce$folder$1.apply(TraversableOnce.scala:194) at scala.collection.Iterator.foreach(Iterator.scala:943) at scala.collection.Iterator.foreach$(Iterator.scala:943) at scala.collection.AbstractIterator.foreach(Iterator.scala:1431) at scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:199) at scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:192) at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1431) at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$2(QueryPlanner.scala:75) at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492) at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93) at org.apache.spark.sql.execution.SparkStrategies.plan(SparkStrategies.scala:74) at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$3(QueryPlanner.scala:78) at scala.collection.TraversableOnce$folder$1.apply(TraversableOnce.scala:196) at scala.collection.TraversableOnce$folder$1.apply(TraversableOnce.scala:194) at scala.collection.Iterator.foreach(Iterator.scala:943) at scala.collection.Iterator.foreach$(Iterator.scala:943) at scala.collection.AbstractIterator.foreach(Iterator.scala:1431) at scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:199) at scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:192) at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1431) at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$2(QueryPlanner.scala:75) at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492) at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93) at org.apache.spark.sql.execution.SparkStrategies.plan(SparkStrategies.scala:74) at org.apache.spark.sql.execution.QueryExecution$.createSparkPlan(QueryExecution.scala:658) at org.apache.spark.sql.execution.QueryExecution.$anonfun$getSparkPlan$1(QueryExecution.scala:195) at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:219) at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$2(QueryExecution.scala:277) at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:714) at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:277) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:901) at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:276) at org.apache.spark.sql.execution.QueryExecution.getSparkPlan(QueryExecution.scala:195) at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:187) at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:187) at org.apache.spark.sql.execution.QueryExecution.$anonfun$getExecutedPlan$1(QueryExecution.scala:211) at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:219) at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$2(QueryExecution.scala:277) at 
org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:714) at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:277) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:901) at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:276) at org.apache.spark.sql.execution.QueryExecution.getExecutedPlan(QueryExecution.scala:208) at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:203) at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:203) at org.apache.spark.sql.execution.QueryExecution.$anonfun$writeProcessedPlans$10(QueryExecution.scala:417) at org.apache.spark.sql.catalyst.plans.QueryPlan$.append(QueryPlan.scala:747) at org.apache.spark.sql.execution.QueryExecution.writeProcessedPlans(QueryExecution.scala:417) at org.apache.spark.sql.execution.QueryExecution.writePlans(QueryExecution.scala:393) at org.apache.spark.sql.execution.QueryExecution.toString(QueryExecution.scala:432) at <http://org.apache.spark.sql.execution.QueryExecution.org|org.apache.spark.sql.execution.QueryExecution.org>$apache$spark$sql$execution$QueryExecution$$explainString(QueryExecution.scala:333) at org.apache.spark.sql.execution.QueryExecution.explainString(QueryExecution.scala:311) at org.apache.spark.sql.execution.SQLExecution$.executeQuery$1(SQLExecution.scala:146) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$10(SQLExecution.scala:220) at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:108) at org.apache.spark.sql.execution.SQLExecution$.withTracker(SQLExecution.scala:384) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$9(SQLExecution.scala:220) at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:405) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:219) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:901) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:83) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:74) at org.apache.spark.sql.Dataset.withAction(Dataset.scala:4390) at org.apache.spark.sql.Dataset.count(Dataset.scala:3661) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:569) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:374) at py4j.Gateway.invoke(Gateway.java:282) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.GatewayConnection.run(GatewayConnection.java:238) at java.base/java.lang.Thread.run(Thread.java:840) Caused by: org.apache.pinot.connector.spark.common.HttpStatusCodeException: Got error status code '400' with reason 'Bad Request' at org.apache.pinot.connector.spark.common.HttpUtils$.executeRequest(HttpUtils.scala:66) at org.apache.pinot.connector.spark.common.HttpUtils$.sendGetRequest(HttpUtils.scala:50) at 
org.apache.pinot.connector.spark.common.PinotClusterClient$.$anonfun$getRoutingTableForQuery$1(PinotClusterClient.scala:199) at scala.util.Try$.apply(Try.scala:213) at org.apache.pinot.connector.spark.common.PinotClusterClient$.getRoutingTableForQuery(PinotClusterClient.scala:196)

    Victor Bivolaru

    07/18/2025, 1:40 PM
    Hello everyone, I am new here. I've just started a Pinot cluster locally (no Docker, just running the server, controller, etc. using the scripts inside the bin directory). I am having trouble setting up Grafana and Prometheus to scrape metrics off the cluster. I can find almost nothing about observability except for the wiki page, and the caveat is that I won't be running it in k8s.
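    A rough sketch of the usual non-k8s setup, assuming each component is launched with the Prometheus JMX exporter javaagent (jar path, ports and exporter config are placeholders): add -javaagent:<exporter-jar>=<metricsPort>:<exporter-config>.yml to JAVA_OPTS for the controller, broker and server processes, then have Prometheus scrape those ports:
    # prometheus.yml (targets are whatever ports you passed to the javaagent)
    scrape_configs:
      - job_name: pinot
        scrape_interval: 15s
        static_configs:
          - targets:
              - localhost:8008   # controller exporter port (placeholder)
              - localhost:8009   # broker exporter port (placeholder)
              - localhost:8010   # server exporter port (placeholder)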

    Kiril Kalchev

    07/18/2025, 9:10 PM
    Hi everyone. I believe I have an issue in the cluster, but I am not sure. I am getting a few segments like this in Zookeeper:
    "auctionsStats__6__13__20250703T1233Z": {
          "Server_pinot-prod-server-0.pinot-prod-server-headless.pinot.svc.cluster.local_8098": "OFFLINE",
          "Server_pinot-prod-server-1.pinot-prod-server-headless.pinot.svc.cluster.local_8098": "OFFLINE",
          "Server_pinot-prod-server-2.pinot-prod-server-headless.pinot.svc.cluster.local_8098": "OFFLINE"
        },
    When I try to download the segments again, I get an error saying they are not in my deepstore. However, queries seem to work normally. Is it expected for segments to be reported as OFFLINE and missing in the deepstore? What exactly does OFFLINE mean as a segment status? Below are the latest messages for the above segment:
    INFO 2025-07-18T05:35:05.820609035Z [resource.labels.containerName: server] 2025/07/18 05:35:05.820 INFO [HttpClient] [auctionsStats__6__13__20250703T1233Z] Sending request: <http://pinot-prod-controller-1.pinot-prod-controller-headless.pinot.svc.cluster.local:9000/segmentStoppedConsuming?reason=org.apache.pinot.shaded.org.apache.kafka.common.KafkaException&streamPartitionMsgOffset=0&instance=Server_pinot-prod-server-2.pinot-prod-server-headless.pinot.svc.cluster.local_8098&offset=-1&name=auctionsStats__6__13__20250703T1233Z> to controller: pinot-prod-controller-1.pinot-prod-controller-headless.pinot.svc.cluster.local, version: Unknown
    INFO 2025-07-18T05:35:05.821542868Z [resource.labels.containerName: server] 2025/07/18 05:35:05.821 INFO [ServerSegmentCompletionProtocolHandler] [auctionsStats__6__13__20250703T1233Z] Controller response {"status":"PROCESSED","streamPartitionMsgOffset":null,"isSplitCommitType":true,"buildTimeSec":-1} for <http://pinot-prod-controller-1.pinot-prod-controller-headless.pinot.svc.cluster.local:9000/segmentStoppedConsuming?reason=org.apache.pinot.shaded.org.apache.kafka.common.KafkaException&streamPartitionMsgOffset=0&instance=Server_pinot-prod-server-2.pinot-prod-server-headless.pinot.svc.cluster.local_8098&offset=-1&name=auctionsStats__6__13__20250703T1233Z>
    INFO 2025-07-18T05:35:05.821571462Z [resource.labels.containerName: server] 2025/07/18 05:35:05.821 INFO [RealtimeSegmentDataManager_auctionsStats__6__13__20250703T1233Z] [auctionsStats__6__13__20250703T1233Z] Got response {"status":"PROCESSED","streamPartitionMsgOffset":null,"isSplitCommitType":true,"buildTimeSec":-1}
    INFO 2025-07-18T05:35:05.983729827Z [resource.labels.containerName: server] 2025/07/18 05:35:05.976 INFO [local_8098 - SegmentOnlineOfflineStateModel] [HelixTaskExecutor-message_handle_thread_7] SegmentOnlineOfflineStateModel.onBecomeOfflineFromConsuming() : ZnRecord=cc787368-9a93-42f3-8588-ebefe88f2a07, {CREATE_TIMESTAMP=1752816905933, ClusterEventName=IdealStateChange, EXECUTE_START_TIMESTAMP=1752816905976, EXE_SESSION_ID=300627ec087008e, FROM_STATE=CONSUMING, MSG_ID=cc787368-9a93-42f3-8588-ebefe88f2a07, MSG_STATE=read, MSG_TYPE=STATE_TRANSITION, PARTITION_NAME=auctionsStats__6__13__20250703T1233Z, READ_TIMESTAMP=1752816905959, RESOURCE_NAME=auctionsStats_REALTIME, RESOURCE_TAG=auctionsStats_REALTIME, RETRY_COUNT=3, SRC_NAME=pinot-prod-controller-2.pinot-prod-controller-headless.pinot.svc.cluster.local_9000, SRC_SESSION_ID=2006281fc800087, STATE_MODEL_DEF=SegmentOnlineOfflineStateModel, STATE_MODEL_FACTORY_NAME=DEFAULT, TGT_NAME=Server_pinot-prod-server-2.pinot-prod-server-headless.pinot.svc.cluster.local_8098, TGT_SESSION_ID=300627ec087008e, TO_STATE=OFFLINE}{}{}, Stat=Stat {_version=0, _creationTime=1752816905946, _modifiedTime=1752816905946, _ephemeralOwner=0}
    INFO 2025-07-18T05:35:05.984995178Z [resource.labels.containerName: server] 2025/07/18 05:35:05.983 INFO [HelixInstanceDataManager] [HelixTaskExecutor-message_handle_thread_7] Removing segment: auctionsStats__6__13__20250703T1233Z from table: auctionsStats_REALTIME
    INFO 2025-07-18T05:35:05.985038958Z [resource.labels.containerName: server] 2025/07/18 05:35:05.983 INFO [auctionsStats_REALTIME-RealtimeTableDataManager] [HelixTaskExecutor-message_handle_thread_7] Removing segment: auctionsStats__6__13__20250703T1233Z from table: auctionsStats_REALTIME
    INFO 2025-07-18T05:35:05.985045952Z [resource.labels.containerName: server] 2025/07/18 05:35:05.983 INFO [auctionsStats_REALTIME-RealtimeTableDataManager] [HelixTaskExecutor-message_handle_thread_7] Closing segment: auctionsStats__6__13__20250703T1233Z of table: auctionsStats_REALTIME
    INFO 2025-07-18T05:35:05.985110098Z [resource.labels.containerName: server] 2025/07/18 05:35:05.984 INFO [MutableSegmentImpl_auctionsStats__6__13__20250703T1233Z_auctionsStats] [HelixTaskExecutor-message_handle_thread_7] Trying to close RealtimeSegmentImpl : auctionsStats__6__13__20250703T1233Z
    INFO 2025-07-18T05:35:05.985117081Z [resource.labels.containerName: server] 2025/07/18 05:35:05.984 INFO [auctionsStats_REALTIME-6-ConcurrentMapPartitionUpsertMetadataManager] [HelixTaskExecutor-message_handle_thread_7] Skip removing untracked (replaced or empty) segment: auctionsStats__6__13__20250703T1233Z
    INFO 2025-07-18T05:35:05.987557288Z [resource.labels.containerName: server] 2025/07/18 05:35:05.987 INFO [MmapMemoryManager] [HelixTaskExecutor-message_handle_thread_7] Deleted file /var/pinot/server/data/index/auctionsStats_REALTIME/consumers/auctionsStats__6__13__20250703T1233Z.0
    INFO 2025-07-18T05:35:05.990545309Z [resource.labels.containerName: server] 2025/07/18 05:35:05.990 INFO [auctionsStats_REALTIME-RealtimeTableDataManager] [HelixTaskExecutor-message_handle_thread_7] Closed segment: auctionsStats__6__13__20250703T1233Z of table: auctionsStats_REALTIME
    INFO 2025-07-18T05:35:05.990570191Z [resource.labels.containerName: server] 2025/07/18 05:35:05.990 INFO [auctionsStats_REALTIME-RealtimeTableDataManager] [HelixTaskExecutor-message_handle_thread_7] Removed segment: auctionsStats__6__13__20250703T1233Z from table: auctionsStats_REALTIME
    INFO 2025-07-18T05:35:05.990578459Z [resource.labels.containerName: server] 2025/07/18 05:35:05.990 INFO [HelixInstanceDataManager] [HelixTaskExecutor-message_handle_thread_7] Removed segment: auctionsStats__6__13__20250703T1233Z from table: auctionsStats_REALTIME
    INFO 2025-07-18T06:15:57.880369560Z [resource.labels.containerName: controller] 2025/07/18 06:15:57.880 INFO [PinotLLCRealtimeSegmentManager] [pool-10-thread-7] Repairing segment: auctionsStats__6__13__20250703T1233Z which is OFFLINE for all instances in IdealState