# troubleshooting

    Facundo Bianco

    05/11/2022, 4:11 PM
Hi All, we're running an ingestion job and the Push Job Spec (doc) is configured like this:
    Copy code
pushJobSpec:
  pushParallelism: 20
  pushAttempts: 2
  segmentUriPrefix: "s3://bucket-foo"
  segmentUriSuffix: ""
    And got this error message:
2022/05/11 14:28:50.531 ERROR [BaseTableDataManager] [HelixTaskExecutor-message_handle_thread] Attempts exceeded when downloading segment: foo_OFFLINE_2022-05-03_2022-05-03_11 for table: foo_OFFLINE from: s3://bucket-foo/data/output/foo_OFFLINE_2022-05-03_2022-05-03_11.tar.gznull to: /tmp/pinot-tmp/server/index/foo_OFFLINE/tmp/tmp-foo_OFFLINE_2022-05-03_2022-05-03_11-b2f3a97c-9c14-4b4c-9874-fb028597a237/foo_OFFLINE_2022-05-03_2022-05-03_11.tar.gz ava:72) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-f
It appends 'null' to the end of the file's URI. Any idea how to resolve this? Thanks in advance.
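A likely mechanism, judging by the trailing null in the URI: Java string concatenation renders a null reference as the literal text "null", so if segmentUriSuffix resolves to null instead of the empty string, the download URI ends in .tar.gznull exactly as the log shows. A minimal sketch (names hypothetical):
Copy code
public class NullSuffixDemo {
  public static void main(String[] args) {
    String segmentUri = "s3://bucket-foo/data/output/foo_OFFLINE_2022-05-03_2022-05-03_11.tar.gz";
    String suffix = null; // what a segmentUriSuffix that resolved to null would look like
    // Java's string concatenation turns a null reference into the text "null":
    System.out.println(segmentUri + suffix); // ...tar.gznull
  }
}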

    Saumya Upadhyay

    05/12/2022, 4:00 AM
We occasionally face a latency issue in Pinot, so I tried to check it in Grafana, and Grafana shows very high latency, but that isn't actually the case. Why is the graph showing that much latency? It gives a wrong impression. Is there anything we need to change in the Table Consuming Latency graph? I added the graphs to Grafana exactly as described in the monitoring link in the Pinot docs:

    Saumya Upadhyay

    05/12/2022, 6:54 AM
    avg by (table) (pinot_server_freshnessLagMs_50thPercentile{namespace="pinot-qa"}) and same for other percentiles

    Luy

    05/12/2022, 10:33 AM
I'm struggling to add segments to pinot-controller using this command:
docker exec -it manual-pinot-controller bin/pinot-admin.sh LaunchDataIngestionJob -jobSpecFile /data/docker-job-spec.yml -exec
But it gives an error:
    Copy code
    2022/05/12 09:51:12.633 ERROR [PinotAdministrator] [main] Exception caught: 
    picocli.CommandLine$UnmatchedArgumentException: Unknown option: '-exec'
    Anyone can help me with this?

    Saumya Upadhyay

    05/12/2022, 11:19 AM
hi, I am struggling to update a table config. I updated the schema to add a new column and wrote a transformFunction for that column. The existing table shows the new column but doesn't copy the value into it; with the same table config, a newly created table works fine. I have also reloaded the segments, but it still doesn't work with the existing table. The record's JSON string is:
    Copy code
    {
      "header": {
        "nnTransId": "9003",
        "qid": 1,
        "timestamp": 1234567890123
      },
      "status": "N200_SUCCESS"
    }
    Copy code
    "ingestionConfig": {
          "transformConfigs": [
            {
              "columnName": "header_js",
              "transformFunction": "jsonFormat(header)"
            },
            {
              "columnName": "header_nnTransId",
              "transformFunction": "JSONPATHSTRING(header, '$.nnTransId')"
            } .....
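For reference, JSONPATHSTRING evaluates a JsonPath expression against the column value; a minimal sketch of what the transform above should yield for the sample record, assuming standard JsonPath semantics (Pinot bundles the Jayway evaluator; the standalone setup here is illustrative):
Copy code
import com.jayway.jsonpath.JsonPath;

public class TransformSketch {
  public static void main(String[] args) {
    // The "header" field of the sample record, as a JSON string
    String header = "{\"nnTransId\":\"9003\",\"qid\":1,\"timestamp\":1234567890123}";
    // Equivalent of JSONPATHSTRING(header, '$.nnTransId')
    String nnTransId = JsonPath.read(header, "$.nnTransId");
    System.out.println(nnTransId); // 9003
  }
}
If the expression evaluates fine in isolation like this, the gap is more likely in how existing segments pick up a newly added derived column than in the transform itself.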

    Luis Fernandez

    05/12/2022, 2:12 PM
hello my friends, it's me again. Does anyone know what would be the reason for ZooKeeper crashing while we are ingesting data with the job YAML? We are running some migrations, and it seems like ZooKeeper is just staying sad and crashing a lot. Also, what's your recommended sizing for ZooKeeper? We are just using the defaults in the Helm chart, so we may be hitting some ceiling.

    Ali Atıl

    05/12/2022, 2:16 PM
Hello everyone, is it possible to generate fixed segment names with the sequenceId appended to them? I have an input folder with multiple CSV files in it, and I want to run an ingestion job to import them. I am trying to use the backfill feature to truncate my table: I want to replace those segments with another data set in the future, and the data doesn't actually have a time column. It seems that segment names play a role in replacing segments; that's why I am asking about a fixed name plus sequenceId. Thanks in advance 🙏

    Stuart Millholland

    05/12/2022, 7:49 PM
Hi. I'm trying to set up GCS as the data bucket for our Pinot controller in our GKE dev environments only. I've set things up in extra.configs in the controller section, and I'm getting this error: Local temporary directory is not configured, cannot use remote data directory

    Deepak Mishra

    05/13/2022, 3:33 AM
Hi Team, I am facing an error while executing a Pinot ingestion job. Can anyone please help with this issue?

    Saumya Upadhyay

    05/13/2022, 12:37 PM
One of the realtime tables is skipping records. We haven't found any OOM-related issues in the logs, and the external view and segment health are also good. We only found this line, which differs from the others, and we're seeing the issue in the same table. Is it indicating something fishy, or is it just info?
    Copy code
    Processed requestId=6212,table=tSCalibrationAttempt_REALTIME,segments(queried/processed/matched/consuming)=4/2/0/2,schedulerWaitMs=1,reqDeserMs=0,totalExecMs=0,resSerMs=0,totalTimeMs=1,minConsumingFreshnessMs=1652072636087,broker=Broker_pinot-broker-0.pinot-broker-headless.pinot.svc.cluster.local_8099,numDocsScanned=0,scanInFilter=0,scanPostFilter=0,sched=FCFS,threadCpuTimeNs(total/thread/sysActivity/resSer)=0/0/0/0

    Map

    05/13/2022, 7:03 PM
Hi there, we've got a field MSGDATETIME with values like "2022-05-13T18:21:25.444Z". We're trying to write the schema for it in Pinot as a time column. However, with the following configuration:
"dateTimeFieldSpecs": [
  {
    "name": "MSGDATETIME",
    "dataType": "STRING",
    "format": "1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd'T'HH:mm:ss.fff'Z'",
    "granularity": "1:MILLISECONDS"
  }
],
Pinot reports that
invalid datetime format: 1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd'T'HH:mm:ss.fff'Z'
What would be the right way to write this schema?
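One observation that may explain the rejection: Java date-format patterns (which SIMPLE_DATE_FORMAT follows) use 'SSS' for milliseconds; 'fff' is a .NET convention. A minimal sketch showing the corrected pattern parsing the sample value:
Copy code
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class MsgDateTimeSketch {
  public static void main(String[] args) {
    // 'SSS' (not 'fff') matches the .444 millisecond fraction, so the schema
    // format would be "1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"
    DateTimeFormatter f = DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'");
    System.out.println(LocalDateTime.parse("2022-05-13T18:21:25.444Z", f));
  }
}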

    Seunghyun

    05/14/2022, 12:13 AM
    Has anyone faced the issue with lz4 compression?
    Copy code
net.jpountz.lz4.LZ4Exception: Malformed input at 13
        at net.jpountz.lz4.LZ4JavaUnsafeSafeDecompressor.decompress(LZ4JavaUnsafeSafeDecompressor.java:180) ~[lz4-java-1.7.1.jar:?]
        at net.jpountz.lz4.LZ4SafeDecompressor.decompress(LZ4SafeDecompressor.java:145) ~[lz4-java-1.7.1.jar:?]
        at org.apache.pinot.segment.local.io.compression.LZ4Decompressor.decompress(LZ4Decompressor.java:42) ~[pinot-segment-local-0.10.0-dev-471.jar:0.10.0-dev-471-91c2ebbf297c4bf3fecb5f98413e9f00e324e2dc]
        at org.apache.pinot.segment.local.segment.index.readers.forward.BaseChunkSVForwardIndexReader.decompressChunk(BaseChunkSVForwardIndexReader.java:137) ~[pinot-segment-local-0.10.0-dev-471.jar:0.10.0-dev-471-91c2ebbf297c4bf3fecb5f98413e9f00e324e2dc]
        at org.apache.pinot.segment.local.segment.index.readers.forward.BaseChunkSVForwardIndexReader.getChunkBuffer(BaseChunkSVForwardIndexReader.java:118) ~[pinot-segment-local-0.10.0-dev-471.jar:0.10.0-dev-471-91c2ebbf297c4bf3fecb5f98413e9f00e324e2dc]
        at org.apache.pinot.segment.local.segment.index.readers.forward.VarByteChunkSVForwardIndexReader.getStringCompressed(VarByteChunkSVForwardIndexReader.java:72) ~[pinot-segment-local-0.10.0-dev-471.jar:0.10.0-dev-471-91c2ebbf297c4bf3fecb5f98413e9f00e324e2dc]
        at org.apache.pinot.segment.local.segment.index.readers.forward.VarByteChunkSVForwardIndexReader.getString(VarByteChunkSVForwardIndexReader.java:61) ~[pinot-segment-local-0.10.0-dev-471.jar:0.10.0-dev-471-91c2ebbf297c4bf3fecb5f98413e9f00e324e2dc]
        at org.apache.pinot.segment.local.segment.index.readers.forward.VarByteChunkSVForwardIndexReader.getString(VarByteChunkSVForwardIndexReader.java:35) ~[pinot-segment-local-0.10.0-dev-471.jar:0.10.0-dev-471-91c2ebbf297c4bf3fecb5f98413e9f00e324e2dc]
        at org.apache.pinot.core.operator.dociditerators.SVScanDocIdIterator$StringMatcher.doesValueMatch(SVScanDocIdIterator.java:176) ~[pinot-core-0.10.0-dev-471.jar:0.10.0-dev-471-91c2ebbf297c4bf3fecb5f98413e9f00e324e2dc]
        at org.apache.pinot.core.operator.dociditerators.SVScanDocIdIterator.applyAnd(SVScanDocIdIterator.java:88) ~[pinot-core-0.10.0-dev-471.jar:0.10.0-dev-471-91c2ebbf297c4bf3fecb5f98413e9f00e324e2dc]
        at org.apache.pinot.core.operator.docidsets.AndDocIdSet.iterator(AndDocIdSet.java:128) ~[pinot-core-0.10.0-dev-471.jar:0.10.0-dev-471-91c2ebbf297c4bf3fecb5f98413e9f00e324e2dc]
        at org.apache.pinot.core.operator.DocIdSetOperator.getNextBlock(DocIdSetOperator.java:67) ~[pinot-core-0.10.0-dev-471.jar:0.10.0-dev-471-91c2ebbf297c4bf3fecb5f98413e9f00e324e2dc]
        at org.apache.pinot.core.operator.DocIdSetOperator.getNextBlock(DocIdSetOperator.java:38) ~[pinot-core-0.10.0-dev-471.jar:0.10.0-dev-471-91c2ebbf297c4bf3fecb5f98413e9f00e324e2dc]
        at org.apache.pinot.core.operator.BaseOperator.nextBlock(BaseOperator.java:49) ~[pinot-core-0.10.0-dev-471.jar:0.10.0-dev-471-91c2ebbf297c4bf3fecb5f98413e9f00e324e2dc]
        at org.apache.pinot.core.operator.ProjectionOperator.getNextBlock(ProjectionOperator.java:61) ~[pinot-core-0.10.0-dev-471.jar:0.10.0-dev-471-91c2ebbf297c4bf3fecb5f98413e9f00e324e2dc]
        at org.apache.pinot.core.operator.ProjectionOperator.getNextBlock(ProjectionOperator.java:33) ~[pinot-core-0.10.0-dev-471.jar:0.10.0-dev-471-91c2ebbf297c4bf3fecb5f98413e9f00e324e2dc]
        at org.apache.pinot.core.operator.BaseOperator.nextBlock(BaseOperator.java:49) ~[pinot-core-0.10.0-dev-471.jar:0.10.0-dev-471-91c2ebbf297c4bf3fecb5f98413e9f00e324e2dc]
        at org.apache.pinot.core.operator.transform.PassThroughTransformOperator.getNextBlock(PassThroughTransformOperator.java:48) ~[pinot-core-0.10.0-dev-471.jar:0.10.0-dev-471-91c2ebbf297c4bf3fecb5f98413e9f00e324e2dc]
        at org.apache.pinot.core.operator.transform.PassThroughTransformOperator.getNextBlock(PassThroughTransformOperator.java:31) ~[pinot-core-0.10.0-dev-471.jar:0.10.0-dev-471-91c2ebbf297c4bf3fecb5f98413e9f00e324e2dc]
        at org.apache.pinot.core.operator.BaseOperator.nextBlock(BaseOperator.java:49) ~[pinot-core-0.10.0-dev-471.jar:0.10.0-dev-471-91c2ebbf297c4bf3fecb5f98413e9f00e324e2dc]
        at org.apache.pinot.core.operator.query.AggregationGroupByOrderByOperator.getNextBlock(AggregationGroupByOrderByOperator.java:107) ~[pinot-core-0.10.0-dev-471.jar:0.10.0-dev-471-91c2ebbf297c4bf3fecb5f98413e9f00e324e2dc]
        at org.apache.pinot.core.operator.query.AggregationGroupByOrderByOperator.getNextBlock(AggregationGroupByOrderByOperator.java:46) ~[pinot-core-0.10.0-dev-471.jar:0.10.0-dev-471-91c2ebbf297c4bf3fecb5f98413e9f00e324e2dc]
        at org.apache.pinot.core.operator.BaseOperator.nextBlock(BaseOperator.java:49) ~[pinot-core-0.10.0-dev-471.jar:0.10.0-dev-471-91c2ebbf297c4bf3fecb5f98413e9f00e324e2dc]
        at org.apache.pinot.core.operator.combine.GroupByOrderByCombineOperator.processSegments(GroupByOrderByCombineOperator.java:137) ~[pinot-core-0.10.0-dev-471.jar:0.10.0-dev-471-91c2ebbf297c4bf3fecb5f98413e9f00e324e2dc]
        at org.apache.pinot.core.operator.combine.BaseCombineOperator$1.runJob(BaseCombineOperator.java:100) [pinot-core-0.10.0-dev-471.jar:0.10.0-dev-471-91c2ebbf297c4bf3fecb5f98413e9f00e324e2dc]
        at org.apache.pinot.core.util.trace.TraceRunnable.run(TraceRunnable.java:40) [pinot-core-0.10.0-dev-471.jar:0.10.0-dev-471-91c2ebbf297c4bf3fecb5f98413e9f00e324e2dc]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
        at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
        at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125) [guava-30.1.1-jre.jar:?]
        at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69) [guava-30.1.1-jre.jar:?]
        at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78) [guava-30.1.1-jre.jar:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
        at java.lang.Thread.run(Thread.java:834) [?:?]

    Chengxuan Wang

    05/16/2022, 3:35 AM
hello, wondering how I can check whether a query is using the range index. I added the index config like the following, but from the tracing info I didn't see RangeIndexBasedFilterOperator being used.
    Copy code
    "rangeIndexColumns": [
            "some_column"
          ],
    btw, we are using pinot 0.8

    Xiang Fu

    05/16/2022, 4:16 AM
    better check with @Richard Startin, likely that thing is not enabled.

    Alice

    05/16/2022, 8:25 AM
    Hi, we deployed presto based on the helm file. It seems like offset is not enabled. Any idea how to enable offset?

    Dan DC

    05/16/2022, 9:14 AM
Hello, I've got an issue with a realtime table which is consuming from a topic with 16 partitions. Pinot is consuming from all partitions except one, and I can't find issues in the logs. Is there a way to force Pinot to consume from that partition? I've tried rebalancing the servers and reloading all segments, but it still won't consume from this one partition.

    Anish Nair

    05/16/2022, 10:31 AM
Hi Team, regarding the lookup/dimension table and array data type use case: we have created a dimension table with the following schema:
    Copy code
    {
      "schemaName": "test_dim_tags",
      "dimensionFieldSpecs": [
        {
          "name": "id",
          "dataType": "INT"
        },
        {
          "name": "tag_name",
          "dataType": "STRING",
          "singleValueField": false
        }
      ],
      "primaryKeyColumns": [
        "id"
      ]
    }
Now when we use this table in a lookup with the fact table, the query returns no data or throws a NullPointerException. We wanted to use Pinot's array explode functionality along with lookup. Can someone please help us understand?

    Stuart Millholland

    05/16/2022, 6:54 PM
So I've set up my controller/minions/servers to use a GCS bucket in a GKE environment. Is there an easy-button way to test that the GCS bucket permissions and such are working correctly? I don't have any data yet, so I'm curious if there's a way to test that things are working.

    Ali Atıl

    05/17/2022, 11:03 AM
Hello everyone, I am confused about the 'segmentIngestionFrequency' property of batchIngestionConfig and the 'schedule' property of SegmentGenerationAndPushTask. Are they related to each other? If so, do they overwrite each other? It would be great if you could share what you know. Thanks.

    Fizza Abid

    05/17/2022, 12:01 PM
I am getting the date in this format and used the following schema configuration: "name": "dispatch_date", "dataType": "STRING". With TIMESTAMP, it doesn't work.

    Lars-Kristian Svenøy

    05/17/2022, 4:12 PM
Hey everyone. I'm currently using the Flink connector (https://github.com/apache/pinot/blob/master/pinot-connectors/pinot-flink-connector/) to add segments to an offline table of mine. I want to override all segments on every single run, but I see that my segments are currently created with timestamps in the segment names (table_name_mintimevalue_maxtimevalue_sequenceId etc.). What is the best approach to make sure I am always overriding all segments? There are a bunch of configuration options available, but I'm not sure which ones I should use to achieve this.

    Lars-Kristian Svenøy

    05/18/2022, 10:44 AM
    Hello team 👋 . I have a problem that I'm trying to think through. I added a bunch of thoughts around it in my thread above, but I'll summarise under this thread. Any help appreciated.

    Luy

    05/18/2022, 11:41 AM
Hello team. I'm trying to integrate my Pinot data into ThirdEye. I've already uploaded CSV data into a Pinot dataset, but I can't find a way to ingest it into ThirdEye. I tried this in data-sources-config.yml:
    Copy code
    # Please put the mock data source as the first in this configuration.
    dataSourceConfigs:
      - className: org.apache.pinot.thirdeye.datasource.pinot.PinotThirdEyeDataSource
        properties:
          zookeeperUrl: 'localhost:2181'
          clusterName: 'PinotCluster'
          controllerConnectionScheme: 'http'
          controllerHost: '127.0.0.1'
          controllerPort: 9000
          cacheLoaderClassName: org.apache.pinot.thirdeye.datasource.pinot.PinotControllerResponseCacheLoader
        metadataSourceConfigs:
          - className: org.apache.pinot.thirdeye.auto.onboard.AutoOnboardPinotMetadataSource
    And I got an error below.
    Copy code
    2022-05-17 18:26:33.104 [main] INFO  org.apache.pinot.thirdeye.datalayer.util.DaoProviderUtil - Using existing database at 'jdbc:mysql:///thirdeye?autoReconnect=true'
    May 17, 2022 6:26:37 PM org.apache.tomcat.jdbc.pool.ConnectionPool init
    SEVERE: Unable to create initial connections of pool.
    com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Could not create connection to database server. Attempted reconnect 3 times. Giving up.
    I asked about this in other channels, but didn't get any answer. Any guidance will be really helpful.
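Note that the SEVERE message comes from ThirdEye's own metadata store (the 'jdbc:mysql:///thirdeye' connection pool), not from Pinot, so it may be worth verifying that MySQL is reachable before touching the Pinot data source config. A minimal connectivity check (host and credentials are placeholders):
Copy code
import java.sql.Connection;
import java.sql.DriverManager;

public class ThirdEyeDbCheck {
  public static void main(String[] args) throws Exception {
    // Placeholder URL/credentials; substitute whatever ThirdEye is configured with
    try (Connection c = DriverManager.getConnection(
        "jdbc:mysql://localhost:3306/thirdeye?autoReconnect=true", "user", "password")) {
      System.out.println("MySQL reachable: " + c.isValid(5));
    }
  }
}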

    Kriti

    05/18/2022, 2:35 PM
    Hi all, I am having trouble running Pinot 0.11. I get the following error, and would appreciate any help
    Copy code
    Failed to start a Pinot [CONTROLLER] at 7.722 since launch
    java.lang.NullPointerException: Cannot invoke "java.lang.reflect.Method.invoke(Object, Object[])" because "com.sun.xml.bind.v2.runtime.reflect.opt.Injector.defineClass" is null

    Peter Pringle

    05/19/2022, 3:56 AM
Having some issues with the RealtimeToOfflineSegmentsTaskExecutor; exception from the logs:
IllegalArgumentException: watermarksMs in RealtimeToOfflineSegmentsTask metadata {"tableNameWithType":"myTable_REALTIME", "watermarksMs":16175808000000} does not match windowStartMs: %d in task configs for table: ....

    Peter Pringle

    05/19/2022, 3:57 AM
    Offline table is not getting populated

    Abhijeet Kushe

    05/19/2022, 6:02 PM
I am seeing a problem with upsert queries. All the primary keys on the records have different values, but I am only able to see those records with
option (skipUpsert=true)
while those records are not returned with
option (skipUpsert=false)
I added additional primary keys 10 days ago, and the segment was created 2 days back. Any ideas what the issue is? @Neha Pawar

    Nikhil

    05/20/2022, 9:03 PM
👋 Attempting to debug an error running a Pinot Spark batch ingestion job. Using the Pinot 0.10.0 release with JDK 8, built via
mvn clean install -DskipTests -Pbin-dist -T 4 -Djdk.version=8
and getting this error:
    Copy code
    Caused by: shaded.com.fasterxml.jackson.databind.JsonMappingException: Incompatible Jackson version: 2.10.0
    	at shaded.com.fasterxml.jackson.module.scala.JacksonModule$class.setupModule(JacksonModule.scala:64)
    	at shaded.com.fasterxml.jackson.module.scala.DefaultScalaModule.setupModule(DefaultScalaModule.scala:19)
    	at shaded.com.fasterxml.jackson.databind.ObjectMapper.registerModule(ObjectMapper.java:808)
    	at org.apache.spark.util.JsonProtocol$.<init>(JsonProtocol.scala:59)
    	at org.apache.spark.util.JsonProtocol$.<clinit>(JsonProtocol.scala)
    	... 32 more
The job spec is here:
    Copy code
executionFrameworkSpec:
  name: 'spark'
  segmentGenerationJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.spark.SparkSegmentGenerationJobRunner'
  extraConfigs:
    stagingDir: 's3://nikhil-dw-dev/pinot/staging/'
    dependencyJarDir: 's3://nikhil-dw-dev/pinot/apache-pinot-incubating-0.7.1-bin/plugins'
jobType: SegmentCreation
inputDirURI: 's3://nikhil-dw-dev/pinot/pinot_input/'
includeFileNamePattern: 'glob:**/*.parquet'
outputDirURI: 's3://nikhil-dw-dev/pinot/pinot_output3/'
overwriteOutput: true
pinotFSSpecs:
  - className: org.apache.pinot.plugin.filesystem.S3PinotFS
    scheme: s3
    configs:
      region: us-east-1
recordReaderSpec:
  dataFormat: 'parquet'
  className: 'org.apache.pinot.plugin.inputformat.parquet.ParquetRecordReader'
tableSpec:
  tableName: 'students'
  schemaURI: 's3://nikhil-dw-dev/pinot/students_schema.json'
  tableConfigURI: 's3://nikhil-dw-dev/pinot/students_table.json'
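The JsonMappingException above is Jackson's standard guard against registering jackson-module-scala with a mismatched jackson-databind version, typically a Spark-vs-Pinot classpath clash (note the failing classes here sit under the shaded.com.fasterxml package). One generic diagnostic, not Pinot-specific, is to print which unshaded jackson-databind actually wins on the classpath and where it was loaded from:
Copy code
public class JacksonVersionCheck {
  public static void main(String[] args) {
    // Version of the (unshaded) jackson-databind visible to this classloader
    System.out.println(com.fasterxml.jackson.databind.cfg.PackageVersion.VERSION);
    // And which jar it came from:
    System.out.println(com.fasterxml.jackson.databind.ObjectMapper.class
        .getProtectionDomain().getCodeSource().getLocation());
  }
}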

    Anish Nair

    05/23/2022, 3:10 PM
Hi Team, is there any way to apply a filter on an array-type column? We found the VALUEIN function, but it seems to work with equality only. Can someone please help and advise?

    Luis Fernandez

    05/23/2022, 4:52 PM
hey friends, question: I restarted one of the servers and I'm currently getting this error when issuing queries. Why would this be an issue if my data has replication: 2?
    Copy code
    [
      {
        "message": "java.net.UnknownHostException: pinot-server-1.pinot-server-headless.pinot.svc.cluster.local: Name or service not known\n\tat java.base/java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)\n\tat java.base/java.net.InetAddress$PlatformNameService.lookupAllHostAddr(InetAddress.java:929)\n\tat java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1519)\n\tat java.base/java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:848)",
        "errorCode": 425
      },
      {
        "message": "1 servers [pinot-server-1_O] not responded",
        "errorCode": 427
      }
    ]