# troubleshooting

    Eaugene Thomas

    03/06/2023, 3:38 PM
Hey all, I was experimenting with Pinot. I created a Pinot table (consuming from Kafka) with a segment flush time of 10 minutes. Generally Pinot should not lose data when consuming from Kafka, but in this case I can see some records missing (more precisely the endOffset-1, and in some cases endOffset-2, records of segments). Do you have any insight into why this could happen? The upstream Kafka topic has a throughput of 1 event/second.
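For context, the 10-minute flush window described above is normally set in the realtime table's streamConfigs; a minimal sketch (topic, broker, and threshold values are placeholders, not taken from this thread) could look like this:

    "streamConfigs": {
      "streamType": "kafka",
      "stream.kafka.topic.name": "events",
      "stream.kafka.broker.list": "kafka:9092",
      "stream.kafka.consumer.type": "lowlevel",
      "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
      "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder",
      "realtime.segment.flush.threshold.time": "10m",
      "realtime.segment.flush.threshold.rows": "0",
      "realtime.segment.flush.threshold.segment.size": "100M"
    }

When the row threshold is 0, the time and segment-size thresholds control when a consuming segment commits; at 1 event/second, a 10-minute window commits roughly every 600 records, which gives a baseline for comparing segment end offsets against the Kafka partition offsets.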

    Malte Granderath

    03/07/2023, 2:17 PM
Hey again everyone 👋 In the MergeRollupTask we are getting the following error. Has anyone experienced this before?
    org.apache.pinot.common.exception.HttpErrorStatusException: Got error status code: 500 (Internal Server Error) with reason: "Failed to update the segment lineage during startReplaceSegments. (tableName = email_events_v0_OFFLINE, segmentsFrom = [email_events_v0_1527074762433_1527074762433_0], segmentsTo = [merged_1year_1678198440011_0_email_events_v0_1527074762433_1527074762433_0])" while sending request: <http://pinot-controller-0.pinot-controller-headless.pinot-ue-research.svc.iad03.k8s.run:9000/segments/email_events_v0/startReplaceSegments?type=OFFLINE&forceCleanup=true> to controller: pinot-controller-0.pinot-controller-headless.pinot-ue-research.svc.iad03.k8s.run, version: Unknown
    	at org.apache.pinot.common.utils.http.HttpClient.wrapAndThrowHttpException(HttpClient.java:442)
    	at org.apache.pinot.common.utils.FileUploadDownloadClient.startReplaceSegments(FileUploadDownloadClient.java:972)
    	at org.apache.pinot.plugin.minion.tasks.SegmentConversionUtils.startSegmentReplace(SegmentConversionUtils.java:144)
    	at org.apache.pinot.plugin.minion.tasks.SegmentConversionUtils.startSegmentReplace(SegmentConversionUtils.java:130)
    	at org.apache.pinot.plugin.minion.tasks.BaseMultipleSegmentsConversionExecutor.preUploadSegments(BaseMultipleSegmentsConversionExecutor.java:130)
    	at org.apache.pinot.plugin.minion.tasks.BaseMultipleSegmentsConversionExecutor.executeTask(BaseMultipleSegmentsConversionExecutor.java:241)
    	at org.apache.pinot.plugin.minion.tasks.BaseMultipleSegmentsConversionExecutor.executeTask(BaseMultipleSegmentsConversionExecutor.java:74)
    	at org.apache.pinot.minion.taskfactory.TaskFactoryRegistry$1.runInternal(TaskFactoryRegistry.java:121)
    	at org.apache.pinot.minion.taskfactory.TaskFactoryRegistry$1.run(TaskFactoryRegistry.java:95)
    	at org.apache.helix.task.TaskRunner.run(TaskRunner.java:75)
    	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    	at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
    	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    	at java.base/java.lang.Thread.run(Thread.java:829)
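For reference, the MergeRollupTask that produced the merged_1year_... segment above is driven by the table's task config; a rough sketch is below (the merge-level name matches the segment prefix, but the periods are assumptions, not values from this cluster):

    "task": {
      "taskTypeConfigsMap": {
        "MergeRollupTask": {
          "1year.mergeType": "rollup",
          "1year.bucketTimePeriod": "365d",
          "1year.bufferTimePeriod": "30d"
        }
      }
    }

The 500 from startReplaceSegments is raised on the controller side while updating segment lineage, so the task config itself may be fine; the snippet only shows where the task is defined.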

    Ehsan Irshad

    03/14/2023, 4:32 AM
Hi Team, may I know what the difference is between the two endpoints below? 1 works but 2 always fails.
1. POST /tables - adds a table
2. POST /tableConfigs - adds the TableConfigs using the tableConfigsStr JSON
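One difference worth checking is the request body shape: POST /tables takes a single table config JSON, while POST /tableConfigs expects a combined TableConfigs object that bundles the schema with the offline and/or realtime table configs, and validates them together (the schema name must match the table name). A rough sketch of the second payload, with placeholder values:

    {
      "tableName": "myTable",
      "schema": {
        "schemaName": "myTable",
        "dimensionFieldSpecs": [{ "name": "id", "dataType": "STRING" }]
      },
      "offline": {
        "tableName": "myTable_OFFLINE",
        "tableType": "OFFLINE",
        "segmentsConfig": { "replication": "1" },
        "tableIndexConfig": {},
        "tenants": {},
        "metadata": {}
      }
    }

If the bundled schema or configs fail that combined validation, /tableConfigs rejects the request even when /tables accepts the table config alone; whether that is the failure here is an assumption worth checking against the controller's error message.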

    Lvszn Peng

    03/14/2023, 6:30 AM
Hello, I have a question about a SQL statement like this:
select count(user_id), count(distinct user_id) from table where company_id = 'a'
If an inverted index is added on user_id, will it speed up the query?
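For reference, inverted index columns are declared in tableIndexConfig; a minimal sketch, using the column names from the query above:

    "tableIndexConfig": {
      "invertedIndexColumns": ["company_id"]
    }

An inverted index speeds up filtering, so for this query the filter column company_id is the more natural candidate; count(distinct user_id) is an aggregation over the rows that survive the filter, so an index on user_id itself is less likely to help.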

    Ehsan Irshad

    03/14/2023, 6:58 AM
Hi Team. The delete operation in the console is pretty scary. I believe we should ask the user (admin) to type the table name before deletion, just to confirm that the intended table is the one being deleted. We ended up deleting a different table because multiple tables were open in multiple browser tabs. Should I open an issue for this? cc: @Lee Wei Hern Jason

    Jeff Bolle

    03/14/2023, 1:32 PM
Is the ProtoBuf message decoder supposed to handle nested messages? I am importing data from a Pulsar topic, and while some of the data is present, our nested messages are null.

    Rajat Yadav

    03/14/2023, 2:12 PM
Hi team, we created an offline table and started pushing data to it, but we deleted the table before all the data was loaded. Now there are some pending tasks in the minion and they are throwing a "schema not found" error. Can anyone please help with how to delete pending tasks in the minion?

    Rajat Yadav

    03/14/2023, 2:13 PM
Caught exception while executing task: Task_SegmentGenerationAndPushTask_1678797198644_1
java.lang.RuntimeException: Failed to execute SegmentGenerationAndPushTask
	at org.apache.pinot.plugin.minion.tasks.segmentgenerationandpush.SegmentGenerationAndPushTaskExecutor.executeTask(SegmentGenerationAndPushTaskExecutor.java:120) ~[pinot-all-0.10.0-jar-with-dependencies.jar:0.10.0-30c4635bfeee88f88aa9c9f63b93bcd4a650607f]
	at org.apache.pinot.minion.taskfactory.TaskFactoryRegistry$1.runInternal(TaskFactoryRegistry.java:111) [pinot-all-0.10.0-jar-with-dependencies.jar:0.10.0-30c4635bfeee88f88aa9c9f63b93bcd4a650607f]
	at org.apache.pinot.minion.taskfactory.TaskFactoryRegistry$1.run(TaskFactoryRegistry.java:88) [pinot-all-0.10.0-jar-with-dependencies.jar:0.10.0-30c4635bfeee88f88aa9c9f63b93bcd4a650607f]
	at org.apache.helix.task.TaskRunner.run(TaskRunner.java:71) [pinot-all-0.10.0-jar-with-dependencies.jar:0.10.0-30c4635bfeee88f88aa9c9f63b93bcd4a650607f]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
	at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) [?:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: java.lang.IllegalStateException: Failed to find schema for table: discovery_OFFLINE
	at shaded.com.google.common.base.Preconditions.checkState(Preconditions.java:518) ~[pinot-all-0.10.0-jar-with-dependencies.jar:0.10.0-30c4635bfeee88f88aa9c9f63b93bcd4a650607f]
	at org.apache.pinot.plugin.minion.tasks.BaseTaskExecutor.getTableConfig(BaseTaskExecutor.java:51) ~[pinot-all-0.10.0-jar-with-dependencies.jar:0.10.0-30c4635bfeee88f88aa9c9f63b93bcd4a650607f]
	at org.apache.pinot.plugin.minion.tasks.segmentgenerationandpush.SegmentGenerationAndPushTaskExecutor.generateTaskSpec(SegmentGenerationAndPushTaskExecutor.java:298) ~[pinot-all-0.10.0-jar-with-dependencies.jar:0.10.0-30c4635bfeee88f88aa9c9f63b93bcd4a650607f]
	at org.apache.pinot.plugin.minion.tasks.segmentgenerationandpush.SegmentGenerationAndPushTaskExecutor.executeTask(SegmentGenerationAndPushTaskExecutor.java:117) ~[pinot-all-0.10.0-jar-with-dependencies.jar:0.10.0-30c4635bfeee88f88aa9c9f63b93bcd4a650607f]
	... 9 more
Task: Task_SegmentGenerationAndPushTask_1678797198644_1 completed in: 251ms

    Rajat Yadav

    03/14/2023, 2:14 PM
This is the error I am getting.
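Regarding clearing the pending minion tasks from the question above: the controller exposes task-management endpoints in Swagger (exact names and parameters can vary by version, so treat this as a sketch to verify against your controller's Swagger UI):

    # stop the queue for the task type, then delete its tasks
    curl -X PUT "http://<controller>:9000/tasks/SegmentGenerationAndPushTask/stop"
    curl -X DELETE "http://<controller>:9000/tasks/SegmentGenerationAndPushTask?forceDelete=true"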

    Jack Luo

    03/14/2023, 6:08 PM
Hi Team, quick question. Under high-throughput scenarios (i.e. 2PB of data per week across 20 servers), we noticed that Java's direct buffer pool usage is sometimes very high. Capping the direct buffer pool size would lead to OOM (if I remember correctly, the stack trace is from Java's NIO library failing to allocate more memory). Given that we are using memory-mapped segment mode for tables, what operations within Pinot might contribute to such high direct buffer pool usage?

    suraj sheshadri

    03/14/2023, 7:48 PM
We are trying to improve the performance of queries on our OFFLINE table. We have data for multiple countries. If we partition the source data files based on country, will that help, or do we need to use functions like Modulo or Murmur to partition the data? Is there sample Scala code and Pinot configuration to achieve this? Thanks.
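For reference, column-based partitioning is declared on the Pinot side via segmentPartitionConfig plus the partition-based segment pruner; a rough sketch (column name and partition count are placeholders):

    "tableIndexConfig": {
      "segmentPartitionConfig": {
        "columnPartitionMap": {
          "country": { "functionName": "Murmur", "numPartitions": 8 }
        }
      }
    },
    "routing": {
      "segmentPrunerTypes": ["partition"]
    }

The data has to actually be split with the same function at segment-generation time: grouping source files by raw country value while declaring Murmur here would not match, so the partitioning applied in the ingestion job and the function named in the config must agree.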

    Laxman Ch

    03/15/2023, 7:36 AM
Hi Team, we are facing yet another issue while upgrading from 0.10.0 to 0.12.0. Just after the upgrade, the FRESHNESS_LAG_MS metric in pinot-server went up significantly (from a couple of seconds to days). However, I don't see any real consumption lag, and data is getting ingested into Pinot without any delays. We are monitoring these metrics and this is causing false alerts. I'm trying to walk through the code to understand how this lag metric is computed and whether anything changed after 0.10. Please provide any hints/suggestions to troubleshoot this.

    Lee Wei Hern Jason

    03/15/2023, 8:29 AM
Hi Team, I am trying to use SegmentMetadataPush to push my realtime segments to an offline table and am facing the following issue: when there is more than one segment tar.gz file in the deep store, the job throws the exception below, but with only one tar.gz file it does not. It looks like it always passes for the first tar.gz file and fails for the next one. Exception:
    Copy <s3://stg-pinot-archive/stg-bluering-pinot/controller-data/daxFoodSurgeMetric/daxFoodSurgeMetric__3__146__20230311T2306Z.tar.gz> to local /tmp/segmentTar-9232a0fa-2de9-403f-b894-fac9fca8046d.tar.gz
    Trying to untar Metadata file from: [/tmp/segmentTar-9232a0fa-2de9-403f-b894-fac9fca8046d.tar.gz] to [/tmp/segmentMetadataDir-9232a0fa-2de9-403f-b894-fac9fca8046d]
    Trying to untar CreationMeta file from: [/tmp/segmentTar-9232a0fa-2de9-403f-b894-fac9fca8046d.tar.gz] to [/tmp/segmentMetadataDir-9232a0fa-2de9-403f-b894-fac9fca8046d]
    Trying to tar segment metadata dir [/tmp/segmentMetadataDir-9232a0fa-2de9-403f-b894-fac9fca8046d] to [/tmp/segmentMetadata-9232a0fa-2de9-403f-b894-fac9fca8046d.tar.gz]
    Pushing segment: daxFoodSurgeMetric__3__146__20230311T2306Z to location: <http://ct.stg-bluering-pinot.coban.stg-myteksi.com:9000> for table daxFoodSurgeMetric
    Sending request: <http://ct.stg-bluering-pinot.coban.stg-myteksi.com:9000/v2/segments?tableName=daxFoodSurgeMetric> to controller: ip-xx-xxx-xxx-xx.ap-southeast-1.compute.internal, version: Unknown
    Caught temporary exception while pushing table: daxFoodSurgeMetric segment: daxFoodSurgeMetric__3__146__20230311T2306Z to <http://ct.stg-bluering-pinot.coban.stg-myteksi.com:9000>, will retry
    org.apache.pinot.common.exception.HttpErrorStatusException: Got error status code: 500 (Internal Server Error) with reason: "Exception while uploading segment: No enum constant org.apache.pinot.common.utils.FileUploadDownloadClient.FileUploadType.METADATA,METADATA" while sending request: <http://ct.stg-bluering-pinot.coban.stg-myteksi.com:9000/v2/segments?tableName=daxFoodSurgeMetric> to controller: ip-10-110-219-43.ap-southeast-1.compute.internal, version: Unknown
    	at org.apache.pinot.common.utils.http.HttpClient.wrapAndThrowHttpException(HttpClient.java:442) ~[pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.common.utils.FileUploadDownloadClient.uploadSegmentMetadata(FileUploadDownloadClient.java:583) ~[pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.segment.local.utils.SegmentPushUtils.lambda$sendSegmentUriAndMetadata$2(SegmentPushUtils.java:314) ~[pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.spi.utils.retry.BaseRetryPolicy.attempt(BaseRetryPolicy.java:50) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.segment.local.utils.SegmentPushUtils.sendSegmentUriAndMetadata(SegmentPushUtils.java:304) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.segment.local.utils.SegmentPushUtils.sendSegmentUriAndMetadata(SegmentPushUtils.java:136) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.plugin.ingestion.batch.standalone.SegmentMetadataPushJobRunner.uploadSegments(SegmentMetadataPushJobRunner.java:38) [pinot-batch-ingestion-standalone-0.12.0-shaded.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.plugin.ingestion.batch.common.BaseSegmentPushJobRunner.run(BaseSegmentPushJobRunner.java:149) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.spi.ingestion.batch.IngestionJobLauncher.kickoffIngestionJob(IngestionJobLauncher.java:150) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.spi.ingestion.batch.IngestionJobLauncher.runIngestionJob(IngestionJobLauncher.java:118) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.tools.admin.command.LaunchDataIngestionJobCommand.execute(LaunchDataIngestionJobCommand.java:130) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.tools.Command.call(Command.java:33) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.tools.Command.call(Command.java:29) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at picocli.CommandLine.executeUserObject(CommandLine.java:1953) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at picocli.CommandLine.access$1300(CommandLine.java:145) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at picocli.CommandLine$RunLast.executeUserObjectOfLastSubcommandWithSameParent(CommandLine.java:2352) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at picocli.CommandLine$RunLast.handle(CommandLine.java:2346) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at picocli.CommandLine$RunLast.handle(CommandLine.java:2311) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at picocli.CommandLine$AbstractParseResultHandler.execute(CommandLine.java:2179) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at picocli.CommandLine.execute(CommandLine.java:2078) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.tools.admin.PinotAdministrator.execute(PinotAdministrator.java:171) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
    	at org.apache.pinot.tools.admin.PinotAdministrator.main(PinotAdministrator.java:202) [pinot-all-0.12.0-jar-with-dependencies.jar:0.12.0-118f5e065cb258c171d97a586183759fbc61e2bf]
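For reference, a standalone SegmentMetadataPush job spec looks roughly like the sketch below (URIs and names are placeholders, not the job spec from this thread):

    executionFrameworkSpec:
      name: standalone
      segmentMetadataPushJobRunnerClassName: org.apache.pinot.plugin.ingestion.batch.standalone.SegmentMetadataPushJobRunner
    jobType: SegmentMetadataPush
    inputDirURI: s3://bucket/controller-data/myTable/
    includeFileNamePattern: glob:**/*.tar.gz
    pinotFSSpecs:
      - scheme: s3
        className: org.apache.pinot.plugin.filesystem.S3PinotFS
    tableSpec:
      tableName: myTable
    pinotClusterSpecs:
      - controllerURI: http://<controller>:9000
    pushJobSpec:
      pushAttempts: 2
      pushParallelism: 1

The duplicated value in "FileUploadType.METADATA,METADATA" reads as if the upload-type header is being sent twice for the second segment, but that is a guess from the log, not a confirmed diagnosis.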

    Malte Granderath

    03/15/2023, 10:18 AM
Hey everyone 👋 For running in production we need to configure mTLS for the communication with ZooKeeper. I already looked through the source of both Pinot and Helix, but I could not find anything related to it. Is there any way to pass through the ZooKeeper connection options?
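On the ZooKeeper TLS question: the ZooKeeper client itself reads its TLS settings from JVM system properties, so one avenue (assuming the Helix/Pinot ZK client picks up JVM-wide properties, which is worth verifying for your version) is to pass something like the following to each Pinot component's JVM; paths and passwords are placeholders:

    -Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
    -Dzookeeper.client.secure=true
    -Dzookeeper.ssl.keyStore.location=/path/to/keystore.jks
    -Dzookeeper.ssl.keyStore.password=changeit
    -Dzookeeper.ssl.trustStore.location=/path/to/truststore.jks
    -Dzookeeper.ssl.trustStore.password=changeit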

    Rajat Yadav

    03/15/2023, 1:49 PM
Hi team, I had one doubt: we are pushing data to an offline table. Is there any way to stop pushing data for now and resume pushing to the same table later?

    Lee Wei Hern Jason

    03/15/2023, 4:05 PM
Hi Team, I am seeing a bunch of these error logs when I try to start up a new server process. Any idea why? After starting, the server's CPU usage is extremely low compared to the rest of my servers.
    [StateModel] [Cleanup thread for stg-mimic-pinot-Server_ip-10-110-223-100.ap-southeast-1.compute.internal_8098-PARTICIPANT] Default reset method invoked. Either because the process longer own this resource or session timedout

    abhinav wagle

    03/15/2023, 7:05 PM
Hello, what's the right way to remove a "dead" node from a Pinot cluster? Ours is a k8s-based cluster. Even though the underlying pod is removed, ZooKeeper still keeps the state of the node, which we believe is resulting in the segment state not being ideal for the table.
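For reference, removing a dead node usually means removing it from Helix state before dropping it; a rough sequence against the controller API is sketched below (instance and table names are placeholders, and the exact steps differ between servers and brokers, so verify against your Swagger UI):

    # untag the instance so nothing new is assigned to it
    curl -X PUT "http://<controller>:9000/instances/Server_<host>_8098/updateTags?tags="
    # rebalance affected tables so segments move off the dead server (repeat per table)
    curl -X POST "http://<controller>:9000/tables/myTable/rebalance?type=OFFLINE"
    # drop the instance once it no longer appears in any ideal state
    curl -X DELETE "http://<controller>:9000/instances/Server_<host>_8098"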

    Ryan Ivey

    03/16/2023, 2:06 AM
Hi, can anyone shed some light on the error I'm receiving when running pinot-admin.sh RealtimeProvisioningHelper? I'm providing the tableConfigFile and sampleCompletedSegmentDir.
    java.lang.NullPointerException: Name is null
    	at java.base/java.lang.Enum.valueOf(Enum.java:238)
    	at java.base/java.util.concurrent.TimeUnit.valueOf(TimeUnit.java:75)
    	at org.apache.pinot.tools.admin.command.RealtimeProvisioningHelperCommand.execute(RealtimeProvisioningHelperCommand.java:225)
    	at org.apache.pinot.tools.Command.call(Command.java:33)
    	at org.apache.pinot.tools.Command.call(Command.java:29)
    	at picocli.CommandLine.executeUserObject(CommandLine.java:1953)
    	at picocli.CommandLine.access$1300(CommandLine.java:145)
    	at picocli.CommandLine$RunLast.executeUserObjectOfLastSubcommandWithSameParent(CommandLine.java:2352)
    	at picocli.CommandLine$RunLast.handle(CommandLine.java:2346)
    	at picocli.CommandLine$RunLast.handle(CommandLine.java:2311)
    	at picocli.CommandLine$AbstractParseResultHandler.execute(CommandLine.java:2179)
    	at picocli.CommandLine.execute(CommandLine.java:2078)
    	at org.apache.pinot.tools.admin.PinotAdministrator.execute(PinotAdministrator.java:171)
    	at org.apache.pinot.tools.admin.PinotAdministrator.main(PinotAdministrator.java:202)
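For comparison, the documented invocation of the helper takes several more arguments than the two mentioned above; the NullPointerException at TimeUnit.valueOf suggests some time-unit field the command expects (for example a retention or push-frequency setting) is missing, though that is a guess from the stack trace rather than a confirmed cause. A sketch with placeholder values follows; flag names should be checked against -help for your version:

    pinot-admin.sh RealtimeProvisioningHelper \
      -tableConfigFile /path/to/table-config.json \
      -sampleCompletedSegmentDir /path/to/completed/segment \
      -numPartitions 4 \
      -numHosts 2,4,6 \
      -numHours 6,12,18,24 \
      -ingestionRate 100 \
      -retentionHours 72 \
      -maxUsableHostMemory 10G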

    Atul Singh Rajput

    03/16/2023, 7:52 AM
Hi, we have deleted the table from the Pinot server, but the data still exists in the deleted segments area of the Pinot controller. How can we resolve this?
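On the deleted-segments question: when a table is deleted, its segments are moved to a Deleted_Segments area in the controller's data directory / deep store and cleaned up later by the retention manager. That cleanup window is governed by a controller setting, roughly:

    # controller.conf
    controller.deleted.segments.retentionInDays=7

Lowering this value (or waiting out the default) should clear the leftover data; the exact property name is worth double-checking against the controller configuration reference for your version.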

    Rajat Yadav

    03/16/2023, 8:32 AM
Hi team, I am getting this message while running a query on Pinot from Superset:
    Slow query: request handler processing time: 1683, send response latency: 0, total time to handle request: 1683
How should we handle these slow queries?

    Rajat Yadav

    03/16/2023, 8:33 AM
If we increase the heap space of the servers, is it going to impact the data we already have?

    Deena Dhayalan

    03/16/2023, 2:31 PM
Has anyone managed to start the controller, broker, and server properly with HDFS configured as deep storage and the multi-stage query engine enabled? My problem: while starting the server, the broker has already bound the pinot-query-runner port 8442 (the port given in the docs), so server startup throws an "Address already in use" exception for the query runner port. Can anyone say how to resolve this?

    Stuart Millholland

    03/16/2023, 5:36 PM
Quick question for anyone using merge rollup for realtime: is it possible to build realtime segments using one timestamp field, say inserted_timestamp, and then roll up using a different timestamp field in the data, say event_timestamp? The reason for this question is that we have late-arriving data that might be inserted today even though the event is days or weeks old, and we want to query on event_timestamp, not inserted_timestamp.

    Joseph Price

    03/17/2023, 12:54 AM
Hi, I see a PR that added support for passing enableLogicalTypes to the Avro reader, but I can't seem to find where I should pass that in my realtime table definition.

    Jaden Park

    03/17/2023, 1:34 AM
Hello, I am currently using Spark 3.3 on EMR on EKS and am looking for the jar file for Spark batch ingestion. Does the latest version of Pinot (0.12.0) only support Spark 3.2? Do I have to downgrade the Spark version?

    Ehsan Irshad

    03/17/2023, 6:48 AM
Hi Team. I have the following questions on inverted indexes:
1. autoGeneratedInvertedIndex is not documented. If I set this field to true but do not specify the columns, will it generate an inverted index on all dimension columns?
2. createInvertedIndexDuringSegmentGeneration is documented, but if I set it to true, does that mean it will try to index all the incoming data while consuming from the Kafka source?
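For reference, both flags live in tableIndexConfig next to the explicit column list; a minimal sketch (the column names are placeholders):

    "tableIndexConfig": {
      "invertedIndexColumns": ["dim1", "dim2"],
      "autoGeneratedInvertedIndex": false,
      "createInvertedIndexDuringSegmentGeneration": false
    }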

    Rajat Yadav

    03/17/2023, 7:33 AM
Hi team, we are running a dynamic query in Superset to fetch data from Pinot through Trino. We are using a LIKE operator in the query, which makes the query take 40-45 seconds to return data. Is there anything we can use instead of the LIKE operator? The data size is 400 million rows. This is the query:
    SELECT sum("Jobs")
    FROM
      (WITH fromCompany as
         (Select count(*) JobsTotal
          from pinot.default.table1
          where '1'='1'
        AND jobTitle LIKE ('%.NET Programmer%')
            AND standard_country IN ('United States')) select *
       from fromCompany) AS virtual_table
    LIMIT 100000;

    Rajat Yadav

    03/17/2023, 7:35 AM
Line 7 of the query (the jobTitle LIKE clause) is added based on the filter applied; that's where the response time increases.
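One alternative to LIKE that Pinot itself offers is a text index queried with TEXT_MATCH; whether it can be used through Trino depends on how the query reaches Pinot (for example via the connector's passthrough/dynamic-table support), so treat this as a sketch of the Pinot side only. Field and table names are taken from the query above; the phrase inside TEXT_MATCH is an assumed translation of the '%.NET Programmer%' pattern:

    "fieldConfigList": [
      { "name": "jobTitle", "encodingType": "RAW", "indexType": "TEXT" }
    ]

    SELECT count(*) FROM table1
    WHERE TEXT_MATCH(jobTitle, '"net programmer"')
      AND standard_country = 'United States'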

    Rajat Yadav

    03/17/2023, 8:19 AM
When I try to delete a resource (a broker instance) in Pinot through Swagger, it throws this error:
    {
      "code": 409,
      "error": "Failed to drop instance Broker_broker-39.dna-pinot-1-broker-headless..svc.cluster.local_8099 - Instance Broker_broker-39.dna-pinot-1-broker-headless.svc.cluster.local_8099 exists in ideal state for brokerResource"
    }

    Rajat Yadav

    03/17/2023, 8:19 AM
We need to delete these brokers because Trino is also routing queries to the dead brokers, and those queries fail.
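For the 409 above: the controller refuses to drop an instance that is still referenced in the brokerResource ideal state, so the broker has to be removed from routing first. A rough sequence follows (the instance name is taken from the error, the table name is a placeholder, and endpoint details vary a bit across versions, so verify them in Swagger):

    # remove the broker tag so it is no longer part of any broker tenant
    curl -X PUT "http://<controller>:9000/instances/Broker_broker-39.dna-pinot-1-broker-headless.svc.cluster.local_8099/updateTags?tags="
    # rebuild the broker resource for each table served by that tenant
    curl -X POST "http://<controller>:9000/tables/myTable/rebuildBrokerResourceFromHelixTags"
    # then drop the instance
    curl -X DELETE "http://<controller>:9000/instances/Broker_broker-39.dna-pinot-1-broker-headless.svc.cluster.local_8099"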