# general
  • j

    James Shao

    03/27/2019, 8:34 PM
    kk, got it
  • s

    Shireen Nagdive

    04/01/2019, 11:11 PM
    Hi Team, I was trying to understand a few patterns. What is the expected pattern for the 99th percentile latency if QPS increases?
  • r

    Ravi

    04/01/2019, 11:20 PM
    Given a cluster with N brokers and M servers, 99th percentile latency usually follows a hockey-stick pattern with respect to QPS. We just ran these experiments internally for some use cases. The latency increases very slowly with QPS, and then at a particular inflection point it shoots up exponentially.
  • r

    Ravi

    04/01/2019, 11:20 PM
    something like this
  • r

    Ravi

    04/01/2019, 11:21 PM
    image.png
  • r

    Ravi

    04/01/2019, 11:21 PM
    this is just a sample to show how it might look
  • r

    Ravi

    04/01/2019, 11:21 PM
    And also assumes that query patterns are fixed
  • s

    Sunitha

    04/01/2019, 11:24 PM
    Once the servers are maxed out (CPU-wise), the latency shoots up as Ravi mentioned; most of it is due to queueing delays. You will most likely see requests hitting the broker's configured timeout (30s by default) and the latency plateauing there.
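As a rough illustration of why queueing produces that hockey stick (an M/M/1 queue is an assumption used only to show the shape; it is not how Pinot schedules or executes queries): in M/M/1 the response time is exponentially distributed with rate μ − λ, so the p99 is −ln(0.01)/(μ − λ), which stays nearly flat and then blows up as the offered QPS λ approaches the service rate μ.

```python
# Illustrative only: M/M/1 queueing is an assumed toy model chosen to show the
# hockey-stick shape of tail latency vs. QPS; it is not Pinot's execution model.
# In M/M/1 the response time is exponential with rate (mu - lam), so the
# p99 quantile is -ln(1 - 0.99) / (mu - lam).
import math

MU = 400.0  # assumed capacity: queries/sec a single server can sustain

for qps in (50, 100, 200, 300, 350, 390, 399):
    lam = float(qps)
    if lam >= MU:
        print(f"{qps:>4} qps: saturated -- requests queue up until they hit the broker timeout")
        continue
    p99_ms = -math.log(1 - 0.99) / (MU - lam) * 1000.0
    print(f"{qps:>4} qps: p99 ~ {p99_ms:7.1f} ms")
```

The numbers are arbitrary; the point is that the curve stays almost flat and then climbs steeply once QPS nears the servers' capacity, which matches the plateau at the broker timeout once requests start queueing.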
  • s

    Shireen Nagdive

    04/01/2019, 11:30 PM
    Thanks. I am currently a student; we implemented a new segment assignment strategy and published a paper on it last year. I am trying to run the experiments to get the pattern of the 99th percentile latency as QPS increases, but the latency is decreasing.
  • s

    Shireen Nagdive

    04/01/2019, 11:31 PM
    Trying to figure out why.. some leads would be appreciated
  • m

    Mayank

    04/01/2019, 11:32 PM
    How are you measuring the latency? Also, could you provide details about your experiment, e.g. cluster setup (how many servers/brokers), data size, query pattern, how you are firing queries, QPS, latency observed, etc.?
  • s

    Shireen Nagdive

    04/01/2019, 11:35 PM
    1 Broker, 1 Server, 30K records, QPS (100,200,300), latency observed was 32 ms maximum
  • m

    Mayank

    04/01/2019, 11:35 PM
    How long do you let it run for each qps?
  • s

    Shireen Nagdive

    04/01/2019, 11:36 PM
    5 minutes
  • m

    Mayank

    04/01/2019, 11:36 PM
    And you are observing latency at 300qps < latency at 200|100qps?
  • s

    Shireen Nagdive

    04/01/2019, 11:37 PM
    Yes
  • m

    Mayank

    04/01/2019, 11:38 PM
    What are the actual latency numbers?
  • m

    Mayank

    04/01/2019, 11:38 PM
    With such a small data size, maybe you need to increase QPS quite a bit
  • m

    Mayank

    04/01/2019, 11:39 PM
    Also, make sure that each query is getting correct response.
  • m

    Mayank

    04/01/2019, 11:40 PM
    How are you sending the queries? Do you have query runner that is firing queries in parallel (multiple threads)?
  • s

    Shireen Nagdive

    04/01/2019, 11:41 PM
    Sorry, the record count is 1,198,388
  • m

    Mayank

    04/01/2019, 11:41 PM
    1.19M records? What's the data size in MB?
  • s

    Shireen Nagdive

    04/01/2019, 11:41 PM
    Yes, we are executing queries in parallel; we have a query generator
  • m

    Mayank

    04/01/2019, 11:42 PM
    What are the actual latency numbers?
  • m

    Mayank

    04/01/2019, 11:44 PM
    Typically, if latency is going down over time, it is because of JVM optimizations kicking in and/or the data being fully in memory. It is also possible that queries are not actually being run (e.g. connection refused)
  • m

    Mayank

    04/01/2019, 11:45 PM
    I'd recommend making sure that the latency numbers are real (i.e. the queries are actually executing), and if so, keep increasing QPS until latency starts degrading
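A rough sketch of that kind of check: fire queries at a fixed target QPS from multiple threads, count a query as successful only if the broker returns a clean result (so connection refusals or query exceptions don't show up as suspiciously fast "latencies"), and report the p99. The broker host/port, the /query/sql endpoint, and the {"sql": ...} payload are assumptions about the broker query API (older Pinot versions used PQL); the table name and query are hypothetical. Adjust all of them to your setup.

```python
# Sketch of a parallel query runner: fixed target QPS, response validation,
# and a p99 latency report.  Endpoint, payload shape, table, and query are
# assumptions -- adapt them to your Pinot version's broker query API.
import json
import threading
import time
import urllib.request

BROKER_URL = "http://localhost:8099/query/sql"   # assumed broker endpoint
QUERY = "SELECT COUNT(*) FROM myTable"           # hypothetical test query
TARGET_QPS = 300
DURATION_SEC = 300                               # 5 minutes, matching the runs above
NUM_THREADS = 20

latencies_ms, errors = [], []
lock = threading.Lock()

def worker(per_thread_qps):
    interval = 1.0 / per_thread_qps
    deadline = time.time() + DURATION_SEC
    while time.time() < deadline:
        start = time.time()
        try:
            req = urllib.request.Request(
                BROKER_URL,
                data=json.dumps({"sql": QUERY}).encode(),
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req, timeout=30) as resp:
                body = json.loads(resp.read())
            # Treat broker-reported query exceptions as failures, so they and
            # connection refusals don't masquerade as fast queries.
            if body.get("exceptions"):
                raise RuntimeError(body["exceptions"])
            with lock:
                latencies_ms.append((time.time() - start) * 1000.0)
        except Exception as exc:
            with lock:
                errors.append(str(exc))
        # Crude pacing toward the per-thread rate.
        time.sleep(max(0.0, interval - (time.time() - start)))

threads = [threading.Thread(target=worker, args=(TARGET_QPS / NUM_THREADS,))
           for _ in range(NUM_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

latencies_ms.sort()
if latencies_ms:
    p99 = latencies_ms[int(0.99 * (len(latencies_ms) - 1))]
    print(f"ok={len(latencies_ms)} errors={len(errors)} p99={p99:.1f} ms")
else:
    print(f"no successful queries; {len(errors)} errors, e.g. {errors[:1]}")
```

If the error count is non-zero, or the ok count is far below TARGET_QPS * DURATION_SEC, the latency numbers are not trustworthy yet.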
  • s

    Shireen Nagdive

    04/01/2019, 11:49 PM
    Yea..Give me a minute.. I will give the latency numbers
  • k

    Kishore G

    04/02/2019, 12:48 AM
    Can you point us to the paper?
  • s

    Shireen Nagdive

    04/02/2019, 1:16 AM
    Sure
  • s

    Shireen Nagdive

    04/02/2019, 1:17 AM
    https://ieeexplore.ieee.org/document/8416407