# troubleshooting
  • Jatin Kumar

    08/07/2022, 1:43 PM
    Hello! Have we removed the
    participant
    folder from ZooKeeper in the latest release?
  • Stuart Millholland

    08/07/2022, 3:23 PM
    I'm going to pin this here for Monday. We are using the realtime-to-offline segments task, and our realtime table has this configuration for the job:
  • Ethan Huang

    08/08/2022, 7:38 AM
    Hi team, I’m using Presto with Pinot, and basic auth is enabled in Pinot. I ran into some issues, could you please help me? • Got a ‘bad request’ error when ‘pinot.use-streaming-for-segment-queries=true’ is configured in Presto. • Got ‘Error when hitting host Server_pinot-server-4.pinot-server-headless.pinot.svc.cluster.local_8098’ when ‘pinot.use-streaming-for-segment-queries=false’ is configured in Presto. It seems these errors only occur when Presto communicates with Pinot servers directly; queries that can be pushed down to the Pinot brokers work well. I think it is related to auth. Is there a way to configure the correct auth token for Presto to communicate with Pinot servers? My Pinot version is 0.11.0-SNAPSHOT, built by myself in July. The Presto docker image is 0.276-SNAPSHOT-e4d8032aac-20220722. Thank you.
  • harnoor

    08/08/2022, 5:49 PM
    Hi folks, I see that segments for a particular table are not being relocated from realtime servers to offline servers. For that table, I can see
    Relocation failed for table
    because segments are in ERROR state. We have used the Reset API for all the segments that were in BAD state, but I'm trying to understand why this error occurs. (No table config change was made on our end, and segments were being relocated successfully earlier.)
    Copy code
    2022/08/08 16:22:18.328 INFO [PinotLLCRealtimeSegmentManager] [grizzly-http-server-1] Committing segment metadata for segment: span_event_view_1__91__4093__20220808T1558Z
    --
    java.lang.IllegalStateException: Found segments in ERROR state
    	at org.apache.pinot.controller.helix.core.rebalance.TableRebalancer.isExternalViewConverged(TableRebalancer.java:556) ~[pinot-all-0.9.1-jar-with-dependencies.jar:0.9.1-f8ec6f6f8eead03488d3f4d0b9501fc3c4232961]
    	at org.apache.pinot.controller.helix.core.rebalance.TableRebalancer.waitForExternalViewToConverge(TableRebalancer.java:498) ~[pinot-all-0.9.1-jar-with-dependencies.jar:0.9.1-f8ec6f6f8eead03488d3f4d0b9501fc3c4232961]
    	at org.apache.pinot.controller.helix.core.rebalance.TableRebalancer.rebalance(TableRebalancer.java:361) ~[pinot-all-0.9.1-jar-with-dependencies.jar:0.9.1-f8ec6f6f8eead03488d3f4d0b9501fc3c4232961]
    	at org.apache.pinot.controller.helix.core.relocation.SegmentRelocator.lambda$processTable$0(SegmentRelocator.java:96) ~[pinot-all-0.9.1-jar-with-dependencies.jar:0.9.1-f8ec6f6f8eead03488d3f4d0b9501fc3c4232961]
    	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
    	at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    	at java.lang.Thread.run(Thread.java:829) [?:?]
    2022/08/08 17:10:25.590 ERROR [SegmentRelocator] [restapi-multiget-thread-2037] Relocation failed for table: span_event_view_1_REALTIME
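    For reference, the Reset API mentioned above is exposed by the controller REST API; a minimal sketch, assuming a controller reachable at localhost:9000 and reusing the table and segment names from the log:
    Copy code
    # Reset a single segment (assumption: controller at localhost:9000)
    curl -X POST "http://localhost:9000/segments/span_event_view_1_REALTIME/span_event_view_1__91__4093__20220808T1558Z/reset"
    # Or reset all segments of the table
    curl -X POST "http://localhost:9000/segments/span_event_view_1_REALTIME/reset"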
  • Luis Fernandez

    08/08/2022, 6:27 PM
    Are there any logs around queries that exceed the default timeout of 10s? If so, where are they?
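    For context, the 10s default is the broker query timeout, and it can be raised per query. A minimal sketch, assuming the timeoutMs query option is supported on this version (myTable is a hypothetical table name):
    Copy code
    -- assumption: the broker honors the timeoutMs query option
    SELECT COUNT(*) FROM myTable
    OPTION(timeoutMs=30000)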
  • Sukesh Boggavarapu

    08/08/2022, 7:04 PM
    I am trying to run a join between a realtime table and a dimension table. I am getting an error saying:
  • Sukesh Boggavarapu

    08/08/2022, 7:04 PM
    Copy code
    managed/pinot-pinot-broker-1[broker]: 2022/08/08 19:03:22.866 ERROR [PinotClientRequest] [jersey-server-managed-async-executor-2] Caught exception while processing POST request
    managed/pinot-pinot-broker-1[broker]: java.lang.IllegalArgumentException: Unsupported function: lookup not found
    managed/pinot-pinot-broker-1[broker]: 	at org.apache.pinot.core.query.postaggregation.PostAggregationFunction.<init>(PostAggregationFunction.java:47) ~[startree-pinot-all-0.11.0-ST.23-jar-with-dependencies.jar:0.11.0-ST.23-f32ac7b496c8e8576415b40228127b5c64ead9fc]
    managed/pinot-pinot-broker-1[broker]: 	at org.apache.pinot.core.query.reduce.PostAggregationHandler$PostAggregationValueExtractor.<init>(PostAggregationHandler.java:164) ~[startree-pinot-all-0.11.0-ST.23-jar-with-dependencies.jar:0.11.0-ST.23-f32ac7b496c8e8576415b40228127b5c64ead9fc]
    managed/pinot-pinot-broker-1[broker]: 	at org.apache.pinot.core.query.reduce.PostAggregationHandler.getValueExtractor(PostAggregationHandler.java:136) ~[startree-pinot-all-0.11.0-ST.23-jar-with-dependencies.jar:0.11.0-ST.23-f32ac7b496c8e8576415b40228127b5c64ead9fc]
    managed/pinot-pinot-broker-1[broker]: 	at org.apache.pinot.core.query.reduce.PostAggregationHandler.<init>(PostAggregationHandler.java:77) ~[startree-pinot-all-0.11.0-ST.23-jar-with-dependencies.jar:0.11.0-ST.23-f32ac7b496c8e8576415b40228127b5c64ead9fc]
    managed/pinot-pinot-broker-1[broker]: 	at org.apache.pinot.core.query.reduce.GroupByDataTableReducer.reduceToResultTable(GroupByDataTableReducer.java:133) ~[startree-pinot-all-0.11.0-ST.23-jar-with-
  • Gerrit van Doorn

    08/08/2022, 8:07 PM
    Hi folks, I’m trying to figure out the best option for backfilling some data into an offline table. Standalone is not an option, as it involves a lot of data, so the remaining options are minions or Spark. Do minions generate one segment per input file? The reason I ask is that the offline data is currently stored in files of at most 100K documents, and it would be better to increase that number. The data is also not completely in order, so there would be potential for data loss (I’m assuming). In Spark, how are segments generated? How is the size determined?
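    On the minion route, file ingestion is driven by the SegmentGenerationAndPushTask declared in the table config; a minimal sketch, assuming that task type is enabled on the cluster (the schedule and task cap shown are illustrative):
    Copy code
    "task": {
      "taskTypeConfigsMap": {
        "SegmentGenerationAndPushTask": {
          "schedule": "0 */10 * * * ?",
          "tableMaxNumTasks": "10"
        }
      }
    }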
  • Ankit Sultana

    08/08/2022, 8:26 PM
    Hi, is there a way to convert a string to a long via the ingestion transformation config? I was hoping I'd be able to do something like the following, but I keep running into an error:
    Copy code
    "ingestionConfig": {
          "transformConfigs": [{
            "columnName": "timestamp_start_int",
            "transformFunction": "cast(\"timestamp_start\", \"LONG\")"
          }]
        },
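    If the built-in cast does not work for ingestion transforms on this build, a Groovy transform is a documented alternative; a minimal sketch reusing the column names from the snippet above:
    Copy code
    "ingestionConfig": {
          "transformConfigs": [{
            "columnName": "timestamp_start_int",
            "transformFunction": "Groovy({Long.parseLong(timestamp_start)}, timestamp_start)"
          }]
        },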
  • Sukesh Boggavarapu

    08/08/2022, 11:00 PM
    The star-tree index is not enabled by default, right? Any reason for that?
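    For context, star-tree indexes are opt-in and declared per table under tableIndexConfig; a minimal sketch, with hypothetical column names:
    Copy code
    "tableIndexConfig": {
      "starTreeIndexConfigs": [{
        "dimensionsSplitOrder": ["country", "browser"],
        "skipStarNodeCreationForDimensions": [],
        "functionColumnPairs": ["SUM__impressions"],
        "maxLeafRecords": 10000
      }]
    }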
  • Tanmesh Mishra

    08/09/2022, 2:19 AM
    👋 Hello folks! I am currently working on this issue and need some help generating the Thrift sources 🧵
  • Alice

    08/09/2022, 7:18 AM
    Hey team, I have a partial upsert table and got the following error when building the first segment. Any idea how to solve it?
    Copy code
    2022/08/09 03:00:48.603 ERROR [LLRealtimeSegmentDataManager_table_stage__2__0__20220809T0117Z] [table_stage__2__0__20220809T0117Z] Could not build segment
    java.lang.IllegalArgumentException: integer overflow detected
    	at org.apache.pinot.shaded.com.google.common.base.Preconditions.checkArgument(Preconditions.java:122) ~[pinot-all-0.11.0-SNAPSHOT-jar-with-dependencies.jar:0.11.0-SNAPSHOT-b58810ccf2c7d18693d01688769dcccd3e761d4b]
    	at org.apache.pinot.segment.local.segment.creator.impl.fwd.MultiValueVarByteRawIndexCreator.<init>(MultiValueVarByteRawIndexCreator.java:80) ~[pinot-all-0.11.0-SNAPSHOT-jar-with-dependencies.jar:0.11.0-SNAPSHOT-b58810ccf2c7d18693d01688769dcccd3e761d4b]
    	at org.apache.pinot.segment.local.segment.creator.impl.DefaultIndexCreatorProvider.getRawIndexCreatorForMVColumn(DefaultIndexCreatorProvider.java:251) ~[pinot-all-0.11.0-SNAPSHOT-jar-with-dependencies.jar:0.11.0-SNAPSHOT-b58810ccf2c7d18693d01688769dcccd3e761d4b]
    	at org.apache.pinot.segment.local.segment.creator.impl.DefaultIndexCreatorProvider.newForwardIndexCreator(DefaultIndexCreatorProvider.java:85) ~[pinot-all-0.11.0-SNAPSHOT-jar-with-dependencies.jar:0.11.0-SNAPSHOT-b58810ccf2c7d18693d01688769dcccd3e761d4b]
    	at org.apache.pinot.segment.spi.index.IndexingOverrides$Default.newForwardIndexCreator(IndexingOverrides.java:156) ~[pinot-all-0.11.0-SNAPSHOT-jar-with-dependencies.jar:0.11.0-SNAPSHOT-b58810ccf2c7d18693d01688769dcccd3e761d4b]
    	at org.apache.pinot.segment.local.segment.creator.impl.SegmentColumnarIndexCreator.init(SegmentColumnarIndexCreator.java:215) ~[pinot-all-0.11.0-SNAPSHOT-jar-with-dependencies.jar:0.11.0-SNAPSHOT-b58810ccf2c7d18693d01688769dcccd3e761d4b]
    	at org.apache.pinot.segment.local.segment.creator.impl.SegmentIndexCreationDriverImpl.build(SegmentIndexCreationDriverImpl.java:216) ~[pinot-all-0.11.0-SNAPSHOT-jar-with-dependencies.jar:0.11.0-SNAPSHOT-b58810ccf2c7d18693d01688769dcccd3e761d4b]
    	at org.apache.pinot.segment.local.realtime.converter.RealtimeSegmentConverter.build(RealtimeSegmentConverter.java:123) ~[pinot-all-0.11.0-SNAPSHOT-jar-with-dependencies.jar:0.11.0-SNAPSHOT-b58810ccf2c7d18693d01688769dcccd3e761d4b]
    	at org.apache.pinot.core.data.manager.realtime.LLRealtimeSegmentDataManager.buildSegmentInternal(LLRealtimeSegmentDataManager.java:873) [pinot-all-0.11.0-SNAPSHOT-jar-with-dependencies.jar:0.11.0-SNAPSHOT-b58810ccf2c7d18693d01688769dcccd3e761d4b]
    	at org.apache.pinot.core.data.manager.realtime.LLRealtimeSegmentDataManager.buildSegmentForCommit(LLRealtimeSegmentDataManager.java:800) [pinot-all-0.11.0-SNAPSHOT-jar-with-dependencies.jar:0.11.0-SNAPSHOT-b58810ccf2c7d18693d01688769dcccd3e761d4b]
    	at org.apache.pinot.core.data.manager.realtime.LLRealtimeSegmentDataManager$PartitionConsumer.run(LLRealtimeSegmentDataManager.java:699) [pinot-all-0.11.0-SNAPSHOT-jar-with-dependencies.jar:0.11.0-SNAPSHOT-b58810ccf2c7d18693d01688769dcccd3e761d4b]
    	at java.lang.Thread.run(Thread.java:829) [?:?]
    2022/08/09 03:00:48.607 ERROR [LLRealtimeSegmentDataManager_table_stage__2__0__20220809T0117Z] [table_stage__2__0__20220809T0117Z] Could not build segment for table_stage__2__0__20220809T0117Z
  • James Kelleher

    08/09/2022, 8:44 PM
    Hello! I was following along with the Kubernetes tutorial and set up Pinot by doing
    helm install pinot pinot/pinot
    . This worked great; I was even able to set up my own realtime table consuming from one of our Confluent Kafka queues. Now I want to play around with a beefier deployment, so I cloned the Helm charts into my own repo. I can still
    helm install
    these, but when I try to create the table, I get this error:
    Copy code
    │ 2022/08/09 20:42:54.461 INFO [AddTableCommand] [main] Executing command: AddTable -tableConfigFile /var/pinot/dmp/dmp_realtime_table_config.json -schemaFile /var/pinot/dmp/dmp_realtime_schema.json -co │
    │ 2022/08/09 20:42:55.238 INFO [AddTableCommand] [main] {"code":500,"error":"org.apache.pinot.shaded.org.apache.kafka.common.KafkaException: Failed to construct kafka consumer"}                          │
    │ Stream closed EOF for pinot/dmp-job-zhb2d (pinot-add-dmp-json)
    Why isn’t Pinot able to construct the Kafka consumer anymore? Am I not deploying two near-identical setups?
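    A common cause of “Failed to construct kafka consumer” when moving between deployments is missing consumer properties in streamConfigs (for Confluent, the SASL/SSL settings); a minimal sketch with placeholder topic, broker list, and credentials, assuming the Kafka 2.x plugin:
    Copy code
    "streamConfigs": {
      "streamType": "kafka",
      "stream.kafka.consumer.type": "lowlevel",
      "stream.kafka.topic.name": "my-topic",
      "stream.kafka.broker.list": "my-cluster.confluent.cloud:9092",
      "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
      "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder",
      "security.protocol": "SASL_SSL",
      "sasl.mechanism": "PLAIN",
      "sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username='KEY' password='SECRET';"
    }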
  • suraj sheshadri

    08/10/2022, 4:12 AM
    I am running the spark-submit command below in cluster mode. It is taking too long in the last step, copying files from the staging to the output directory, one file at a time. Any suggestion on how to improve the performance? For 8000 files it takes more than 10 hours just in that last step from staging to output.
    Copy code
    spark-submit --class org.apache.pinot.tools.admin.command.LaunchDataIngestionJobCommand \
      --master yarn --deploy-mode cluster \
      --conf spark.yarn.am.waitTime=1000s \
      --conf spark.sql.parquet.fs.optimized.committer.optimization-enabled=true \
      --conf parquet.enable.summary-metadata=false \
      --conf spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version=2 \
      --conf spark.sql.hive.convertMetastoreParquet.mergeSchema=false \
      --conf spark.sql.shuffle.partitions=2000 \
      --conf "spark.driver.extraJavaOptions=-Dplugins.dir=${PINOT_DISTRIBUTION_DIR}/plugins" \
      --conf "spark.driver.extraClassPath=pinot-batch-ingestion-spark-2.4-${PINOT_VERSION}-SNAPSHOT-shaded.jar:pinot-all-${PINOT_VERSION}-SNAPSHOT-jar-with-dependencies.jar:pinot-s3-${PINOT_VERSION}-SNAPSHOT-shaded.jar:pinot-parquet-${PINOT_VERSION}-SNAPSHOT-shaded.jar" \
      --conf "spark.executor.extraClassPath=pinot-batch-ingestion-spark-2.4-${PINOT_VERSION}-SNAPSHOT-shaded.jar:pinot-all-${PINOT_VERSION}-SNAPSHOT-jar-with-dependencies.jar:pinot-s3-${PINOT_VERSION}-SNAPSHOT-shaded.jar:pinot-parquet-${PINOT_VERSION}-SNAPSHOT-shaded.jar" \
      --jars "${PINOT_DISTRIBUTION_DIR}/lib/pinot-all-${PINOT_VERSION}-SNAPSHOT-jar-with-dependencies.jar,${PINOT_DISTRIBUTION_DIR}/plugins-external/pinot-batch-ingestion/pinot-batch-ingestion-spark-2.4/pinot-batch-ingestion-spark-2.4-${PINOT_VERSION}-SNAPSHOT-shaded.jar,${PINOT_DISTRIBUTION_DIR}/plugins/pinot-file-system/pinot-s3/pinot-s3-${PINOT_VERSION}-SNAPSHOT-shaded.jar,${PINOT_DISTRIBUTION_DIR}/plugins/pinot-input-format/pinot-parquet/pinot-parquet-${PINOT_VERSION}-SNAPSHOT-shaded.jar" \
      --files s3://roku-dea-dev/sand-box/suraj/spark_job_spec_offlinebookingnarrow_perf.yaml \
      local://pinot-all-${PINOT_VERSION}-SNAPSHOT-jar-with-dependencies.jar \
      -jobSpecFile spark_job_spec_offlinebookingnarrow_perf.yaml
  • Mathieu Alexandre

    08/10/2022, 10:58 AM
    Hi 👋, a few days ago I found a realtime table (Pinot 9.0.2 with a Kafka stream config) in an unhealthy ingestion status, with this error:
    Unable to get consuming segments info from all the servers. Reason: null
    I have some segments that stay OFFLINE even if I reset or reload them. The segment metadata seems to be stuck in the IN_PROGRESS status. Does anyone have ideas on how to handle this situation?
  • Ryan Ruane

    08/10/2022, 3:42 PM
    If I were interested in probing each member of the cluster for its state of readiness, what would be the best approach? At the moment I'm thinking of probing ZooKeeper with netcat to see if the srvr command gives back a result:
    Copy code
    echo srvr | nc localhost 2181
    Zookeeper version: 3.5.8-f439ca583e70862c3068a1f2a7d4d068eec33315, built on 05/04/2020 15:53 GMT
    Latency min/avg/max: 0/0/20
    Received: 5590
    Sent: 5751
    Connections: 7
    Outstanding: 0
    Zxid: 0x176
    Mode: standalone
    Node count: 80
    And from there, using the following command to ensure the correct numbers of brokers, controllers, and servers exist:
    Copy code
    docker exec -it <container> bin/pinot-admin.sh ShowClusterInfo -clusterName PinotCluster -zkAddress localhost:2181
    _brokerInfoList:
    - _name: Broker_172.28.0.5_8099
      _state: ONLINE
      _tags: [DefaultTenant_BROKER]
    _clusterName: PinotCluster
    _controllerInfo: {_leaderName: 172.28.0.3_9000}
    _serverInfoList:
    - _name: Server_172.28.0.4_8098
      _state: ONLINE
      _tags: [DefaultTenant_OFFLINE, DefaultTenant_REALTIME]
    _tableInfoList:
    - _segmentInfoList:
      - _name: cases_OFFLINE_20150105_20150106_0
        _segmentStateMap: {Server_172.28.0.4_8098: ONLINE}
      _tableName: cases_OFFLINE
      _tag: cases_OFFLINE
    Still, I don't know whether this command will list instances before they are fully ready. I was hoping there was a more direct way to probe each instance individually. Any suggestions would be greatly welcomed.
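    Each Pinot component also exposes its own HTTP health endpoint, which may be a more direct per-instance probe than going through ZooKeeper; a minimal sketch, assuming the default ports:
    Copy code
    # assumption: default ports (controller 9000, broker 8099, server admin 8097)
    curl http://localhost:9000/health   # controller
    curl http://localhost:8099/health   # broker
    curl http://localhost:8097/health   # server admin endpoint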
  • Alice

    08/11/2022, 2:59 AM
    Hey team, I added a new column and then clicked "reload all segments", but I still got the following error when running queries. Any idea how to fix it?
  • Tony Zhang

    08/11/2022, 5:28 AM
    @Kishore G Could I get a suggestion for the ZooKeeper cluster deployment? We will have 500K segments across 4 tables. Thanks.
  • Sevvy Yusuf

    08/11/2022, 2:49 PM
    Hi all 👋🏼 I'm seeing weird behaviour in our cluster. We recently changed one of our tables from REALTIME-only to OFFLINE-only. When I try to reload the segments now in Swagger, calling the reload endpoint with the raw table name results in
    Copy code
    {
      "code": 500,
      "error": "Specified EXTERNALVIEW the_specified_table_REALTIME is not found!"
    }
    Can someone advise on why this happens? This is only in our staging environment, and there is currently no data in the table. How do we fix the state? Thanks in advance.
  • Luis Fernandez

    08/11/2022, 3:48 PM
    Has anyone ever encountered random 503s in your Pinot setup? We get them every once in a while, and I'm just trying to understand and debug what may be happening, since the dashboards are not showing me anything out of the ordinary. We sometimes get these 503s when we hit the broker. All of this runs on Kubernetes, so we have a load balancer on top of it and 2 brokers running. Any ideas?
  • Scott deRegt

    08/11/2022, 4:16 PM
    ❓Re: Spark batch ingestion. Is there a recommended pattern for monitoring the health of a Spark batch ingestion job? I'm seeing a semi-regular
    org.apache.spark.SparkException
    in our ingestion job which leads to staging segments being purged and no output in
    outputDirURI
    . I see this error in the logs,
    ERROR [LaunchDataIngestionJobCommand] [Driver] Got exception to kick off standalone data ingestion job
    , but the Spark application exits as success,
    INFO [ApplicationMaster] [Driver] Final app status: SUCCEEDED, exitCode: 0
  • Alice

    08/11/2022, 4:19 PM
    Hi team, I added a new column, added it to the star-tree index, and then reloaded all segments. The table status is good and the consuming status is good. But when running queries in the query console, the following warning shows up. Should I just ignore this warning, or does it actually affect data ingestion?
    Copy code
    There are 45 invalid segment/s. This usually means that they were created with an older schema. Please reload the table in order to refresh these segments to the new schema.
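    The reload that the warning asks for can also be triggered via the controller REST API rather than the UI; a minimal sketch, assuming a controller at localhost:9000 (myTable is a placeholder for the actual table name):
    Copy code
    curl -X POST "http://localhost:9000/segments/myTable/reload"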
  • Timothy James

    08/11/2022, 11:05 PM
    Strange problem: "successful" segment uploads are no longer being reflected in Pinot query results:
    select count(*)
    is not changing, even though in our own logs we see
    Copy code
    Successfully uploaded segment: simple_0_443 of table: simple_OFFLINE
    and logging that shows >0 record-count values for each segment. It was working before (that is, before I turned on Pinot controller auth), and it is now stuck at
    757051
    for the count and
    7653
    segments. How is this even possible? I've tried reloading all segments for the table. Help?
  • Hassan Ait Brik

    08/12/2022, 9:04 AM
    Copy code
    2022/08/12 08:19:44.332 WARN [ZkBaseDataAccessor] [ZkClient-EventThread-62-pinot-zookeeper:2181] Fail to read record for paths: {/pinot/INSTANCES/Server_pinot-server-0.pinot-server-headless.data.svc.cluster.local_8098/MESSAGES/6549d816-93d8-42a0-8476-b636e2e1566b=-101, /pinot/INSTANCES/Server_pinot-server-0.pinot-server-headless.data.svc.cluster.local_8098/MESSAGES/37d924db-e540-4f87-b300-9e17bfe7d0db=-101}
  • Hassan Ait Brik

    08/12/2022, 9:04 AM
    Hi guys. I have some issues querying Pinot ...
    Copy code
    [
      {
        "message": "java.net.UnknownHostException: pinot-server-0.pinot-server-headless.data.svc.cluster.local\n\tat java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:797)\n\tat java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1509)\n\tat java.base/java.net.InetAddress.getAllByName(InetAddress.java:1368)\n\tat java.base/java.net.InetAddress.getAllByName(InetAddress.java:1302)",
        "errorCode": 425
      },
      {
        "message": "2 servers [pinot-server-0_O, pinot-server-2_O] not responded",
        "errorCode": 427
      }
    ]
    For some reason pinot-server-0 is not responding (it seems to have been in a "bad phase" since 6pm yesterday). It seems to have a recurring problem, as if it were restarting again and again.
  • Hassan Ait Brik

    08/12/2022, 2:20 PM
    RESOLVED
  • Bruno Mendes

    08/12/2022, 10:20 PM
    Hi folks, I created a hybrid table. When I first ingest batch data and query it, it works fine. Then, if I send Kafka messages to the realtime table, Pinot query results only show the realtime rows and ignore the batch data. It is supposed to show both realtime and batch data, right? I looked at the logs for the controller, server, and brokers but found no clue. What may I be missing?
  • Tony Zhang

    08/12/2022, 11:20 PM
    Could I know the max number of segments one table can support? As far as I know, there is a znode size limitation.
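    For background, the znode size limit is ZooKeeper's jute.maxbuffer, roughly 1MB by default, which caps how large a table's ideal state can grow; a minimal sketch of raising it, which (as ZooKeeper requires) must be applied to both the ZK servers and the client JVMs:
    Copy code
    # assumption: raising the limit to 4MB on every ZooKeeper server and client JVM
    -Djute.maxbuffer=4194304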
  • Prashant Pandey

    08/13/2022, 4:20 AM
    Hi team, I am trying to optimise the realtime ingestion of a table with the following config:
    Copy code
    Input Partitions: 96
    Average Ingestion Rate (30 days): 154k
    Max Ingestion Rate (30 days): 390k (spike)
    Avg. ingestion rate / partitions ~2000 (200,000 / 96 with some buffer)
    Retention Hours (on realtime servers): 1h, segments are relocated post this to OFFLINE servers.
    Max Usable Host Memory: 100G (128G total, 28G for query processing)
    Most Queried Time Interval: 1h (hence retention = 1h)
    Command:
    Copy code
    ./pinot-admin.sh RealtimeProvisioningHelper -tableConfigFile /var/pinot/tableConfig.json -sampleCompletedSegmentDir /var/pinot/mySegment/ -numPartitions 96 -numHosts 4,6,8,10,12 -numHours 1 -ingestionRate 2000 -maxUsableHostMemory 100G -retentionHours 1
    Results:
    Copy code
    Memory used per host (Active/Mapped)
    numHosts --> 4               |6               |8               |10              |12              |
    numHours
     1 --------> 46.53G/46.53G   |31.02G/31.02G   |23.26G/23.26G   |19.39G/19.39G   |15.51G/15.51G   |

    Optimal segment size
    numHosts --> 4               |6               |8               |10              |12              |
    numHours
     1 --------> 1.68G           |1.68G           |1.68G           |1.68G           |1.68G           |

    Consuming memory
    numHosts --> 4               |6               |8               |10              |12              |
    numHours
     1 --------> 46.53G          |31.02G          |23.26G          |19.39G          |15.51G          |

    Number of segments queried per host
    numHosts --> 4               |6               |8               |10              |12              |
    numHours
     1 --------> 24              |16              |12              |10              |8               |
    Wanted to understand: why is memory being mapped when total consuming memory + mapped memory < total memory available for ingestion in all four cases?
  • Alice

    08/15/2022, 2:10 AM
    Hi team, I see the suggested segment size is between 100MB and 500MB. But in my case, based on the daily data size, a 500MB segment is generated every 15 minutes per partition. I see no way to reduce the number of segments other than increasing the segment size. Could you please recommend something?
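    For reference, realtime segment size is governed by the flush thresholds in streamConfigs; a minimal sketch of size-based flushing (setting rows to 0 enables the size threshold), with illustrative values:
    Copy code
    "realtime.segment.flush.threshold.rows": "0",
    "realtime.segment.flush.threshold.time": "6h",
    "realtime.segment.flush.threshold.segment.size": "500M"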