Priyank Bagrecha
10/28/2022, 7:52 AM

Xiang Fu
schedulerWaitMs=501,reqDeserMs=1,totalExecMs=72,
how many qps

Priyank Bagrecha
10/28/2022, 8:55 AM

Priyank Bagrecha
10/28/2022, 8:56 AM

Priyank Bagrecha
10/28/2022, 9:04 AM

Xiang Fu
schedulerWaitMs will go up

Xiang Fu
Priyank Bagrecha
10/28/2022, 9:04 AM

Xiang Fu
Xiang Fu
Priyank Bagrecha
10/28/2022, 9:05 AM

Priyank Bagrecha
10/28/2022, 9:05 AM

Lee Wei Hern Jason
10/28/2022, 10:04 AM
-javaagent:/opt/pinot/etc/jmx_prometheus_javaagent/jmx_prometheus_javaagent-0.12.0.jar=8008:/opt/pinot/etc/jmx_prometheus_javaagent/configs/pinot.yml -Xms2G -Xmx2G -Dlog4j2.configurationFile=conf/log4j2.xml -Dpinot.admin.system.exit=true -Dplugins.dir=/opt/pinot/plugins
However, when I view the JMX metrics, I can't see any of the metrics stated here.
Has anyone encountered this issue before?
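
For debugging, it can help to check whether the agent exports anything at all: point it at a catch-all rules file and curl the metrics port. A minimal sketch, assuming the agent port 8008 from the flags above and a hypothetical /tmp/debug.yml:

# /tmp/debug.yml: catch-all jmx_exporter config that exposes every MBean as-is
rules:
- pattern: ".*"

curl -s http://localhost:8008/metrics | grep -i pinot

If nothing shows up even with the catch-all rule, the agent is not attached to the right JVM; if metrics do appear, the rules in pinot.yml are filtering them out.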
Priyank Bagrecha
10/28/2022, 5:47 PM

Priyank Bagrecha
10/28/2022, 5:57 PM
sum by (table) (rate(pinot_broker_queries_Count[10m]))

Priyank Bagrecha
10/28/2022, 5:57 PM
avg by (table) (pinot_broker_queryExecution_50thPercentile)
and p75, p95, p99 and p999 as well
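
For reference, the other percentiles follow the same pattern, assuming the exported metric names keep the same percentile suffix convention; e.g. for p99:

avg by (table) (pinot_broker_queryExecution_99thPercentile)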
Xiang Fu

Priyank Bagrecha
10/28/2022, 6:46 PM

Nickel Fang
10/29/2022, 11:00 AM
{
  "code": 500,
  "error": "org.apache.pinot.spi.stream.TransientConsumerException: org.apache.pinot.shaded.org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata"
}
the streamConfig is as below
"streamConfigs": {
"streamType": "kafka",
"stream.kafka.consumer.type": "lowlevel",
"stream.kafka.topic.name": "test",
"stream.kafka.decoder.class.name": "org.apache.pinot.plugin.inputformat.json.JSONMessageDecoder",
"stream.kafka.decoder.prop.projectId": "1",
"stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
"stream.kafka.broker.list": "localhost:9092",
"stream.kafka.consumer.prop.auto.offset.reset": "smallest",
"realtime.segment.flush.threshold.time": "2h",
"realtime.segment.flush.threshold.rows": "0",
"realtime.segment.flush.threshold.segment.size": "300M",
"realtime.segment.flush.autotune.initialRows": "10000"
}
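
For context, this timeout usually means the consumer cannot fetch metadata from the broker in stream.kafka.broker.list. One way to rule out basic connectivity is to describe the topic with the stock Kafka CLI from the same host, reusing the broker and topic values from the config above:

bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic test

If this also times out, the problem is on the network or broker side rather than in the Pinot streamConfigs.
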
shivam
10/31/2022, 11:06 AM

Lee Wei Hern Jason
10/31/2022, 11:20 AM
JAVA_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9002 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false"
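
With those flags, a JMX client can attach without authentication; for example, assuming the process is reachable locally on port 9002:

jconsole localhost:9002
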
Lee Wei Hern Jason
10/31/2022, 11:21 AM

harnoor
10/31/2022, 1:35 PM
I am running explain plan for on my queries, but I am getting ACQUIRE_RELEASE_COLUMNS_SEGMENT for all of them. I am expecting the filter index to be picked up in the output, as per the docs: https://docs.pinot.apache.org/users/user-guide-query/explain-plan
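
For reference, the syntax in question, with a made-up table and column:

EXPLAIN PLAN FOR SELECT count(*) FROM myTable WHERE status = 'ACTIVE'

Per the linked docs, the index-specific filter operators only show up in the plan when the planner actually selects an index for the predicate.
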
Stuart Millholland
10/31/2022, 7:46 PM

Priyank Bagrecha
10/31/2022, 11:48 PM

Mamlesh
11/01/2022, 3:17 AM

Alice
11/01/2022, 9:04 AM
"realtime.segment.flush.threshold.time": "6h",
"realtime.segment.flush.threshold.rows": "0",
"realtime.segment.flush.threshold.segment.size": "20M",
Sumit Khaitan
11/01/2022, 12:23 PM
Can Pinot directly read and ingest those minutely files from Azure Blob Storage, or does there have to be a Spark/ETL pipeline that ingests the data into Pinot?
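
For context, Pinot batch ingestion is driven by an ingestion job spec, so a Spark/ETL pipeline is not strictly required if a filesystem plugin can read the source directly. A rough sketch of the ADLS Gen2 case; the URIs and paths below are placeholders, and the scheme and plugin class depend on the Pinot version:

executionFrameworkSpec:
  name: 'standalone'
  segmentGenerationJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentGenerationJobRunner'
jobType: SegmentCreationAndTarPush
inputDirURI: 'adl2://my-container/minutely/'
outputDirURI: 'adl2://my-container/segments/'
pinotFSSpecs:
  - scheme: adl2
    className: org.apache.pinot.plugin.filesystem.ADLSGen2PinotFS
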
Abhishek Dubey
11/01/2022, 2:18 PM
{
  "code": 400,
  "error": "Backward incompatible schema <name>. Only allow adding new columns"
}
What is the way to make the schema backward compatible and allow updates to the schema?
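
For reference, the error means the controller only accepts additive schema updates. A change that appends a new column, like the hypothetical newCol below, is the kind of update that passes the check, while renaming, removing, or retyping an existing column triggers the 400 above:

{
  "dimensionFieldSpecs": [
    { "name": "existingCol", "dataType": "STRING" },
    { "name": "newCol", "dataType": "STRING" }
  ]
}
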
11/01/2022, 9:25 PMCurrently, we have two different mechanisms to prune segments on the broker side to minimize the number of segment for processing before scatter-and-gather.
But I only see partitioning
. What is the second mechanism?