Laxman Ch
08/17/2020, 4:25 PM
Elon
08/17/2020, 7:38 PM
pinot.controller.storage.factory.class.gs=org.apache.pinot.plugin.filesystem.GcsPinotFS
pinot.controller.storage.factory.gs.projectId=<YOUR PROJECT ID>
pinot.controller.storage.factory.gs.gcpKey=<GCS KEY>
pinot.controller.segment.fetcher.protocols=file,http,gs
pinot.controller.segment.fetcher.gs.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher
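For context, a sketch of the matching server-side GCS settings (not quoted in the thread; the key names are assumed to mirror the controller ones above, so verify them against the Pinot docs for your version):
# Assumed server-side counterpart so servers can fetch segments from GCS (sketch)
pinot.server.storage.factory.class.gs=org.apache.pinot.plugin.filesystem.GcsPinotFS
pinot.server.storage.factory.gs.projectId=<YOUR PROJECT ID>
pinot.server.storage.factory.gs.gcpKey=<GCS KEY>
pinot.server.segment.fetcher.protocols=file,http,gs
pinot.server.segment.fetcher.gs.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher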
Elon
08/17/2020, 9:45 PM
Elon
08/17/2020, 11:04 PM
Kishore G
Suvodeep Pyne
08/19/2020, 1:40 AM
Seeing an error involving GroupByResultSet when reading alerts from TE in a docker container in k8s. Any idea what could be the root cause? This error doesn't happen when running TE on bare metal.
Laxman Ch
08/19/2020, 6:57 PM
Elon
08/19/2020, 10:38 PM
Yash Agarwal
08/20/2020, 9:16 AM
select channel,
sales_date,
sum(sales) as sum_sales,
sum(units) as sum_units
from pinot.default.sales
group by channel, sales_date
Currently Presto is trying to fetch raw values for all the columns.
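As a sanity check (a sketch, not something posted in the thread), the same aggregate can be issued directly against the Pinot broker to see what a fully pushed-down query would return; table and column names come from the Presto query above, and the LIMIT is added:
-- Pinot-side equivalent of the aggregate above (sketch)
SELECT channel, sales_date, SUM(sales) AS sum_sales, SUM(units) AS sum_units
FROM sales
GROUP BY channel, sales_date
LIMIT 10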
Yash Agarwal
08/20/2020, 9:17 AM
Mayank
Kishore G
Kishore G
Elon
08/21/2020, 4:54 PM
Elon
08/21/2020, 7:34 PM
Pradeep
08/24/2020, 5:50 PM
Yash Agarwal
08/25/2020, 6:11 PM
select channel,
sales_date,
sum(sales) as sum_sales,
sum(units) as sum_units
from pinot.default.guestSlsLitm
where channel = 'WEB'
group by channel, sales_date
union all
select channel,
sales_date,
sum(sales) as sum_sales,
sum(units) as sum_units
from pinot.default.guestSlsLitm
where channel = 'STORES'
group by channel, sales_date;
Each of the individual queries works separately, but the union does not.
Even the explain plan fails with:
Query 20200825_180515_00000_mshj8 failed: Expected to find the pinot table handle for the scan node
com.facebook.presto.spi.PrestoException: Expected to find the pinot table handle for the scan node
at com.facebook.presto.pinot.PinotPlanOptimizer.lambda$optimize$0(PinotPlanOptimizer.java:86)
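Since both branches group by channel and filter on disjoint channel values, one possible workaround (a sketch, not suggested in the thread) is to fold the UNION ALL into a single group-by so the connector's plan optimizer only sees one scan:
-- Equivalent single group-by form of the UNION ALL above (sketch)
select channel,
       sales_date,
       sum(sales) as sum_sales,
       sum(units) as sum_units
from pinot.default.guestSlsLitm
where channel in ('WEB', 'STORES')
group by channel, sales_date;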
Mustafa
08/26/2020, 8:27 AM
SELECT "timestamp", variant_id FROM Sales WHERE operator_id = 1 AND campaign_id = 1 GROUP BY Hour("timestamp"), variant_id
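The query above selects "timestamp" itself while grouping by Hour("timestamp"); group-by queries generally require every selected column to be either aggregated or part of the GROUP BY. A sketch of such a form (the COUNT aggregate and the aliases are assumptions, not from the thread):
-- Select the grouped expression and an aggregate instead of the raw column (sketch)
SELECT Hour("timestamp") AS hour_bucket, variant_id, COUNT(*) AS cnt
FROM Sales
WHERE operator_id = 1 AND campaign_id = 1
GROUP BY Hour("timestamp"), variant_id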
Yash Agarwal
08/26/2020, 10:47 AM
local.directory.sequence.id=true in SparkSegmentGenerationJobRunner?
Elon
08/26/2020, 5:35 PM
Elon
08/27/2020, 12:39 AM
Yash Agarwal
08/28/2020, 2:25 PM
We are on build 0.5.0-SNAPSHOT-331b874cd-20200821, but with it I am not able to access the Swagger UI. All requests for swaggerui-dist/lib/* and swaggerui-dist/css/* are failing with 404.
Ankit
08/28/2020, 2:49 PM
{
  "tableName": "ordereventmap_OFFLINE",
  "reportedSizeInBytes": -1,
  "estimatedSizeInBytes": -1,
  "offlineSegments": {
    "reportedSizeInBytes": -1,
    "estimatedSizeInBytes": -1,
    "missingSegments": 1,
    "segments": {
      "ordereventmapbatch": {
        "reportedSizeInBytes": -1,
        "estimatedSizeInBytes": -1,
        "serverInfo": {
          "Server_pinot-server-59619160-2-966153609.stg.omsanalyticsplatform.cp.dfwstg2.prod.walmart.com_8098": {
            "segmentName": "ordereventmapbatch",
            "diskSizeInBytes": -1
          }
        }
      }
    }
  },
  "realtimeSegments": null
}
I have been trying to upload a batch segment to the table, but the segment is going missing and no data is available. Can anyone tell me the reasons why a segment goes missing on upload? I didn't find any errors in the controller, server, or broker.
Yash Agarwal
08/28/2020, 3:04 PM
Yash Agarwal
09/01/2020, 11:33 AM
Exception processing requestId 137
java.lang.RuntimeException: Caught exception while running CombinePlanNode.
at org.apache.pinot.core.plan.CombinePlanNode.run(CombinePlanNode.java:149) ~[pinot-all-0.5.0-SNAPSHOT-jar-with-dependencies.jar:0.5.0-SNAPSHOT-701ffcbd5be5f39e91cea9a0297c4e8b0a7d9343]
at org.apache.pinot.core.plan.InstanceResponsePlanNode.run(InstanceResponsePlanNode.java:33) ~[pinot-all-0.5.0-SNAPSHOT-jar-with-dependencies.jar:0.5.0-SNAPSHOT-701ffcbd5be5f39e91cea9a0297c4e8b0a7d9343]
at org.apache.pinot.core.plan.GlobalPlanImplV0.execute(GlobalPlanImplV0.java:45) ~[pinot-all-0.5.0-SNAPSHOT-jar-with-dependencies.jar:0.5.0-SNAPSHOT-701ffcbd5be5f39e91cea9a0297c4e8b0a7d9343]
at org.apache.pinot.core.query.executor.ServerQueryExecutorV1Impl.processQuery(ServerQueryExecutorV1Impl.java:221) ~[pinot-all-0.5.0-SNAPSHOT-jar-with-dependencies.jar:0.5.0-SNAPSHOT-701ffcbd5be5f39e91cea9a0297c4e8b0a7d9343]
at org.apache.pinot.core.query.scheduler.QueryScheduler.processQueryAndSerialize(QueryScheduler.java:155) ~[pinot-all-0.5.0-SNAPSHOT-jar-with-dependencies.jar:0.5.0-SNAPSHOT-701ffcbd5be5f39e91cea9a0297c4e8b0a7d9343]
at org.apache.pinot.core.query.scheduler.QueryScheduler.lambda$createQueryFutureTask$0(QueryScheduler.java:139) ~[pinot-all-0.5.0-SNAPSHOT-jar-with-dependencies.jar:0.5.0-SNAPSHOT-701ffcbd5be5f39e91cea9a0297c4e8b0a7d9343]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_265]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_265]
at shaded.com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:111) [pinot-all-0.5.0-SNAPSHOT-jar-with-dependencies.jar:0.5.0-SNAPSHOT-701ffcbd5be5f39e91cea9a0297c4e8b0a7d9343]
at shaded.com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:58) [pinot-all-0.5.0-SNAPSHOT-jar-with-dependencies.jar:0.5.0-SNAPSHOT-701ffcbd5be5f39e91cea9a0297c4e8b0a7d9343]
at shaded.com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:75) [pinot-all-0.5.0-SNAPSHOT-jar-with-dependencies.jar:0.5.0-SNAPSHOT-701ffcbd5be5f39e91cea9a0297c4e8b0a7d9343]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_265]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_265]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_265]
Caused by: java.util.concurrent.TimeoutException
at java.util.concurrent.FutureTask.get(FutureTask.java:205) ~[?:1.8.0_265]
at org.apache.pinot.core.plan.CombinePlanNode.run(CombinePlanNode.java:139) ~[pinot-all-0.5.0-SNAPSHOT-jar-with-dependencies.jar:0.5.0-SNAPSHOT-701ffcbd5be5f39e91cea9a0297c4e8b0a7d9343]
... 13 more
Processed requestId=137,table=guestslslitm3years_OFFLINE,segments(queried/processed/matched/consuming)=1058/-1/-1/-1,schedulerWaitMs=0,reqDeserMs=4,totalExecMs=10659,resSerMs=0,totalTimeMs=10663,minConsumingFreshnessMs=-1,broker=Broker_10.59.100.47_8099,numDocsScanned=-1,scanInFilter=-1,scanPostFilter=-1,sched=fcfs
Slow query: request handler processing time: 10663, send response latency: 58, total time to handle request: 10721
Is there a reason why this is happening? Is there a way we can override the timeout of 10s in CombinePlanNode?
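A sketch of the knobs usually involved in that 10s budget (key names from the Pinot configuration reference; treat them as assumptions and verify for this build). A per-query timeoutMs query option may also be available depending on the client and version:
# Assumed config sketch for raising query timeouts (verify exact keys for your Pinot version)
# Broker-side overall query timeout, defaults to ~10s and is propagated to the servers
pinot.broker.timeoutMs=30000
# Server-side query executor timeout
pinot.server.query.executor.timeout=30000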
Yash Agarwal
09/01/2020, 11:37 AM
Neha Pawar
"segmentsConfig":{
"timeColumnName":"timestampInEpoch",
Dileep Reddy
09/02/2020, 9:22 AM
Dan Hill
09/03/2020, 1:57 AM
1. Should the path be s3://minio:9000/mybucket/objectpath or s3://mybucket.minio:9000/objectpath?
2. What's the best way to add the pinot-s3 plugin with the apachepinot/pinot docker image? Do I need to create my own wrapping image? I see a few environment variables in pinot-admin.sh that I can set, e.g. JAVA_OPTS, PLUGINS_INCLUDE, PLUGINS_CLASSPATH, PLUGINS_DIR.
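On the first question, a sketch rather than an answer from the thread: with S3PinotFS the MinIO host typically goes into an endpoint override instead of the URI, so the path stays s3://mybucket/objectpath; the exact key names (especially endpoint) are assumptions to check against the pinot-s3 plugin docs:
# Assumed sketch for a MinIO-backed deep store (controller shown; server config mirrors it)
pinot.controller.storage.factory.class.s3=org.apache.pinot.plugin.filesystem.S3PinotFS
pinot.controller.storage.factory.s3.region=us-east-1
pinot.controller.storage.factory.s3.endpoint=http://minio:9000
pinot.controller.storage.factory.s3.accessKey=<ACCESS KEY>
pinot.controller.storage.factory.s3.secretKey=<SECRET KEY>
pinot.controller.segment.fetcher.protocols=file,http,s3
pinot.controller.segment.fetcher.s3.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher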
Shen Wan
09/03/2020, 6:25 PM
I have a column timestamp_ns, which is the segmentsConfig.timeColumnName and has a range index. When I use order by timestamp_ns in an SQL query, it times out. Why? I expected the range index to help order by.
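An aside, not from the thread: a range index speeds up range predicates in the WHERE clause, while ORDER BY still sorts the matching rows at query time, so a query shape the index can actually help with looks like the sketch below (the table name and filter literal are made up):
-- The range predicate uses the range index; ORDER BY ... LIMIT then sorts a much smaller set (sketch)
SELECT *
FROM myTable
WHERE timestamp_ns > 1599100000000000000
ORDER BY timestamp_ns DESC
LIMIT 10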