Mahesh babu
08/24/2022, 4:55 AM
Piyush Chauhan
08/24/2022, 5:03 AM
id (primary key in schema) and updated exception_count and updated_at, but it created a new record (1st row).
When querying from the Query Console I get the attached results for the given id.
But when I query using the Java client I get only 1 record, i.e. the older record whose updated_at value is 1660654106.
*All of the columns in the screenshot are actual columns. They are not derived.
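If this is an upsert table, one way to compare the raw stored rows with the upserted view is the skipUpsert query option; a minimal sketch, assuming a hypothetical upsert table named exceptions (table name and id value are placeholders, and the option syntax may vary by Pinot version):
-- Upserted view: expected to return only the latest row per primary key
SELECT id, exception_count, updated_at FROM exceptions WHERE id = 'some-id'
-- Raw rows: returns every ingested version of the key
SELECT id, exception_count, updated_at FROM exceptions WHERE id = 'some-id' OPTION(skipUpsert=true)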
Luis Fernandez
08/24/2022, 8:50 PM
pinot-server-1
At that point pinot-server-1 was getting scaled up and pinot-server-0 was working without issue and serving traffic. After a bit, when pinot-server-1 was coming back up, we started getting the following error:
[
{
"message": "java.net.UnknownHostException: pinot-server-1.pinot-server-headless.pinot.svc.cluster.local\n\tat java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:797)\n\tat java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1509)\n\tat java.base/java.net.InetAddress.getAllByName(InetAddress.java:1368)\n\tat java.base/java.net.InetAddress.getAllByName(InetAddress.java:1302)",
"errorCode": 425
},
{
"message": "2 servers [pinot-server-0_R, pinot-server-1_R] not responded",
"errorCode": 427
}
]
and also
2022-08-24 16:47:32
java.net.UnknownHostException: pinot-server-1.pinot-server-headless.pinot.svc.cluster.local
2022-08-24 16:47:32
Caught exception while sending request 183945048 to server: pinot-server-1_R, marking query failed
What I'm trying to understand is how queries were still getting routed to pinot-server-1 if it was down. After a bit this problem resolved itself without us doing anything, but we did get some downtime.
Slackbot
08/25/2022, 7:16 AM
Prashant Pandey
08/25/2022, 7:54 AM
controller.helix.cluster.name=myCluster
controller.port=9000
controller.data.dir=/tmp/controller
controller.zk.str=myZKStr
pinot.set.instance.id.to.hostname=true
And it doesn't seem to be trying to connect with S3 anymore. Wanted to confirm if this is the right way to do this? Thanks!
Jinny Cho
08/25/2022, 7:15 PM
SELECT SUM(a)
FROM my_table
WHERE b = c
GROUP BY d;
If not, is there any plan to enable pagination for GROUP BY queries? It would be really nice to have.
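Since the question implies group-by pagination is not available yet, the closest workaround is usually to order the groups and raise the LIMIT, then page on the client side; a minimal sketch reusing the columns from the query above:
SELECT d, SUM(a) AS total
FROM my_table
WHERE b = c
GROUP BY d
ORDER BY SUM(a) DESC
LIMIT 1000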
suraj sheshadri
08/26/2022, 1:18 AM
Deena Dhayalan
08/23/2022, 1:20 PM
org.apache.pinot.*.function.*
But in the doc it is org.apache.pinot.scalar.XXXX.
Can anyone say whether there is a way to auto-register the package when the package is 'org.apache.pinot.scalar.XXXX'?
Or is org.apache.pinot.*.function.* the actual package pattern?
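For reference, custom scalar functions are normally picked up through the @ScalarFunction annotation scan rather than by naming convention alone; a minimal sketch, assuming the scan still covers org.apache.pinot packages whose path contains .function. as the pattern above suggests (the package, class and method names here are illustrative):
package org.apache.pinot.custom.function; // illustrative package matching org.apache.pinot.*.function.*

import org.apache.pinot.spi.annotations.ScalarFunction;

public class MyScalarFunctions {
  private MyScalarFunctions() {
  }

  // Usable in queries as trimOrNull(col) once the jar is on the classpath and picked up at startup
  @ScalarFunction
  public static String trimOrNull(String input) {
    return input == null ? null : input.trim();
  }
}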
Lars-Kristian Svenøy
08/26/2022, 9:12 AM
구상모Sangmo Koo
08/29/2022, 6:49 AM
Table Config:
"ingestionConfig": {
"filterConfig": {
"filterFunction": "Groovy({ ts == null || ts < 1000000000 || exp == null}, exp, ts)"
}
}
Table Schema :
{
"name": "exp_utc",
"dataType": "LONG",
"transformFunction": "exp*1000",
"format": "1:MILLISECONDS:EPOCH",
"granularity": "1:MILLISECONDS"
},
{
"name": "exp_asia_seoul_datetime",
"dataType": "STRING",
"transformFunction": "toDateTime((exp*1000)+(timezoneHour('Asia/Seoul')*3600000), 'yyyy-MM-dd HH:mm:ss')",
"format": "1:SECONDS:EPOCH",
"granularity": "1:SECONDS"
}
Normal collection data :
{"id":"E8DB","tp":1,"fw":"1.5.0","vc":22,"ts":1661175231,"ri":-59,"ad":3,"exp":1678450474,"ar":{},"mg":{"1":0.2,"2":0.2,"3":0.2}}
filter data :
{"id":"240A","tp":1,"fw":{"1":{"1":-25,"2":-10,"3":-4}},"ar":{"0":1},"mg":{"1":0.2,"2":0.2,"3":0.2}}
The following error occurs because the 'exp' filter does not work.
Caused by: java.lang.RuntimeException: Caught exception while executing function: plus(times(exp,'1000'),times(timezoneHour('Asia/Seoul'),'3600000'))
at org.apache.pinot.segment.local.function.InbuiltFunctionEvaluator$FunctionExecutionNode.execute(InbuiltFunctionEvaluator.java:124) ~[pinot-all-0.9.3-jar-with-dependencies.jar:0.9.3-e23f213cf0d16b1e9e086174d734a4db868542cb]
at org.apache.pinot.segment.local.function.InbuiltFunctionEvaluator$FunctionExecutionNode.execute(InbuiltFunctionEvaluator.java:119) ~[pinot-all-0.9.3-jar-with-dependencies.jar:0.9.3-e23f213cf0d16b1e9e086174d734a4db868542cb]
... 15 more
Caused by: java.lang.RuntimeException: Caught exception while executing function: times(exp,'1000')
at org.apache.pinot.segment.local.function.InbuiltFunctionEvaluator$FunctionExecutionNode.execute(InbuiltFunctionEvaluator.java:124) ~[pinot-all-0.9.3-jar-with-dependencies.jar:0.9.3-e23f213cf0d16b1e9e086174d734a4db868542cb]
at org.apache.pinot.segment.local.function.InbuiltFunctionEvaluator$FunctionExecutionNode.execute(InbuiltFunctionEvaluator.java:119) ~[pinot-all-0.9.3-jar-with-dependencies.jar:0.9.3-e23f213cf0d16b1e9e086174d734a4db868542cb]
at org.apache.pinot.segment.local.function.InbuiltFunctionEvaluator$FunctionExecutionNode.execute(InbuiltFunctionEvaluator.java:119) ~[pinot-all-0.9.3-jar-with-dependencies.jar:0.9.3-e23f213cf0d16b1e9e086174d734a4db868542cb]
... 15 more
Caused by: java.lang.IllegalStateException: Caught exception while invoking method: public static double org.apache.pinot.common.function.scalar.ArithmeticFunctions.times(double,double) with arguments: [null, 1000.0]
Any problem with this filterFunction?
"filterFunction": "Groovy({ ts == null || ts < 1000000000 || exp == null}, exp, ts)"
Abdelhakim Bendjabeur
08/29/2022, 9:41 AM
JAVA_OPTS:-XX:ActiveProcessorCount=2 -Xms256M -Xmx1G -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -Xlog:gc*:file=/opt/pinot/gc-pinot-controller.log -Dplugins.dir=/opt/pinot/plugins -Dplugins.include=pinot-gcs -Dlog4j2.configurationFile=/opt/pinot/conf/log4j2.xml -Dplugins.dir=/opt/pinot/plugins
StartController -configFileName /var/pinot/controller/config/pinot-controller.conf
$ cat /var/pinot/controller/config/pinot-controller.conf
controller.helix.cluster.name=pinot-quickstart
controller.port=9000
controller.data.dir=/var/pinot/controller/data
controller.zk.str=pinot-zookeeper:2181
pinot.set.instance.id.to.hostname=true
controller.task.scheduler.enabled=true
controller.data.dir=gs://pinot-quickstart-deep-storage/data
controller.local.temp.dir=/temp
controller.enable.split.commit=true
pinot.controller.storage.factory.class.gs=org.apache.pinot.plugin.filesystem.GcsPinotFS
pinot.controller.storage.factory.gs.projectId=some-project-id
pinot.controller.storage.factory.gs.gcpKey=/var/pinot/controller/config/gcp-key.json
pinot.controller.segment.fetcher.protocols=file,http,gs
pinot.controller.segment.fetcher.gs.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher
The plugin is there
$ ls /opt/pinot/plugins/pinot-file-system/
pinot-adls pinot-gcs pinot-hdfs pinot-s3
When I check the propertystore.segments in zookeeper, I see that the download url is not a gcs url
"segment.download.url": "<http://pinot-controller-0.pinot-controller-headless.pinot-quickstart.svc.cluster.local:9000/segments/tickets_channels/tickets_channels__2__0__20220826T0933Z>",
Did anyone succeed at configuring this? am I missing something?
Thanks a lot for your help 🙏
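One thing that may be worth comparing: GCS deep-store setups typically configure the servers with the matching PinotFS and segment-fetcher keys as well, not only the controller, since realtime segments are uploaded and downloaded by the servers. A hedged sketch of the server-side counterpart of the controller properties above (project id and key path reuse the values shown earlier and are placeholders):
pinot.server.storage.factory.class.gs=org.apache.pinot.plugin.filesystem.GcsPinotFS
pinot.server.storage.factory.gs.projectId=some-project-id
pinot.server.storage.factory.gs.gcpKey=/var/pinot/server/config/gcp-key.json
pinot.server.segment.fetcher.protocols=file,http,gs
pinot.server.segment.fetcher.gs.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher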
Ruthika Vasudevan
08/29/2022, 4:18 PM
GET /instances/{instanceName} gives certain info. Looking to specifically get the instance state. Anyone here tried something similar?
Tanmesh Mishra
08/29/2022, 5:23 PM
Petr Ponomarenko
08/29/2022, 9:48 PM
Neeraja Sridharan
08/29/2022, 11:29 PM
For replicaGroupPartitionConfig, any recommendation on how to distribute the (50) servers across the (16) partitions for (2) replica groups, i.e. a recommended value for numInstancesPerPartition?
"segmentPartitionConfig": {
"columnPartitionMap": {
"team_id": {
"functionName": "Murmur",
"numPartitions": 16
}
}
}
"replicaGroupPartitionConfig": {
"replicaGroupBased": true,
"numInstances": 0,
"numReplicaGroups": 2,
"numInstancesPerReplicaGroup": 25,
"numPartitions": 0,
"numInstancesPerPartition": 0
}
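A hedged reading of these settings, based on how partitioned replica-group assignment is described in the docs: numReplicaGroups times numInstancesPerReplicaGroup should account for the 50 servers (2 x 25 here), and numInstancesPerPartition controls how many of the 25 servers inside each replica group serve a given team_id partition, trading query fan-out against per-server load. For example, a sketch that spreads each of the 16 partitions over 2 servers per replica group (the value 2 is purely illustrative, not a recommendation):
"replicaGroupPartitionConfig": {
  "replicaGroupBased": true,
  "numReplicaGroups": 2,
  "numInstancesPerReplicaGroup": 25,
  "numPartitions": 16,
  "numInstancesPerPartition": 2
}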
Diogo Baeder
08/30/2022, 9:27 PM
Alice
08/31/2022, 1:08 AM
Diogo Baeder
08/31/2022, 12:15 PM
Naz Karnasevych
08/31/2022, 2:57 PM
pinot.controller.storage.factory.class.gs
Ethan Huang
08/31/2022, 3:15 PM
value > 0
but got results with value = 0
Is this an issue or should I make some special configuration to avoid this?
The query is:
select clock, value from application_metrics where metric = 'successCount' and statistic='duration' and value > 0 limit 10
clock is a time column with TIMESTAMP type and value is a metric column with DOUBLE type; the others are dimension columns. The Pinot version is 0.11.0-SNAPSHOT, built from source at commit 561e471a86278e0e486cd9e602f8499fc8f8517c.
I also ran this query using the broker query api, and also got the wrong results.
Screenshots from the pinot UI and rest api tool are attached.
Thank you very much!
suraj sheshadri
08/31/2022, 9:30 PM
We have a use case where we need to use common table expressions or subqueries, as in the example below.
1) Is there any alternate way in Pinot to achieve the same at this time? We do not want to use Presto/Trino etc. at this time.
2) Do we know when subquery/CTE support is planned for Pinot?
3) Do we know when the feature to insert rows into Pinot using a query will be made available? Do we have a way to create temp tables, so we can use those tables in the next step of the query?
4) Do we have a way to output the results of a query to an S3 location? I only see INSERT INTO "baseballStats" FROM FILE 's3://my-bucket/public_data_set/baseballStats/rawdata/' in the documentation.
WITH weekly_agg AS (
  SELECT
    user,
    SUM(req_daily_cap) AS req_cnt,
    SUM(total_day_cnt) AS total_week_cnt
  FROM fcap_day_agg
  GROUP BY user
),
fcap_weekly_agg AS (
  SELECT
    user,
    IF(weeklyFCAP > 0, MIN(req_cnt, weeklyFCAP), req_cnt) AS req_weekly_cap,
    total_week_cnt
  FROM weekly_agg
)
SELECT SUM(req_weekly_cap) / SUM(total_week_cnt) AS fcap_factor FROM fcap_weekly_agg
Padma Malladi
08/31/2022, 9:45 PM
Padma Malladi
08/31/2022, 9:46 PM
suraj sheshadri
09/01/2022, 2:58 AM
Juraj Pohanka
09/01/2022, 9:01 AM
We have pinot.server.query.executor.max.execution.threads set to -1, and thus have no upper limit on the threads used on a single query. Has anyone experienced similar behavior?
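If the goal is to put an upper bound back in place, the property itself suggests the knob: setting it to a positive number caps the worker threads a single query can use on a server. A minimal sketch (the value 8 is only an example):
# Cap the number of execution threads a single query may use on this server
pinot.server.query.executor.max.execution.threads=8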
Jinny Cho
09/01/2022, 4:08 PM
The docs mention support for the HAVING clause (here). However, when I try it, I receive the following error message. The query I tried was the same as what was written in the doc. Is this something new, meaning we need to upgrade?
Caused by: org.apache.calcite.sql.parser.SqlParseException: Encountered "HAVING" at line 11, column 1.
Was expecting one of:
<EOF>
"LIMIT" ...
"OFFSET" ...
...
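For what it's worth, the parse error shows the parser only expected <EOF>, LIMIT or OFFSET at the point where HAVING appeared, so clause ordering is worth double-checking alongside the version: HAVING has to come directly after GROUP BY and before ORDER BY/LIMIT. A hedged sketch in that shape, reusing the illustrative columns from the earlier group-by example:
SELECT d, SUM(a) AS total
FROM my_table
WHERE b = c
GROUP BY d
HAVING SUM(a) > 100
ORDER BY SUM(a) DESC
LIMIT 10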
Padma Malladi
09/01/2022, 5:43 PM
suraj sheshadri
09/02/2022, 1:14 AM
Deena Dhayalan
09/02/2022, 5:35 AM
Stuart Millholland
09/02/2022, 1:41 PM