Xiang Fu
Arnav
10/28/2025, 9:46 AM
SELECT * FROM table
WHERE customer_id = 1234
AND msisdn IN ( ..1000 msisdns)
Query 2:
SELECT * FROM table
WHERE customer_id = 1234
AND msisdn IN ( ..350 msisdns)
UNION ALL
SELECT * FROM append_iot_session_events
WHERE customer_id = 1234
AND msisdn IN (..350 msisdns)
UNION ALL
SELECT * FROM append_iot_session_events
WHERE customer_id = 1234
AND msisdn IN (..300 msisdns)
robert zych
10/29/2025, 2:44 PM
Matt Nawara
10/30/2025, 12:26 PM
All metrics must have aggregation configs. I feel like it is at the heart of what we are seeing now; in essence, you can't update the schema with a new metric, as the API says:
PUT schema response: {'code': 400, 'error': 'Invalid schema: staging_stream_st_mknaw_idle_worker_test14_sg_12. Reason: Schema is incompatible with tableConfig with name: staging_stream_st_mknaw_idle_worker_test14_sg_12_REALTIME and type: REALTIME'}
and, probably correctly, it does not work the other way around either; trying to get the table update in before the schema update gives:
PUT table response: {'code': 400, 'error': "Invalid table config: staging_stream_st_mknaw_idle_worker_test14_sg_12 with error: The destination column 'mtr_clicks_sum' of the aggregation function must be present in the schema"}
so... is the implication that a Pinot schema/table pair that has ingestion aggregation can never evolve? That would be unfortunate.
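For readers following the thread: the errors above point at the ingestion aggregation section of the table config. A minimal sketch of the kind of config involved, assuming a hypothetical source column mtr_clicks feeding the aggregated metric (only the destination column mtr_clicks_sum appears in the error messages):

"ingestionConfig": {
  "aggregationConfigs": [
    {
      "columnName": "mtr_clicks_sum",
      "aggregationFunction": "SUM(mtr_clicks)"
    }
  ]
}

Since the destination column must be present in the schema and every metric must have an aggregation config, adding a new metric appears to require the schema change and the table config change to land together, which is why each PUT fails on its own.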
Gerald Bonfiglio
10/30/2025, 5:08 PM
mg
11/04/2025, 10:44 AM
Arnav
11/11/2025, 4:25 AM"task": {
"taskTypeConfigsMap": {
"UpsertCompactionTask": {
"schedule": "0 0 */4 ? * *",
"bufferTimePeriod": "1h",
"invalidRecordsThresholdPercent": "0",
"invalidRecordsThresholdCount": "1",
"validDocIdsType": "SNAPSHOT"
}
}
},
It's taking too much time; how can I optimise it?
Satya Mahesh
11/12/2025, 10:22 AM
RANJITH KUMAR
11/14/2025, 11:05 AM
Suresh PERUML
11/14/2025, 3:57 PM
Xiang Fu
Arnav
11/17/2025, 6:10 AM
Qosimjon Mamatqulov
11/18/2025, 10:51 AM
San Kumar
11/20/2025, 11:20 AM
Eric Wohlstadter
11/20/2025, 9:43 PM
Arnav
11/24/2025, 6:26 AM
"task": {
  "taskTypeConfigsMap": {
    "UpsertCompactionTask": {
      "schedule": "0 */5 * ? * *",
      "bufferTimePeriod": "1h",
      "invalidRecordsThresholdCount": "1",
      "tableMaxNumTasks": "40",
      "validDocIdsType": "SNAPSHOT"
    },
    "UpsertCompactMergeTask": {
      "schedule": "0 0 */1 ? * *",
      "bufferTimePeriod": "1m",
      "maxNumSegmentsPerTask": "100",
      "maxNumRecordsPerSegment": "50000000"
    }
  }
}
Prateek Garg
11/24/2025, 12:01 PM
pinot.grpc.port
I'd appreciate clarification on the following points:
• Does the gRPC port mentioned in the Trino documentation refer to the Pinot Server's gRPC port, or to the Broker's gRPC API port?
• If it refers to the Pinot Server's port, is there any mechanism to make it work for our configuration, where there are multiple server instances per machine, each with a different gRPC port?
Documents for Reference:
https://trino.io/docs/current/connector/pinot.html#grpc-configuration-properties
https://docs.pinot.apache.org/users/api/broker-grpc-api
RANJITH KUMAR
11/26/2025, 8:52 AM
RANJITH KUMAR
11/26/2025, 8:54 AM
Arnav
12/01/2025, 5:59 AM
Arnav
12/01/2025, 9:19 AM
Senthil Kumar
12/01/2025, 12:45 PM
RANJITH KUMAR
12/02/2025, 6:41 PM
robert zych
12/03/2025, 11:23 PM
Expression cycle problem when using an ingestion transform function where the source field name matches the Pinot column name? In my case I have GeoJSON that has a geometry field containing coordinates ({"geometry": {"coordinates": [-82.41959120338983, 35.61020539194915]}}). I'm looking for a way to convert those coordinates using toSphericalGeography(stPoint(lon, lat)) into a column also named geometry.
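A sketch of the kind of transform being described, assuming the lon/lat values are pulled out with jsonPathDouble (the extraction method is an assumption; GeoJSON stores coordinates as [longitude, latitude]). The cycle comes from the destination column geometry also appearing as the source argument:

"ingestionConfig": {
  "transformConfigs": [
    {
      "columnName": "geometry",
      "transformFunction": "toSphericalGeography(stPoint(jsonPathDouble(geometry, '$.coordinates[0]'), jsonPathDouble(geometry, '$.coordinates[1]')))"
    }
  ]
}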
Alexander Maniates
12/04/2025, 8:39 PM
{
  "routing": {
    "instanceSelectorType": "strictReplicaGroup"
  }
}
and also numInstancesPerPartition=1
With this in mind, is it still possible to scale up the number of servers horizontally once the table is set up? And what is the process then to do so?
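For context, strictReplicaGroup routing is typically paired with replica-group based instance assignment; a rough sketch of where numInstancesPerPartition sits, with illustrative values (two replica groups assumed, tag/pool settings omitted):

"instanceAssignmentConfigMap": {
  "CONSUMING": {
    "replicaGroupPartitionConfig": {
      "replicaGroupBased": true,
      "numReplicaGroups": 2,
      "numInstancesPerPartition": 1
    }
  }
}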
Shrusti Patel
12/08/2025, 7:55 AM
RANJITH KUMAR
12/08/2025, 8:37 AM
Rishabh Sharma
12/09/2025, 6:09 AM
Alexander Maniates
12/09/2025, 8:55 PM
metadataTTL is set, do primary keys then expire and free up memory at the end of the data TTL (set by the segmentsConfig retention settings)? Or do primary keys live forever in this case?
2. Should metadataTTL always be set to longer than the segment config retention setting? I imagine if you went to update a record that fell out of the metadataTTL, but the old segment data was still around, you could have duplicate data at that point?
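For reference, the two settings in question live in different parts of the table config; a rough sketch with illustrative values, assuming the upsert comparison column is in epoch seconds (metadataTTL is, to my understanding, interpreted in the units of that column, so 86400 here would mean one day):

"segmentsConfig": {
  "retentionTimeUnit": "DAYS",
  "retentionTimeValue": "7"
},
"upsertConfig": {
  "mode": "FULL",
  "metadataTTL": 86400
}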
Padmini
12/10/2025, 12:28 PM