Pavel Stejskal
02/17/2022, 11:29 AM
Shadab Anwar
02/17/2022, 12:58 PM
James Mnatzaganian
02/17/2022, 4:37 PM
Some of my segments went into a bad state. I tried reloading the segments, but they remained in the bad state. I'm assuming that I'll need to recompute those segments. What made matters worse is that the bad segments left the table completely unusable.
My questions are:
1. Is it expected that loading a bad segment into a table makes the table unusable? Or is this just because I was initializing it, so it never reached a good state? My hope is that a bad segment would simply be cut off, allowing queries to still run, just not over the bad segments.
2. Is there a way to be certain that an offline segment is "good" and that the table will remain in a good state when it's loaded (before, during, and after the load)? If not, then it seems like the only way to work around this is to have a dummy table that's used to first validate the segment before loading it into the real table; otherwise, the issue in (1) would result in an otherwise healthy table becoming unusable.
Prashant Pandey
02/17/2022, 5:04 PM
I set realtime.segment.flush.threshold.time
to 2h. This is the table's streaming config:
"streamConfigs": {
"streamType": "kafka",
//other configs skipped for brevity
"realtime.segment.flush.threshold.time": "2h",
"stream.kafka.consumer.prop.auto.offset.reset": "smallest"
}
I am expecting segments to commit every 2h but here’re the last few commits:
"raw_trace_view__3__918__20220217T1600Z": {
"Server_server-realtime-5.server-realtime-headless.pinot.svc.cluster.local_8098": "ONLINE"
},
"raw_trace_view__3__919__20220217T1616Z": {
"Server_server-realtime-5.server-realtime-headless.pinot.svc.cluster.local_8098": "ONLINE"
},
"raw_trace_view__3__920__20220217T1631Z": {
"Server_server-realtime-5.server-realtime-headless.pinot.svc.cluster.local_8098": "ONLINE"
},
"raw_trace_view__3__921__20220217T1647Z": {
"Server_server-realtime-5.server-realtime-headless.pinot.svc.cluster.local_8098": "CONSUMING"
}
Why are segments not being committed as expected? Do I need to do anything else after editing the config?
kaivalya apte
02/17/2022, 5:20 PM
Luis Fernandez
02/17/2022, 8:34 PM
For replicationPerPartition
, if I set it to 1, does that mean no replication, or one copy of the data? And if I set it to 2, is that the original data plus 2 more copies?
kaivalya apte
02/18/2022, 10:25 AM
I'm trying to upsert the TYPE
field for an event. I have followed the steps here: https://docs.pinot.apache.org/basics/data-import/upsert.
• set the primary key on the schema (unnested column from incoming json field)
• have the upsertConfig set to
"upsertConfig": {
"mode": "PARTIAL",
"partialUpsertStrategies": {
"type": "OVERWRITE"
},
"defaultPartialUpsertStrategy": "OVERWRITE"
}
• I also have the instanceSelectorType set
"routing": {
"segmentPrunerTypes": [
"partition"
],
"instanceSelectorType": "strictReplicaGroup"
}
• To test this, I produced an event with status1, then republished the event with the same primary key and status2. I expected to see only one record for this event, with its status overwritten to status2, but instead I see two events: one with status1 and the other with status2.
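As an aside, the behavior this test expects can be sketched as "queries see only the latest row per primary key" (a toy model, not Pinot's actual implementation):

```python
def upsert_view(events):
    """Toy model of an upsert-enabled table: queries see only the
    latest record per primary key, in ingestion order."""
    latest = {}
    for event in events:
        latest[event["id"]] = event
    return list(latest.values())

rows = upsert_view([
    {"id": "e1", "type": "status1"},
    {"id": "e1", "type": "status2"},
])
print(rows)  # [{'id': 'e1', 'type': 'status2'}]
```

If two rows come back instead, one common cause, as far as I understand Pinot's upsert requirements, is that the Kafka topic isn't partitioned by the primary key, so the two events land on different stream partitions and are tracked as distinct records.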
• What am I missing?
Shivam Sajwan
02/21/2022, 7:05 AM
kaivalya apte
02/21/2022, 3:20 PM
Caught 'java.net.SocketTimeoutException: Read timed out' while executing GET on URL: http://analytics-pinot-server-0.analytics-pinot-server.email-pinot.svc.test01.k8s.run:8097/table/pinots_REALTIME/size
Connection error
java.util.concurrent.ExecutionException: java.net.SocketTimeoutException: Read timed out
at java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[?:?]
at java.util.concurrent.FutureTask.get(FutureTask.java:191) ~[?:?]
at org.apache.pinot.controller.util.CompletionServiceHelper.doMultiGetRequest(CompletionServiceHelper.java:79) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at org.apache.pinot.controller.api.resources.ServerTableSizeReader.getSegmentSizeInfoFromServers(ServerTableSizeReader.java:69) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at org.apache.pinot.controller.util.TableSizeReader.getTableSubtypeSize(TableSizeReader.java:181) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at org.apache.pinot.controller.util.TableSizeReader.getTableSizeDetails(TableSizeReader.java:101) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at org.apache.pinot.controller.api.resources.TableSize.getTableSize(TableSize.java:83) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at jdk.internal.reflect.GeneratedMethodAccessor818.invoke(Unknown Source) ~[?:?]
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:52) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:124) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:167) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$TypeOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:219) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:79) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:469) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:391) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:80) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:253) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at org.glassfish.jersey.internal.Errors.process(Errors.java:292) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at org.glassfish.jersey.internal.Errors.process(Errors.java:274) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at org.glassfish.jersey.internal.Errors.process(Errors.java:244) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:232) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:679) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpContainer.service(GrizzlyHttpContainer.java:353) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at org.glassfish.grizzly.http.server.HttpHandler$1.run(HttpHandler.java:200) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:569) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:549) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method) ~[?:?]
Also I see some zk disconnections:
Consumed 0 events from (rate:0.0/s), currentOffset=894238198, numRowsConsumedSoFar=555294, numRowsIndexedSoFar=555294
[Consumer clientId=consumer-null-2, groupId=null] Seeking to offset 894406661 for partition email-raw-events-62
[Consumer clientId=consumer-null-53, groupId=null] Seeking to offset 894229286 for partition email-raw-events-98
Consumed 0 events from (rate:0.0/s), currentOffset=894328031, numRowsConsumedSoFar=645137, numRowsIndexedSoFar=645137
[Consumer clientId=consumer-null-52, groupId=null] Seeking to offset 894221230 for partition email-raw-events-86
[Consumer clientId=consumer-null-58, groupId=null] Seeking to offset 894238198 for partition email-raw-events-26
[Consumer clientId=consumer-null-43, groupId=null] Seeking to offset 894338250 for partition email-raw-events-68
zookeeper state changed (Disconnected)
I don't see any issues on the ZK cluster. Any pointers?
Aditya
02/21/2022, 4:14 PM
pinot.server.segment.fetcher.protocols
Is it possible to use S3 deep store with minions? What is the config for this?
Peter Pringle
02/22/2022, 1:33 AM
Yeongju Kang
02/22/2022, 5:37 AM
curl -XDELETE localhost:9000/instances/Server_pinot-server-2.pinot-server-headless.dev-pinot.svc.cluster.local_8098
{"_code":409,"_error":"Failed to drop instance Server_pinot-server-2.pinot-server-headless.dev-pinot.svc.cluster.local_8098 - Instance Server_pinot-server-2.pinot-server-headless.dev-pinot.svc.cluster.local_8098 exists in ideal state for user2_REALTIME"}
• What will happen if I update ZK's ideal state of all tables related to server-2, pointing them to server-1? (The table status became healthy again.)
• Will there also be an automatic copy from the other replicas to maintain the desired replication?
Deepak Mishra
02/22/2022, 8:51 AM
KISHORE B R
02/22/2022, 9:30 AM
kaivalya apte
02/22/2022, 1:11 PM
Cluster manager: Broker_email-analytics-pinot-broker-1.email-analytics-pinot-broker.email-pinot.svc.test01.k8s.run_8099 disconnected
Failed to start Pinot Broker
org.apache.helix.HelixException: Cluster structure is not set up for cluster: email-analytics
at org.apache.helix.manager.zk.ZKHelixManager.handleNewSession(ZKHelixManager.java:1124) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at org.apache.helix.manager.zk.ZKHelixManager.createClient(ZKHelixManager.java:701) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at org.apache.helix.manager.zk.ZKHelixManager.connect(ZKHelixManager.java:738) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at org.apache.pinot.broker.broker.helix.BaseBrokerStarter.start(BaseBrokerStarter.java:209) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at org.apache.pinot.tools.service.PinotServiceManager.startBroker(PinotServiceManager.java:143) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at org.apache.pinot.tools.service.PinotServiceManager.startRole(PinotServiceManager.java:92) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at org.apache.pinot.tools.admin.command.StartServiceManagerCommand$1.lambda$run$0(StartServiceManagerCommand.java:276) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at org.apache.pinot.tools.admin.command.StartServiceManagerCommand.startPinotService(StartServiceManagerCommand.java:302) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at org.apache.pinot.tools.admin.command.StartServiceManagerCommand$1.run(StartServiceManagerCommand.java:276) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
Failed to start a Pinot [BROKER] at 0.691 since launch
org.apache.helix.HelixException: Cluster structure is not set up for cluster: email-analytics
at org.apache.helix.manager.zk.ZKHelixManager.handleNewSession(ZKHelixManager.java:1124) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at org.apache.helix.manager.zk.ZKHelixManager.createClient(ZKHelixManager.java:701) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at org.apache.helix.manager.zk.ZKHelixManager.connect(ZKHelixManager.java:738) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at org.apache.pinot.broker.broker.helix.BaseBrokerStarter.start(BaseBrokerStarter.java:209) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at org.apache.pinot.tools.service.PinotServiceManager.startBroker(PinotServiceManager.java:143) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at org.apache.pinot.tools.service.PinotServiceManager.startRole(PinotServiceManager.java:92) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at org.apache.pinot.tools.admin.command.StartServiceManagerCommand$1.lambda$run$0(StartServiceManagerCommand.java:276) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at org.apache.pinot.tools.admin.command.StartServiceManagerCommand.startPinotService(StartServiceManagerCommand.java:302) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
at org.apache.pinot.tools.admin.command.StartServiceManagerCommand$1.run(StartServiceManagerCommand.java:276) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-8bbf93aa4377dbdf597e7940670893330452b33f]
Shutting down Pinot Service Manager with all running Pinot instances...
Shutting down Pinot Service Manager admin application...
Deregistering service status handler
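For context on the "Cluster structure is not set up" error above: Helix throws this when the cluster's znodes don't exist in the ZooKeeper ensemble the broker connects to, which usually points at a wrong ZK address or root path, a wrong cluster name, or a controller that hasn't initialized the cluster yet. A rough sketch of the paths one could check (this znode layout is my understanding of Helix's convention; verify against your version):

```python
def expected_helix_znodes(cluster):
    """Znode paths Helix is expected to create when a cluster is set up.

    NOTE: approximate layout based on Helix conventions; confirm with
    `ls /<cluster>` in zkCli.sh against your deployment.
    """
    subpaths = [
        "CONFIGS/CLUSTER", "CONFIGS/PARTICIPANT", "CONFIGS/RESOURCE",
        "IDEALSTATES", "EXTERNALVIEW", "LIVEINSTANCES", "INSTANCES",
        "STATEMODELDEFS", "CONTROLLER", "PROPERTYSTORE",
    ]
    return [f"/{cluster}/{p}" for p in subpaths]

for path in expected_helix_znodes("email-analytics"):
    print(path)
```

If those paths are missing under the ZK root the broker is configured with, the broker is most likely pointed at the wrong ensemble or cluster name rather than hitting a Helix bug.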
Luis Fernandez
02/22/2022, 8:44 PM
Shailesh Jha
02/23/2022, 4:16 AM
Error opening zip file or JAR manifest missing : /opt/pinot/etc/jmx_prometheus_javaagent/jmx_prometheus_javaagent-0.12.0.jar
Error occurred during initialization of VM
agent library failed to init: instrument
Thanks Team
@Daniel Lavoie
Elon
02/23/2022, 5:04 AM
Ayush Kumar Jha
02/23/2022, 2:45 PM
If pinot.broker.timeoutMs
is greater than pinot.server.query.executor.timeout
, what will be the actual timeout of the query: the server's or the broker's?
kaivalya apte
02/23/2022, 3:39 PM
I'm using the INCREMENT
upsert config type with something like:
"upsertConfig": {
"mode": "PARTIAL",
"partialUpsertStrategies": {
"countOfEvents": "INCREMENT",
"type": "OVERWRITE"
},
"defaultPartialUpsertStrategy": "OVERWRITE",
"hashFunction": "MURMUR3"
},
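A toy model of how these strategies should merge an incoming event into the stored record (a simplified sketch, not Pinot's actual code):

```python
def merge_partial(current, incoming, strategies, default="OVERWRITE"):
    """Toy model of Pinot-style partial-upsert merge strategies:
    INCREMENT adds the incoming value to the stored one, OVERWRITE
    (the default here) replaces it."""
    merged = dict(current)
    for field, value in incoming.items():
        if field not in current:
            merged[field] = value
        elif strategies.get(field, default) == "INCREMENT":
            merged[field] = current[field] + value
        else:  # OVERWRITE
            merged[field] = value
    return merged

strategies = {"countOfEvents": "INCREMENT", "type": "OVERWRITE"}
row = {"id": "e1", "type": "status1", "countOfEvents": 1}
row = merge_partial(row, {"id": "e1", "type": "status2", "countOfEvents": 1}, strategies)
print(row)  # {'id': 'e1', 'type': 'status2', 'countOfEvents': 2}
```

Worth double-checking (these are assumptions on my part): that the incoming events actually carry a numeric countOfEvents value to add, and that the column is a numeric type, since INCREMENT only makes sense for numeric fields.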
On upserts I see that the type field was overwritten; however, countOfEvents didn't increment. Am I missing something?
Vibhor Jaiswal
02/24/2022, 4:57 PM
2022/02/23 16:50:56.586 ERROR [PinotTableIdealStateBuilder] [grizzly-http-server-0] Could not get PartitionGroupMetadata for topic: gsp.dataacquisition.risk.public.v2.<Redacted> of table: <Redacted>_REALTIME
org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
2022/02/23 16:50:56.591 ERROR [PinotTableRestletResource] [grizzly-http-server-0] org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
java.lang.RuntimeException: org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
at org.apache.pinot.controller.helix.core.PinotTableIdealStateBuilder.getPartitionGroupMetadataList(PinotTableIdealStateBuilder.java:172) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-428e7d75f91b9d4b4a2288f131d02d643bb2df5d]
at org.apache.pinot.controller.helix.core.realtime.PinotLLCRealtimeSegmentManager.getNewPartitionGroupMetadataList(PinotLLCRealtimeSegmentManager.java:764)
Below is the table config for reference -
{
"tableName": "<Redacted>",
"tableType": "REALTIME",
"segmentsConfig": {
"schemaName": "<Redacted>",
"timeColumnName": "PublishDateTimeUTC",
"allowNullTimeValue": false,
"replication": "1",
"replicasPerPartition": "2",
"completionConfig":{
"completionMode":"DOWNLOAD"
}
},
"tenants": {
"broker": "DefaultTenant",
"server": "DefaultTenant",
"tagOverrideConfig": {}
},
"tableIndexConfig": {
"invertedIndexColumns": [],
"noDictionaryColumns": ["some columns "],
"rangeIndexColumns": [],
"rangeIndexVersion": 1,
"autoGeneratedInvertedIndex": false,
"createInvertedIndexDuringSegmentGeneration": false,
"sortedColumn": [],
"bloomFilterColumns": [],
"loadMode": "MMAP",
"streamConfigs": {
"streamType": "kafka",
"stream.kafka.topic.name": "gsp.dataacquisition.risk.public.v2.<Redacted>",
"stream.kafka.broker.list": "comma separated list of servers",
"stream.kafka.consumer.type": "lowlevel",
"stream.kafka.consumer.prop.auto.offset.reset": "largest",
"stream.kafka.schema.registry.url": <http://someaddress:8081>,
"stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
"stream.kafka.sasl.mechanism": "SCRAM-SHA-256" ,
"stream.kafka.security.protocol": "SASL_PLAINTEXT" ,
"stream.kafka.sasl.jaas.config":"org.apache.kafka.common.security.scram.ScramLoginModule required username=\"some user\" password=\"somepwd\"",
"stream.kafka.decoder.class.name": "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder",
"realtime.segment.flush.threshold.rows": "0",
"realtime.segment.flush.threshold.size":"0",
"realtime.segment.flush.threshold.time": "24h",
"realtime.segment.flush.autotune.initialRows": "3000000",
"realtime.segment.flush.threshold.segment.size": "500M"
},
"onHeapDictionaryColumns": [],
"varLengthDictionaryColumns": [],
"enableDefaultStarTree": false,
"enableDynamicStarTreeCreation": false,
"aggregateMetrics": false,
"nullHandlingEnabled": false
},
"metadata": {},
"quota": {},
"routing": {"instanceSelectorType": "strictReplicaGroup"},
"query": {},
"ingestionConfig": {},
"isDimTable": false,
"upsertConfig": {
"mode": "FULL",
"comparisonColumn": "PublishDateTimeUTC"
},
"primaryKeyColumns": [
"BusinessDate","UID","UIDType","LegId"
]
}
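Regarding the "Timeout expired while fetching topic metadata" error with a config like the one above: in my experience this is usually connectivity or authentication between the Pinot controller and the Kafka brokers (note the SASL settings), rather than a problem in the table config itself. A minimal reachability check, assuming the comma-separated host:port broker-list format shown:

```python
import socket

def parse_broker_list(broker_list):
    """Split a comma-separated 'host:port,host:port' Kafka broker list
    into (host, port) tuples."""
    pairs = []
    for entry in broker_list.split(","):
        host, _, port = entry.strip().rpartition(":")
        pairs.append((host, int(port)))
    return pairs

def reachable(host, port, timeout=5.0):
    """TCP-level check only; SASL credential problems won't show up here."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(parse_broker_list("kafka-1:9092,kafka-2:9092"))
# [('kafka-1', 9092), ('kafka-2', 9092)]
```

If the TCP connection from the controller host succeeds but metadata fetches still time out, the SASL mechanism and credentials would be my next suspects.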
Elon
02/24/2022, 9:45 PM
Has anyone set controller.enable.batch.message.mode
to true? I see a GitHub issue from Pinot 0.2.0 that switched it to false by default due to high controller GC. Do you think it's safe to enable now? Pinot has evolved a lot since then. 🙂
sunny
02/25/2022, 12:48 AM
Diogo Baeder
02/25/2022, 2:44 PM
I know I can add option(timeoutMs=60000)
after a table name to increase the timeout, but the problem is, I'm using the SQLAlchemy library in my Python project and haven't yet found a way to make it compile that into the query. Is there some other way to increase the timeout on a per-query basis? Something like a SET TIMEOUT=60000
query I can execute before my normal query?
Diogo Baeder
02/25/2022, 5:53 PM
Diogo Baeder
02/25/2022, 8:54 PM
requestId=14,table=<redacted>,timeMs=545,docs=259503/9327428,entries=3080570/1038012,segments
Does this mean that the query took 545ms to yield a result? Or does it just mean that the broker processed the query in that time and then sent the data queries to the servers? I'm asking because getting all the data into my application (plus SQLAlchemy processing time) took about 40s, so I'm wondering where all that time is being spent... (I might just do some profiling on my side, but I'm asking here because I want a better understanding of the logs I get from Pinot.)
Diogo Baeder
02/27/2022, 3:56 PM
Aditya
02/28/2022, 9:52 AM
Luis Fernandez
03/02/2022, 2:52 PM
Luis Fernandez
03/02/2022, 3:57 PM
When I do SUM(impression_count)
, in the return types in the JSON I see this:
"columnDataTypes": [
"DOUBLE"
]
but impression_count is an int. Why is the columnDataType a double? Is there any way to fix it in the query? Thank you!
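For what it's worth: Pinot's SUM aggregation accumulates in and returns DOUBLE regardless of the input column type, so this is expected. If your Pinot version supports CAST, something like CAST(SUM(impression_count) AS LONG) may give an integral type back (worth verifying on your version); otherwise the value can be converted client-side, e.g.:

```python
def int_sum_from_response(response_row, column_index=0):
    """Pinot returns SUM as a DOUBLE (e.g. 12345.0); convert it back
    to int client-side when the summed column is integral."""
    value = response_row[column_index]
    as_int = int(value)
    if as_int != value:  # guard against silently truncating a fraction
        raise ValueError(f"non-integral sum: {value}")
    return as_int

print(int_sum_from_response([12345.0]))  # 12345
```

One caveat: sums larger than 2^53 can lose precision while held as a DOUBLE, so the cast-back is only exact below that range.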