prasanna
08/23/2024, 5:29 AM

Adil Shaikh
08/23/2024, 9:50 AM

Pramiti
08/23/2024, 11:27 AM

Zhuangda Z
08/24/2024, 2:57 AM

Zhuangda Z
08/24/2024, 4:03 PM

Slackbot
08/26/2024, 10:34 AM

prasanna
08/26/2024, 1:25 PM

Sandeep R
08/26/2024, 6:30 PM

raghav
08/27/2024, 10:45 AM
2024/08/27 10:41:33.110 ERROR [ClusterStateVerifier] [pool-3-thread-1] Table drift_execution_history_REALTIME is not stable. numUnstablePartitions: 18

Nathan
08/27/2024, 11:42 AM
pinot-java-client-1.2.0.jar ?

Vũ Lê
08/28/2024, 3:18 AM

Apoorv Upadhyay
08/28/2024, 6:38 AM
"dateTimeFieldSpecs": [
  {
    "name": "order_date",
    "dataType": "LONG",
    "defaultNullValue": 0,
    "format": "1:MILLISECONDS:EPOCH",
    "granularity": "1:SECONDS"
  }
]
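For context on the question that follows: Pinot's documented built-in default null value for a LONG dateTime column is Long.MIN_VALUE, so seeing that value usually means the explicit `defaultNullValue` never took effect. A small sketch of that fallback logic (the helper is hypothetical, assuming the schema JSON was fetched back from the controller, e.g. via `GET /schemas/<name>`):

```python
# Sketch: check which default null value a Pinot schema actually carries for a
# dateTime column. LONG_MIN_VALUE is Java's Long.MIN_VALUE, Pinot's built-in
# default for LONG dateTime fields when no explicit defaultNullValue applies.

LONG_MIN_VALUE = -(2 ** 63)  # Java Long.MIN_VALUE

def effective_default(schema: dict, column: str) -> int:
    """Return the default null value Pinot would use for a dateTime column."""
    for spec in schema.get("dateTimeFieldSpecs", []):
        if spec.get("name") == column:
            return spec.get("defaultNullValue", LONG_MIN_VALUE)
    raise KeyError(f"{column} not found in dateTimeFieldSpecs")

schema = {
    "dateTimeFieldSpecs": [
        {
            "name": "order_date",
            "dataType": "LONG",
            "defaultNullValue": 0,
            "format": "1:MILLISECONDS:EPOCH",
            "granularity": "1:SECONDS",
        }
    ]
}
print(effective_default(schema, "order_date"))  # prints 0 when the spec was applied
```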
I was expecting the column to have a default value of 0, but it is Long.MIN_VALUE. Any possible reason for this?

Anand Kr Shaw
08/29/2024, 6:57 AM

Mrityunjay Sharma
08/29/2024, 11:14 AM
controller:
  extra:
    configs: |-
      pinot.set.instance.id.to.hostname=true
      controller.task.scheduler.enabled=true
      pinot.controller.segment.fetcher.protocols=file,http,s3
      pinot.controller.segment.fetcher.s3.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher
      pinot.controller.storage.factory.s3.region=us-east-1
      controller.helix.cluster.name=pinot
      controller.data.dir=s3://{s3-bucket-name}
      controller.local.temp.dir=/tmp/pinot-tmp-data/
      controller.enable.split.commit=true
      pinot.controller.storage.factory.class.s3=org.apache.pinot.plugin.filesystem.S3PinotFS
      pinot.controller.storage.factory.s3.accessKey={access-key}
      pinot.controller.storage.factory.s3.secretKey={secret-key}
      pinot.controller.storage.factory.s3.disableAcl=false
server:
  extra:
    configs: |-
      pinot.set.instance.id.to.hostname=true
      pinot.server.instance.realtime.alloc.offheap=true
      pinot.query.server.port=7321
      pinot.query.runner.port=7732
      pinot.server.storage.factory.class.s3=org.apache.pinot.plugin.filesystem.S3PinotFS
      pinot.server.segment.fetcher.protocols=file,http,s3
      pinot.server.segment.fetcher.s3.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher
      pinot.server.instance.enable.split.commit=true
      pinot.server.storage.factory.s3.httpclient.maxConnections=50
      pinot.server.storage.factory.s3.httpclient.socketTimeout=30s
      pinot.server.storage.factory.s3.httpclient.connectionTimeout=2s
      pinot.server.storage.factory.s3.httpclient.connectionTimeToLive=0s
      pinot.server.storage.factory.s3.httpclient.connectionAcquisitionTimeout=10s
      pinot.use-streaming-for-segment-queries=true
      realtime.segment.serverUploadToDeepStore=true
      pinot.server.storage.factory.s3.region=us-east-1
      pinot.server.instance.dataDir=s3://{s3-bucket-name}
      pinot.server.instance.segmentTarDir=/tmp/pinot-tmp/server/segmentTars
      pinot.server.storage.factory.s3.disableAcl=false
      pinot.server.storage.factory.s3.endpoint=s3://{s3-bucket-name}
      pinot.server.segment.store.uri=s3://{s3-bucket-name}
      pinot.server.instance.segment.store.uri=s3://{s3-bucket-name}
      pinot.server.storage.factory.s3.accessKey={access-key}
      pinot.server.storage.factory.s3.secretKey={secret-key}
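A quick way to cross-check configs like the ones above is to verify that every protocol listed under `segment.fetcher.protocols` has a matching `storage.factory.class.<protocol>` entry. A minimal sketch (the helper names and rule are mine, for illustration, not an official Pinot validator):

```python
# Sketch: sanity-check that each segment fetcher protocol beyond file/http in
# a Pinot properties block has a matching storage factory class configured.

def parse_props(text: str) -> dict:
    """Parse simple key=value lines (like the |- blocks above) into a dict."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if line and "=" in line:
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    return props

def missing_factories(props: dict, component: str) -> list:
    """Protocols (beyond file/http) lacking a storage.factory.class entry."""
    protocols = props.get(f"pinot.{component}.segment.fetcher.protocols", "")
    needed = [p for p in protocols.split(",") if p and p not in ("file", "http")]
    return [p for p in needed
            if f"pinot.{component}.storage.factory.class.{p}" not in props]

conf = parse_props("""
pinot.server.segment.fetcher.protocols=file,http,s3
pinot.server.storage.factory.class.s3=org.apache.pinot.plugin.filesystem.S3PinotFS
""")
print(missing_factories(conf, "server"))  # prints [] -> the s3 factory is configured
```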
Bruno Mendes
08/29/2024, 2:02 PM

Dor Levi
08/29/2024, 4:59 PM

Bruno Mendes
08/29/2024, 5:33 PM
UPDATING for more than one day, how should I troubleshoot to discover what is happening?

Dor Levi
08/30/2024, 11:01 PM
"traceInfo": {}
Do we need to turn on some other flags as well?

meshari aldossari
08/30/2024, 11:20 PM

Anand Kr Shaw
08/31/2024, 8:19 AM
...
jvmOpts: "-javaagent:/opt/pinot/etc/jmx_prometheus_javaagent/jmx_prometheus_javaagent.jar=8008:/opt/pinot/etc/jmx_prometheus_javaagent/configs/pinot.yml -Xms256M -Xmx1G"
I am following this document, but somehow the port is not up.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: pinot-controller
  namespace: pinot
spec:
  serviceName: "pinot-controller"
  replicas: 1
  selector:
    matchLabels:
      app: pinot-controller
  template:
    metadata:
      labels:
        app: pinot-controller
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8008"
        prometheus.io/path: "/metrics"
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app.kubernetes.io/name
                    operator: In
                    values:
                      - zookeeper
              topologyKey: "kubernetes.io/hostname"
      containers:
        - name: pinot-controller
          image: apachepinot/pinot:1.2.0
          env:
            - name: JVM_OPTS
              value: "-javaagent:/opt/pinot/etc/jmx_prometheus_javaagent/jmx_prometheus_javaagent.jar=8008:/opt/pinot/etc/jmx_prometheus_javaagent/configs/pinot.yml -Xms256M -Xmx1G"
          ports:
            - containerPort: 9000
            - containerPort: 8008
          command:
            - "bin/pinot-admin.sh"
            - "StartController"
            - "-zkAddress"
            - "zookeeper-0:2181"
            - "-configFileName"
            - "/config/controller.conf"
          resources:
            requests:
              memory: "4Gi"
              cpu: "4000m"
            limits:
              memory: "8Gi"
              cpu: "8000m"
          volumeMounts:
            - mountPath: /data
              name: pinot-controller-storage
            - mountPath: /config
              name: pinot-config
      volumes:
        - name: pinot-config
          configMap:
            name: pinot-controller-config
  volumeClaimTemplates:
    - metadata:
        name: pinot-controller-storage
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 50Gi
        storageClassName: "gp2"
---
apiVersion: v1
kind: Service
metadata:
  name: pinot-controller
  namespace: pinot
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "8008"
    prometheus.io/path: "/metrics"
spec:
  ports:
    - name: http
      port: 9000
      targetPort: 9000
    - name: prometheus
      port: 8008
      targetPort: 8008
  selector:
    app: pinot-controller
From within the pod:
sh-5.2# curl http://localhost:8008/metrics
curl: (7) Failed to connect to localhost port 8008 after 0 ms: Couldn't connect to server

Adil Shaikh
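One way to narrow down the curl failure above is a plain TCP probe against the agent port from inside the pod: if nothing is listening, the javaagent never started, so the next step would be checking the container logs for agent errors and confirming `JVM_OPTS` was actually picked up. A generic sketch (host and port taken from the manifest above, nothing Pinot-specific):

```python
# Sketch: minimal TCP reachability check for the JMX exporter agent port.
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # If this is False inside the pod, the agent never bound the port.
    print(port_open("localhost", 8008))
```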
08/31/2024, 2:06 PM

Sumitra Saksham
09/02/2024, 8:11 AM

Jaideep C
09/02/2024, 10:12 AM

Sumitra Saksham
09/02/2024, 2:24 PM
{
  "tableName": "RawData",
  "tableType": "REALTIME",
  "segmentsConfig": {
    "timeColumnName": "event_time",
    "schemaName": "RawData",
    "replication": "2",
    "retentionTimeUnit": "DAYS",
    "retentionTimeValue": "180",
    "minimizeDataMovement": true,
    "segmentAssignmentStrategy": "BalanceNumSegmentAssignmentStrategy",
    "replicasPerPartition": "2",
    "completionMode": "DOWNLOAD",
    "peerSegmentDownloadScheme": "http"
  },
  "tenants": {
    "broker": "DefaultTenant",
    "server": "DefaultTenant"
  },
  "routing": {
    "instanceSelectorType": "strictReplicaGroup",
    "segmentPrunerTypes": [
      "time"
    ]
  },
  "query": {
    "timeoutMs": 30000,
    "disableGroovy": true,
    "useApproximateFunction": true,
    "maxQueryResponseSizeBytes": 104857600,
    "maxServerResponseSizeBytes": 52428800
  },
  "tableIndexConfig": {
    "loadMode": "MMAP",
    "streamConfigs": {
      "streamType": "kafka",
      "stream.kafka.broker.list": "${KAFKA_BROKERS}",
      "stream.kafka.consumer.prop.auto.offset.reset": "smallest",
      "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder",
      "realtime.segment.flush.threshold.time": "24h",
      "stream.kafka.topic.name": "raw-data",
      "stream.kafka.consumer.type": "lowlevel",
      "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
      "realtime.segment.flush.threshold.rows": "100000",
      "realtime.segment.flush.segment.size": "1GB",
      "sasl.mechanism": "SCRAM-SHA-256",
      "security.protocol": "SASL_PLAINTEXT",
      "sasl.jaas.config": "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"${SASL_USERNAME}\" password=\"${SASL_PASSWORD}\";"
    },
    "enableDefaultStarTree": true,
    "invertedIndexColumns": [
      "id",
      "raw_val"
    ],
    "rangeIndexColumns": [
      "event_time"
    ],
    "sortedColumn": [
      "event_time"
    ],
    "aggregateMetrics": false,
    "nullHandlingEnabled": false,
    "columnMajorSegmentBuilderEnabled": true,
    "starTreeIndexConfigs": [
      {
        "dimensionsSplitOrder": [
          "id",
          "raw_val",
          "event_time"
        ],
        "skipStarNodeCreationForDimensions": [],
        "functionColumnPairs": [
          "COUNT__*"
        ],
        "maxLeafRecords": 10000
      }
    ]
  },
  "metadata": {
    "customConfigs": {}
  },
  "ingestionConfig": {
    "continueOnError": false,
    "rowTimeValueCheck": false,
    "segmentTimeValueCheck": true
  },
  "fieldConfigList": [
    {
      "name": "id",
      "encodingType": "DICTIONARY"
    },
    {
      "name": "raw_val",
      "encodingType": "DICTIONARY"
    }
  ]
}
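A table config like the one above can be linted mechanically for a few common inconsistencies (missing stream settings, replication mismatches) before digging into server behavior. A sketch with an assumed, non-exhaustive rule list, not Pinot's own validation:

```python
# Sketch: quick lint for a REALTIME table config. The required-key list and
# checks are illustrative assumptions, not an official Pinot validator.
import json

REQUIRED_STREAM_KEYS = [
    "streamType",
    "stream.kafka.broker.list",
    "stream.kafka.topic.name",
    "stream.kafka.consumer.type",
    "stream.kafka.consumer.factory.class.name",
    "stream.kafka.decoder.class.name",
]

def lint_table_config(config: dict) -> list:
    """Return a list of human-readable findings (empty means none found)."""
    findings = []
    stream = config.get("tableIndexConfig", {}).get("streamConfigs", {})
    for key in REQUIRED_STREAM_KEYS:
        if key not in stream:
            findings.append(f"missing streamConfigs key: {key}")
    repl = config.get("segmentsConfig", {}).get("replication")
    rpp = config.get("segmentsConfig", {}).get("replicasPerPartition")
    if repl != rpp:
        findings.append(f"replication ({repl}) != replicasPerPartition ({rpp})")
    return findings

config = json.loads("""{
  "segmentsConfig": {"replication": "2", "replicasPerPartition": "2"},
  "tableIndexConfig": {"streamConfigs": {
    "streamType": "kafka",
    "stream.kafka.broker.list": "broker:9092",
    "stream.kafka.topic.name": "raw-data",
    "stream.kafka.consumer.type": "lowlevel",
    "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
    "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder"
  }}
}""")
print(lint_table_config(config))  # prints []
```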
I am using 3 Servers and 1 Controller. I am using GKS for deep store. Can you please help?

Deepak Gautam
09/03/2024, 9:31 AM
2024/09/03 09:23:24.816 INFO [VerifySegmentState] [main] Segment: d3_ob_metrics_1H__7__9__20240826T0915Z idealstate: {Server_pinot-server-19.pinot-server-headless.d3-ob-cluster-latest.svc.cluster.local_8098=ONLINE, Server_pinot-server-2.pinot-server-headless.d3-ob-cluster-latest.svc.cluster.local_8098=ONLINE} does NOT match external view: {Server_pinot-server-2.pinot-server-headless.d3-ob-cluster-latest.svc.cluster.local_8098=ONLINE}
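The mismatch in the log above can be diffed mechanically: parse the two Helix-style state maps and list the replicas the external view is missing. A small sketch targeting the `{instance=STATE, ...}` format that log line prints:

```python
# Sketch: diff a segment's ideal state against its external view, as printed
# by VerifySegmentState, to list replicas missing from the external view.

def parse_state_map(text: str) -> dict:
    """Parse a Helix-style '{instance=STATE, ...}' string into a dict."""
    inner = text.strip().strip("{}")
    pairs = [p.strip() for p in inner.split(",") if p.strip()]
    return dict(p.split("=", 1) for p in pairs)

def missing_replicas(ideal: str, external: str) -> list:
    """Instances that the ideal state expects but the external view lacks."""
    ideal_map = parse_state_map(ideal)
    external_map = parse_state_map(external)
    return sorted(set(ideal_map) - set(external_map))

ideal = "{Server_a_8098=ONLINE, Server_b_8098=ONLINE}"
external = "{Server_b_8098=ONLINE}"
print(missing_replicas(ideal, external))  # prints ['Server_a_8098']
```

A server listed here is the one to inspect (logs, disk, segment download from deep store) for why it never reached ONLINE.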
Nick Just
09/03/2024, 9:53 AM
Bad Connection: Tableau could not connect to the data source.
Error Code: FAB9A2C5
org.apache.pinot.client.PinotClientException: Pinot returned HTTP status 400, expected 200
Pinot Version:
1.0.0
Jar files are located in the Drivers folder, as stated in the docs:
• pinot-jdbc-client-1.0.0-shaded.jar
• async-http-client-2.12.3.jar
• calcite-core-1.34.0.jar
I checked via curl that the broker and server endpoints are working.