Raunak Binani
05/21/2025, 1:20 PM
Vipin Rohilla
05/22/2025, 6:36 AM
curl -v -u user:secret http://stg-xxxxxxxxxx:9000/tables
* Trying 10.xx.xx.xxx:9000...
* TCP_NODELAY set
* Connected to stg-xxxxxxxx (10.57.46.223) port 9000 (#0)
* Server auth using Basic with user 'user'
> GET /tables HTTP/1.1
> Host: stg-xxxxxxxx:9000
> Authorization: Basic dXNlcjpzZWNyZXQ=
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Pinot-Controller-Host: stg-xxxxxxx
< Pinot-Controller-Version: Unknown
< Access-Control-Allow-Origin: *
< Access-Control-Allow-Methods: GET, POST, PUT, OPTIONS, DELETE
< Access-Control-Allow-Headers: *
< Content-Type: application/json
< Content-Length: 135
<
* Connection #0 to host xxxxxxx left intact
{"tables":["cmsCaseActivity","cmsCaseComment","cmsCaseLifecycleLog","cmsCaseSnapshot","cmsInvestigationNote","payment_payment","test"]}
Starysn
05/22/2025, 6:37 AM
Jovan Vuković
05/22/2025, 6:09 PM
Jovan Vuković
05/22/2025, 6:29 PM
docker exec pinot-controller ./bin/pinot-admin.sh \
> LaunchDataIngestionJob \
> -jobSpecFile /config/orders/orders_job_spec.json
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.codehaus.groovy.reflection.CachedClass (file:/opt/pinot/lib/pinot-all-1.1.0-jar-with-dependencies.jar) to method java.lang.Object.finalize()
WARNING: Please consider reporting this to the maintainers of org.codehaus.groovy.reflection.CachedClass
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2025/05/22 18:28:38.417 ERROR [LaunchDataIngestionJobCommand] [main] Got exception to generate IngestionJobSpec for data ingestion job -
org.yaml.snakeyaml.constructor.ConstructorException: Cannot create property=recordReaderSpec for JavaBean=org.apache.pinot.spi.ingestion.batch.spec.SegmentGenerationJobSpec@3f9270ed
in 'string', line 1, column 1:
{
^
Cannot create property=inputFormat for JavaBean=org.apache.pinot.spi.ingestion.batch.spec.RecordReaderSpec@40e60ece
in 'string', line 16, column 25:
"recordReaderSpec": {
^
Unable to find property 'inputFormat' on class: org.apache.pinot.spi.ingestion.batch.spec.RecordReaderSpec
in 'string', line 17, column 22:
"inputFormat": "JSON",
^
in 'string', line 16, column 25:
"recordReaderSpec": {
^
at org.yaml.snakeyaml.constructor.Constructor$ConstructMapping.constructJavaBean2ndStep(Constructor.java:283) ~[pinot-all-1.1.0-jar-with-dependencies.jar:1.1.0-c2606742bbc4b15cff857eb0ffe7ec878ff181bb]
at org.yaml.snakeyaml.constructor.Constructor$ConstructMapping.construct(Constructor.java:169) ~[pinot-all-1.1.0-jar-with-dependencies.jar:1.1.0-c2606742bbc4b15cff857eb0ffe7ec878ff181bb]
at org.yaml.snakeyaml.constructor.Constructor$ConstructYamlObject.construct(Constructor.java:320) ~[pinot-all-1.1.0-jar-with-dependencies.jar:1.1.0-c2606742bbc4b15cff857eb0ffe7ec878ff181bb]
at org.yaml.snakeyaml.constructor.BaseConstructor.constructObjectNoCheck(BaseConstructor.java:264) ~[pinot-all-1.1.0-jar-with-dependencies.jar:1.1.0-c2606742bbc4b15cff857eb0ffe7ec878ff181bb]
at org.yaml.snakeyaml.constructor.BaseConstructor.constructObject(BaseConstructor.java:247) ~[pinot-all-1.1.0-jar-with-dependencies.jar:1.1.0-c2606742bbc4b15cff857eb0ffe7ec878ff181bb]
at org.yaml.snakeyaml.constructor.BaseConstructor.constructDocument(BaseConstructor.java:201) ~[pinot-all-1.1.0-jar-with-dependencies.jar:1.1.0-c2606742bbc4b15cff857eb0ffe7ec878ff181bb]
at org.yaml.snakeyaml.constructor.BaseConstructor.getSingleData(BaseConstructor.java:185) ~[pinot-all-1.1.0-jar-with-dependencies.jar:1.1.0-c2606742bbc4b15cff857eb0ffe7ec878ff181bb]
at org.yaml.snakeyaml.Yaml.loadFromReader(Yaml.java:493) ~[pinot-all-1.1.0-jar-with-dependencies.jar:1.1.0-c2606742bbc4b15cff857eb0ffe7ec878ff181bb]
at org.yaml.snakeyaml.Yaml.loadAs(Yaml.java:473) ~[pinot-all-1.1.0-jar-with-dependencies.jar:1.1.0-c2606742bbc4b15cff857eb0ffe7ec878ff181bb]
at org.apache.pinot.spi.ingestion.batch.IngestionJobLauncher.getSegmentGenerationJobSpec(IngestionJobLauncher.java:100) ~[pinot-all-1.1.0-jar-with-dependencies.jar:1.1.0-c2606742bbc4b15cff857eb0ffe7ec878ff181bb]
at org.apache.pinot.tools.admin.command.LaunchDataIngestionJobCommand.execute(LaunchDataIngestionJobCommand.java:112) [pinot-all-1.1.0-jar-with-dependencies.jar:1.1.0-c2606742bbc4b15cff857eb0ffe7ec878ff181bb]
at org.apache.pinot.tools.Command.call(Command.java:33) [pinot-all-1.1.0-jar-with-dependencies.jar:1.1.0-c2606742bbc4b15cff857eb0ffe7ec878ff181bb]
at org.apache.pinot.tools.Command.call(Command.java:29) [pinot-all-1.1.0-jar-with-dependencies.jar:1.1.0-c2606742bbc4b15cff857eb0ffe7ec878ff181bb]
at picocli.CommandLine.executeUserObject(CommandLine.java:1953) [pinot-all-1.1.0-jar-with-dependencies.jar:1.1.0-c2606742bbc4b15cff857eb0ffe7ec878ff181bb]
at picocli.CommandLine.access$1300(CommandLine.java:145) [pinot-all-1.1.0-jar-with-dependencies.jar:1.1.0-c2606742bbc4b15cff857eb0ffe7ec878ff181bb]
at picocli.CommandLine$RunLast.executeUserObjectOfLastSubcommandWithSameParent(CommandLine.java:2352) [pinot-all-1.1.0-jar-with-dependencies.jar:1.1.0-c2606742bbc4b15cff857eb0ffe7ec878ff181bb]
at picocli.CommandLine$RunLast.handle(CommandLine.java:2346) [pinot-all-1.1.0-jar-with-dependencies.jar:1.1.0-c2606742bbc4b15cff857eb0ffe7ec878ff181bb]
at picocli.CommandLine$RunLast.handle(CommandLine.java:2311) [pinot-all-1.1.0-jar-with-dependencies.jar:1.1.0-c2606742bbc4b15cff857eb0ffe7ec878ff181bb]
at picocli.CommandLine$AbstractParseResultHandler.execute(CommandLine.java:2179) [pinot-all-1.1.0-jar-with-dependencies.jar:1.1.0-c2606742bbc4b15cff857eb0ffe7ec878ff181bb]
at picocli.CommandLine.execute(CommandLine.java:2078) [pinot-all-1.1.0-jar-with-dependencies.jar:1.1.0-c2606742bbc4b15cff857eb0ffe7ec878ff181bb]
at org.apache.pinot.tools.admin.PinotAdministrator.execute(PinotAdministrator.java:171) [pinot-all-1.1.0-jar-with-dependencies.jar:1.1.0-c2606742bbc4b15cff857eb0ffe7ec878ff181bb]
at org.apache.pinot.tools.admin.PinotAdministrator.main(PinotAdministrator.java:202) [pinot-all-1.1.0-jar-with-dependencies.jar:1.1.0-c2606742bbc4b15cff857eb0ffe7ec878ff181bb]
Caused by: org.yaml.snakeyaml.constructor.ConstructorException: Cannot create property=inputFormat for JavaBean=org.apache.pinot.spi.ingestion.batch.spec.RecordReaderSpec@40e60ece
in 'string', line 16, column 25:
"recordReaderSpec": {
^
Unable to find property 'inputFormat' on class: org.apache.pinot.spi.ingestion.batch.spec.RecordReaderSpec
in 'string', line 17, column 22:
"inputFormat": "JSON",
^
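From the stack trace, the failing property is inputFormat, which does not exist on RecordReaderSpec; the record-reader section of a batch ingestion job spec uses dataFormat plus the reader class name instead. A minimal sketch of what that block could look like for JSON input, assuming the standard JSON record reader that ships with Pinot 1.1 (the rest of the spec stays unchanged):
"recordReaderSpec": {
"dataFormat": "json",
"className": "org.apache.pinot.plugin.inputformat.json.JSONRecordReader"
}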
전이섭
05/23/2025, 1:19 PM
df.write()
.format("pinot")
.option("controller", "localhost:9000")
.option("table", "transcript")
.mode(SaveMode.Append)
.save("/tmp/pinot-segments")
The segment files are created correctly in the /tmp/pinot-segments directory, but they are not uploaded to the actual Pinot cluster.
Does the Spark Pinot connector not support writing directly to Pinot? It seems like it only creates the files locally.
Thanks.
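If the connector run only produces local segment directories, one workaround is to push those segments to the controller afterwards. A sketch using the admin tool, assuming the table is named transcript and the controller runs on localhost:9000 (exact flags can vary by Pinot version, so check pinot-admin.sh UploadSegment -help):
bin/pinot-admin.sh UploadSegment \
  -controllerHost localhost \
  -controllerPort 9000 \
  -segmentDir /tmp/pinot-segments \
  -tableName transcript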
Jovan Vuković
05/23/2025, 4:26 PM
docker exec pinot-controller ./bin/pinot-admin.sh \
AddTable \
-tableConfigFile /config/orders/table.json \
-schemaFile /config/orders/schema.json \
-exec
docker exec pinot-controller ./bin/pinot-admin.sh \
AddTable \
-tableConfigFile /config/order_items_enriched/table.json \
-schemaFile /config/order_items_enriched/schema.json \
-exec
Here is the docker-compose file:
version: "3.8"
services:
mysql:
image: mysql/mysql-server:8.0.27
hostname: mysql
container_name: mysql
ports:
- "3306:3306"
environment:
- MYSQL_ROOT_PASSWORD=debezium
- MYSQL_USER=mysqluser
- MYSQL_PASSWORD=mysqlpw
volumes:
- ./mysql/mysql.cnf:/etc/mysql/conf.d
- ./mysql/mysql_bootstrap.sql:/docker-entrypoint-initdb.d/mysql_bootstrap.sql
- ./mysql/data:/var/lib/mysql-files/data
zookeeper:
image: confluentinc/cp-zookeeper:7.6.0
hostname: zookeeper
container_name: zookeeper
ports:
- "2181:2181"
environment:
ZOOKEEPER_CLIENT_PORT: 2181
ZOOKEEPER_TICK_TIME: 2000
healthcheck: { test: echo srvr | nc localhost 2181 }
kafka:
image: confluentinc/cp-kafka:7.6.0
hostname: kafka
container_name: kafka
depends_on:
[ zookeeper ]
ports:
- "29092:29092"
- "9092:9092"
- "9101:9101"
environment:
KAFKA_BROKER_ID: 1
KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
KAFKA_TOOLS_LOG4J_LOGLEVEL: ERROR
KAFKA_JMX_PORT: 9101
KAFKA_JMX_HOSTNAME: localhost
healthcheck: { test: nc -z localhost 9092, interval: 1s }
console:
hostname: console
container_name: console
image: docker.redpanda.com/redpandadata/console:latest
restart: on-failure
entrypoint: /bin/sh
command: -c "echo \"$$CONSOLE_CONFIG_FILE\" > /tmp/config.yml; /app/console"
environment:
CONFIG_FILEPATH: /tmp/config.yml
CONSOLE_CONFIG_FILE: |
server:
listenPort: 9080
kafka:
brokers: ["kafka:9092"]
schemaRegistry:
enabled: false
urls: ["http://schema-registry:8081"]
connect:
enabled: false
ports:
- "9080:9080"
depends_on:
- kafka
enrichment:
build: enrichment-kafka-streams
restart: unless-stopped
container_name: enrichment-kafka-streams
environment:
- QUARKUS_KAFKA_STREAMS_BOOTSTRAP_SERVERS=kafka:9092
- ORDERS_TOPIC=orders
- PRODUCTS_TOPIC=products
- ENRICHED_ORDERS_TOPIC=enriched-order-items
depends_on:
- kafka
pinot-controller:
image: apachepinot/pinot:1.1.0
command: "StartController -zkAddress zookeeper:2181"
container_name: "pinot-controller"
restart: unless-stopped
ports:
- "9000:9000"
depends_on:
- zookeeper
healthcheck:
test: [ "CMD-SHELL", "curl -f http://localhost:9000/health || exit 1" ]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
volumes:
- ./pinot/config:/config
pinot-broker:
image: apachepinot/pinot:1.1.0
command: "StartBroker -zkAddress zookeeper:2181"
restart: unless-stopped
container_name: "pinot-broker"
ports:
- "8099:8099"
depends_on:
pinot-controller:
condition: service_healthy
healthcheck:
test: [ "CMD-SHELL", "curl -f http://localhost:8099/health || exit 1" ]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
pinot-server:
image: apachepinot/pinot:1.1.0
container_name: "pinot-server"
command: "StartServer -zkAddress zookeeper:2181"
restart: unless-stopped
depends_on:
pinot-broker:
condition: service_healthy
volumes:
- ./pinot/data:/var/pinot/data
dashboard-enriched:
build: streamlit
restart: unless-stopped
container_name: dashboard-enriched
ports:
- "8502:8501"
depends_on:
pinot-controller:
condition: service_healthy
volumes:
- ./streamlit/app_enriched.py:/workdir/app.py
environment:
- PINOT_SERVER
- PINOT_PORT
orders-service:
build: orders-service
restart: unless-stopped
container_name: orders-service
depends_on:
- mysql
- kafka
environment:
- MYSQL_SERVER=mysql
- KAFKA_BROKER_HOSTNAME=kafka
- KAFKA_BROKER_PORT=9092
anmol
05/25/2025, 5:20 AM
STRING[]), but I'm facing segment consumption errors.
Here’s what I’ve done so far:
• Pinot version: 1.0+
• In Kafka, the message contains a field like:
"txn_notification_event_list": "[\"ZORBLAT\",\"QUIXANO\",\"FLUMPTION\",\"WIBBLEX\",\"SNOOFLE-VORTEX\",\"PLINKO-FUND-SWAP\",\"ZOOGLE\",\"FX-TRANSMOG\"]"
I tried the following transformConfig in the table config:
Config 1 :
{
"columnName": "txn_notification_event_list_array",
"transformFunction": "jsonExtractArray(txn_notification_event_list, '$', 'STRING')"
}
Config 2 :
"transformConfigs": [
{
"columnName": "txn_notification_event_list_array",
"transformFunction": "jsonFormatArray(JSONPARSE(txn_notification_event_list))"
}
]
And defined this in the schema:
{
"name": "txn_notification_event_list_array",
"dataType": "STRING",
"singleValueField": false
}
• The segment ends up in ERROR state on both servers. No segmentSize, no consumerInfo, and no errorInfo shown in the Pinot UI.
I've verified that the Kafka messages are correctly formatted as stringified arrays, but I’m not sure if Pinot is parsing this properly or if something’s misconfigured in my schema/table setup.
Would appreciate any help or pointers!
Thanks!
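One thing that may be worth trying, as a sketch only: extract the stringified array with jsonPathArray (an inbuilt ingestion transform function in recent Pinot releases, assuming it is enabled in your build) while keeping the multi-value schema definition you already have:
{
"columnName": "txn_notification_event_list_array",
"transformFunction": "jsonPathArray(txn_notification_event_list, '$')"
}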
Yeshwanth
05/26/2025, 11:36 AM
Monika reddy
05/26/2025, 4:21 PM
Georgi Varbanov
05/27/2025, 2:09 PM
mathew
05/28/2025, 10:58 AM
Fatlind Hoxha
05/28/2025, 1:49 PM
Luis P Fernandes
05/30/2025, 10:13 AM
{
"serversFailingToRespond": 0,
"serversUnparsableRespond": 0,
"_segmentToConsumingInfoMap": {
"ericsson_ran_enodebfunction__0__16__20250530T0357Z": [
{
"serverName": "Server_100.100.25.106_8098",
"consumerState": "CONSUMING",
"lastConsumedTimestamp": 1748580550862,
"partitionToOffsetMap": {
"0": "465530"
},
"partitionOffsetInfo": {
"currentOffsetsMap": {
"0": "465530"
},
"latestUpstreamOffsetMap": {
"0": "528919"
},
"recordsLagMap": {
"0": "63389"
},
"availabilityLagMsMap": {
"0": "8"
}
}
}
]
}
}
Table Debug: [
{
"tableName": "ericsson_ran_enodebfunction_REALTIME",
"numSegments": 5,
"numServers": 4,
"numBrokers": 2,
"segmentDebugInfos": [],
"serverDebugInfos": [],
"brokerDebugInfos": [
{
"brokerName": "Broker_100.100.123.30_8099",
"idealState": "ONLINE",
"externalView": "ONLINE"
},
{
"brokerName": "Broker_100.100.57.158_8099",
"idealState": "ONLINE",
"externalView": "ONLINE"
}
],
"ingestionStatus": {
"ingestionState": "HEALTHY",
"errorMessage": ""
},
"tableSize": {
"reportedSize": "9 MB",
"estimatedSize": "9 MB"
}
}
]
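For context, this output appears to come from the controller's consuming-segments and table debug REST endpoints; a sketch of how to pull the same information, assuming these are the paths exposed by your controller (double-check them in the controller Swagger UI):
curl http://<controller-host>:9000/tables/ericsson_ran_enodebfunction/consumingSegmentsInfo
curl http://<controller-host>:9000/debug/tables/ericsson_ran_enodebfunction?verbosity=1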
Rajat
05/30/2025, 10:39 AM
Consumed message: key = 844881521, value = {"bch_name": "SHOPIFY", "bch_code": "SH", "ar_awb_id": null, "ar_zone": null, "ch_id": 2993000, "ch_name": "Shopify", "ch_company_id": 18682, "ch_base_channel_code": "SH", "am_awb_code": null, "am_ofd1": null, "am_picked_up_date": null, "o_id": 848523000, "o_company_id": 18682, "o_channel_id": 2993000, "o_shipping_method": "SR", "o_sla": 48, "o_customer_city": "Fatehpur", "o_customer_state": "Uttar Pradesh", "o_customer_pincode": "212655", "o_payment_method": "cod", "o_net_total": "\u0002¾¼", "o_total": "\u0002Ñà", "o_created_at": "2025-05-29 10:22:22", "co_id": null, "co_mode": null, "a_id": null, "a_awb_code": null, "a_cod": null, "a_shipment_id": null, "a_applied_weight_amount": null, "a_charge_weight_amount": null, "s_id": 844881521, "s_order_id": 848523000, "s_company_id": 18682, "s_courier": null, "s_sr_courier_id": null, "s_awb": null, "s_awb_assign_date": null, "s_total": "\u0002Ñà", "s_status": 11, "s_shipped_date": null, "s_delivered_date": null, "s_rto_initiated_date": null, "s_rto_delivered_date": null, "s_created_at": "2025-05-29 10:22:33", "s_updated_at": "2025-05-29 10:22:35", "s_pickup_scheduled_date": null, "s_etd": null, "op": "u", "awbs_source": "Unknown", "couriers_source": "Unknown", "ts_ms_source_kafka": 1748494354367, "ts_ms_merged_kafka": 1748494355210}, partition = 3, offset = 15113790, timestamp = 2025-05-29T04:52:35.213Z
[4:03 PM] Kavya Ramaiah
Consumed message: key = 844881521, value = {"bch_name": null, "bch_code": null, "ar_awb_id": null, "ar_zone": null, "ch_id": null, "ch_name": null, "ch_company_id": null, "ch_base_channel_code": null, "am_awb_code": null, "am_ofd1": null, "am_picked_up_date": null, "o_id": null, "o_company_id": null, "o_channel_id": null, "o_shipping_method": null, "o_sla": null, "o_customer_city": null, "o_customer_state": null, "o_customer_pincode": null, "o_payment_method": null, "o_net_total": null, "o_total": null, "o_created_at": null, "co_id": null, "co_mode": null, "a_id": null, "a_awb_code": null, "a_cod": null, "a_shipment_id": null, "a_applied_weight_amount": null, "a_charge_weight_amount": null, "s_id": 844881521, "s_order_id": null, "s_company_id": null, "s_courier": null, "s_sr_courier_id": null, "s_awb": null, "s_awb_assign_date": null, "s_total": null, "s_status": null, "s_shipped_date": null, "s_delivered_date": null, "s_rto_initiated_date": null, "s_rto_delivered_date": null, "s_created_at": "2025-05-29 10:22:33", "s_updated_at": null, "s_pickup_scheduled_date": null, "s_etd": null, "op": "d", "awbs_source": null, "couriers_source": null, "ts_ms_source_kafka": 1748494359974, "ts_ms_merged_kafka": null}, partition = 3, offset = 15114045, timestamp = 2025-05-29T04:52:41.778Z
Ideally, Pinot should delete them, since I am using this tableConfig:
{
"tableName": "shipmentMerged_final",
"tableType": "REALTIME",
"segmentsConfig": {
"timeColumnName": "s_created_at",
"timeType": "DAYS",
"replication": "2",
"retentionTimeUnit": "DAYS",
"retentionTimeValue": "3",
"minimizeDataMovement": false
},
"tableIndexConfig": {
"loadMode": "MMAP",
"nullHandlingEnabled": true,
"createInvertedIndexDuringSegmentGeneration": true,
"invertedIndexColumns": [
"o_customer_city",
"o_customer_pincode",
"o_customer_state",
"s_company_id",
"s_courier",
"o_shipping_method",
"o_payment_method",
"s_status",
"s_sr_courier_id"
],
"noDictionaryColumns": [
"s_etd",
"s_shipped_date",
"a_awb_code",
"s_order_id",
"s_id",
"a_id",
"o_id",
"a_shipment_id",
"s_awb_assign_date",
"s_delivered_date",
"s_awb",
"s_rto_initiated_date",
"s_pickup_scheduled_date",
"a_applied_weight_amount_double",
"o_total_double",
"o_created_at",
"s_updated_at",
"o_net_total_double",
"a_charge_weight_amount_double",
"s_rto_delivered_date",
"ar_awb_id",
"am_picked_up_date",
"am_ofd1",
"s_total_double",
"ts_ms_merged_kafka"
],
"bloomFilterColumns": [
"s_company_id"
],
"sortedColumn": [
"s_company_id"
],
"varLengthDictionaryColumns": [
"o_customer_state",
"ar_zone",
"s_courier",
"o_customer_pincode",
"o_payment_method",
"o_shipping_method",
"o_customer_city"
]
},
"ingestionConfig": {
"streamIngestionConfig": {
"streamConfigMaps": [
{
"streamType": "kafka",
"stream.kafka.consumer.type": "lowlevel",
"stream.kafka.decoder.prop.format": "AVRO",
"stream.kafka.consumer.group.id": "shipmentMerged-consumer-group",
"stream.kafka.decoder.prop.schema.registry.rest.url": "<http://internal-adfbe53cf874c419b80ef29810ee56b7-1168949678.ap-south-1.elb.amazonaws.com:8081>",
"stream.kafka.topic.name": "pinot_d0_d2_realtime",
"stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
"stream.kafka.decoder.class.name": "org.apache.pinot.plugin.inputformat.avro.confluent.KafkaConfluentSchemaRegistryAvroMessageDecoder",
"stream.kafka.broker.list": "<http://internal-a01a7420dce764739aecf132fdd316d8-1810051101.ap-south-1.elb.amazonaws.com:9094|internal-a01a7420dce764739aecf132fdd316d8-1810051101.ap-south-1.elb.amazonaws.com:9094>",
"stream.kafka.schema.registry.url": "<http://internal-adfbe53cf874c419b80ef29810ee56b7-1168949678.ap-south-1.elb.amazonaws.com:8081>",
"realtime.segment.flush.threshold.time": "24h",
"realtime.segment.flush.threshold.segment.size": "150M",
"stream.kafka.consumer.prop.auto.offset.reset": "smallest"
}
]
},
"transformConfigs": [
{
"columnName": "ingestion_ts",
"transformFunction": "now()"
},
{
"columnName": "is_deleted",
"transformFunction": "compareFields(op, 'd')"
},
{
"columnName": "s_total_double",
"transformFunction": "bytesToDouble(o_net_total, 10, 2)"
},
{
"columnName": "o_net_total_double",
"transformFunction": "bytesToDouble(o_net_total, 10, 2)"
},
{
"columnName": "o_total_double",
"transformFunction": "bytesToDouble(o_total, 10, 2)"
},
{
"columnName": "a_applied_weight_amount_double",
"transformFunction": "bytesToDouble(a_applied_weight_amount, 10, 2)"
},
{
"columnName": "a_charge_weight_amount_double",
"transformFunction": "bytesToDouble(a_charge_weight_amount, 10, 2)"
}
]
},
"routing": {
"instanceSelectorType": "strictReplicaGroup"
},
"upsertConfig": {
"mode": "FULL",
"consistencyMode": "SYNC",
"comparisonColumns": [
"ts_ms_source_kafka"
],
"deleteRecordColumn": "is_deleted",
"dropOutOfOrderRecord": true
},
"tenants": {},
"metadata": {
"customConfigs": {}
}
}
Here I am using the upsert deleteRecordColumn config to delete records once is_deleted is true, which is the case when op = 'd'.
Why did this happen? Can anyone suggest where the loophole might be?
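One way to narrow this down, as a sketch: query the table with the skipUpsert option so upsert resolution is bypassed, and check whether both the op = 'u' and op = 'd' records landed and what their is_deleted and ts_ms_source_kafka values are (assuming is_deleted is declared in the schema):
SET skipUpsert=true;
SELECT s_id, op, is_deleted, ts_ms_source_kafka, s_created_at FROM shipmentMerged_final WHERE s_id = 844881521;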
Rajat
05/30/2025, 10:42 AM
AG
06/02/2025, 5:36 AM
helm repo add pinot https://raw.githubusercontent.com/apache/pinot/master/helm
kubectl create ns pinot-quickstart
helm install pinot pinot/pinot \
-n pinot-quickstart \
--set cluster.name=pinot \
--set server.replicaCount=2
but
https://raw.githubusercontent.com/apache/pinot/master/helm
returns 404, where is the right path?
Vipin Rohilla
06/02/2025, 1:24 PM
2025/06/02 18:28:55.712 INFO [HelixTaskExecutor] [ZkClient-EventThread-119-xxxxxxxxx:2181,xxxxxxxx:2181,xxxxxxx:2181] Submit task: d81d401a-b902-4ca4-bcf9-ade6675e0e44 to pool: java.util.concurrent.ThreadPoolExecutor@1339f332[Running, pool size = 40, active threads = 40, queued tasks = 1373, completed tasks = 10622]
Georgi Varbanov
06/03/2025, 7:27 AM
AG
06/03/2025, 8:45 AM
tableIndexConfig?
Georgi Varbanov
06/03/2025, 10:17 AM
Dong Zhou
06/03/2025, 11:19 PM
Georgi Varbanov
06/04/2025, 7:29 AM
prasanna
06/04/2025, 8:15 AM
coco
06/05/2025, 6:34 AM
Rajat
06/05/2025, 6:45 AM
Gaurav
06/06/2025, 9:31 AM
Georgi Varbanov
06/06/2025, 9:45 AM
Vipin Rohilla
06/09/2025, 5:14 PM
Eddie Simeon
06/09/2025, 7:06 PM