Doaa Deeb
03/04/2025, 1:20 AM
druid.storage.type to s3?
Maytas Monsereenusorn
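For context on the question above: switching Druid's deep storage to S3 usually involves a few related runtime properties, not just druid.storage.type. A hedged sketch; the bucket name and key prefixes are placeholders, and credentials are assumed to come from the environment or an instance profile:

```properties
# common.runtime.properties (sketch; bucket/prefix values are placeholders)
druid.extensions.loadList=["druid-s3-extensions"]
druid.storage.type=s3
druid.storage.bucket=my-druid-bucket
druid.storage.baseKey=druid/segments
# Task logs can also be sent to S3:
druid.indexer.logs.type=s3
druid.indexer.logs.s3Bucket=my-druid-bucket
druid.indexer.logs.s3Prefix=druid/indexing-logs
```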
03/04/2025, 6:10 PM
Kevin C.S
03/07/2025, 4:32 AM
Алексей Ясинский
03/11/2025, 10:56 AM
Hagen Rother
03/12/2025, 11:11 AM
segmentGranularity parameter? Since the meta table already has start and end as timestamps, such segments should just work ™️, but I don't see how I could create one.
Maytas Monsereenusorn
03/13/2025, 8:40 PM
Sam
03/14/2025, 3:47 PM
attributes, and it is a JSON type. The JSON value has three keys that hold an identical long string, for instance:
{
"key1": "same long string",
"key2": "same long string",
"key3": "same long string"
}
Would Druid only store the same long string once instead of three times in the data store, to save space? My assumption is yes, since Druid should use a bitmap index to convert the string into bits (https://druid.apache.org/docs/latest/design/segments#segment-file-structure), but I am not sure whether that also holds for nested columns.
Corwin Lester
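On the storage question above: Druid string dimension columns are dictionary-encoded, so each distinct value is stored once in the column's dictionary and rows hold integer ids; bitmap indexes are built per dictionary entry. A toy sketch of the dictionary-encoding idea (illustrative only, not Druid's actual code):

```python
def dict_encode(values):
    """Toy dictionary encoding: store each distinct value once; rows keep integer ids."""
    dictionary = []   # distinct values, each stored a single time
    ids = {}          # value -> position in the dictionary
    encoded = []      # per-row ids
    for v in values:
        if v not in ids:
            ids[v] = len(dictionary)
            dictionary.append(v)
        encoded.append(ids[v])
    return dictionary, encoded

long_value = "same long string"
dictionary, encoded = dict_encode([long_value, long_value, long_value])
print(dictionary)  # ['same long string'] -> stored once
print(encoded)     # [0, 0, 0]
```

Whether one value dictionary is shared across the fields of a single nested (JSON) column is version-dependent and worth confirming against the nested-columns docs for your release.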
03/18/2025, 3:08 PM
Andrea Licata
03/21/2025, 3:03 PM
Sivakumar Karthikesan
03/23/2025, 9:25 AM
SELECT tenantId, systemId, TIMESTAMP_TO_MILLIS(__time) AS "timestamp", SUM(iops_pref_pct) AS iops_pref_pct
FROM (
  SELECT DISTINCT(__time), *
  FROM "xyzdatasource"
  WHERE systemId = 'aaajjjjccccc'
    AND __time >= MILLIS_TO_TIMESTAMP(1742252400000)
    AND __time <= MILLIS_TO_TIMESTAMP(1742338800000)
)
GROUP BY __time, tenantId, systemId
ORDER BY __time ASC
Utkarsh Chaturvedi
03/24/2025, 8:44 AM
jose abadi
04/01/2025, 3:21 AM
Abdullah Ömer Yamaç
04/02/2025, 7:13 PM
{
"queryType": "scan",
"dataSource": "mobility",
"intervals": [
"2024-01-01T00:00:00.000Z/2025-01-01T00:00:00.000Z"
],
"columns": [
"advertiserid"
],
"filter": {
"type": "selector",
"dimension": "advertiserid",
"value": "0104c0fe-b9b0-6e03-1b7f-f186d7f16b3e"
}
}
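Regarding the scan query above: a year-long interval means every segment overlapping the interval is scanned, so wall time scales with segment count even when the filter matches few rows. Two standard scan-query fields that bound the response (values here are illustrative) are resultFormat and limit:

```json
{
  "queryType": "scan",
  "dataSource": "mobility",
  "intervals": ["2024-01-01T00:00:00.000Z/2025-01-01T00:00:00.000Z"],
  "columns": ["advertiserid"],
  "filter": {
    "type": "selector",
    "dimension": "advertiserid",
    "value": "0104c0fe-b9b0-6e03-1b7f-f186d7f16b3e"
  },
  "resultFormat": "compactedList",
  "limit": 1000
}
```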
Is it normal to take this much time?
HEPBO3AH
04/03/2025, 9:55 PM
AR
04/06/2025, 3:00 PM
Does druid.lookup.namespace.numBufferedEntries apply per lookup in the "cachedNamespace" type? Meaning, would it create a buffer with 100K entries for each lookup?
Single Cached Lookup
There isn't much info on how to use this. Would the Lookup APIs work for this type of lookup as well?
Finally, the max heap size for the historical process is specified as 24GB in the documentation. Is this a hard limit? As we plan to have some large lookups, can we set the max heap size > 24GB?
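For reference on the cachedNamespace questions above: a single cachedNamespace lookup is defined with a spec along these lines (the URI, poll period, and timeout here are placeholders), and it is managed through the same lookup configuration API as other lookup types:

```json
{
  "type": "cachedNamespace",
  "extractionNamespace": {
    "type": "uri",
    "uri": "s3://example-bucket/lookups/mapping.json",
    "namespaceParseSpec": { "format": "simpleJson" },
    "pollPeriod": "PT10M"
  },
  "firstCacheTimeout": 120000
}
```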
Thanks,
AR.
akshat
04/07/2025, 4:30 AM
AR
04/08/2025, 4:45 AM
PHP Dev
04/09/2025, 12:33 PM
Sivakumar Karthikesan
04/10/2025, 7:10 PM
Master Chatchai
04/15/2025, 6:52 PM
ahmed grati
04/23/2025, 4:45 PM
toolchestMergeBuffersHolders and mergingQueryRunnerMergeBuffersHolders?
Pooja Shrivastava
04/24/2025, 2:41 AM
Pooja Shrivastava
04/24/2025, 2:44 AM
## Druid Emitting Metrics. ref: <https://druid.apache.org/docs/latest/configuration/index.html#emitting-metrics>
druid_emitter: http
#druid_emitter_composing_emitters: '["prometheus","kafka"]'
#druid_monitoring_emissionPeriod: PT1M
#druid_emitter_prometheus_strategy: "exporter"
#druid_emitter_prometheus_port: "9200"
#druid_emitter_logging_logLevel: debug
druid_emitter_http_recipientBaseUrl: <http://iptv-druid-exporter.prd.adl.internal/metrics>
#druid_emitter_http_recipientBaseUrl: <http://druid_exporter_url>:druid_exporter_port/druid
#kafka-emitter config
druid_emitter_kafka_bootstrap_servers: "10.X.X.X:9092,10.X.X.X:9092,10.X.X.X:9092"
druid_kafka_security_protocol: "SASL_PLAINTEXT" # Use "SASL_SSL" if you also need TLS
druid_emitter_kafka_metric_topic: druid-metric
druid_emitter_kafka_alert_topic: druid-alert
druid_emitter_kafka_request_topic: druid-query
druid_emitter_kafka_clusterName: prd-druid
# SASL configuration
druid_emitter_kafka_sasl_mechanism: "PLAIN"
druid_emitter_kafka_sasl_jaas_config: "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"appuser\" password=\"uJ5551Ax\";"
#druid_emitter_kafka_sasl_jaas_config: "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"admin\" password=\"8I444444z\";"
druid_request_logging_setMDC: "true"
druid_request_logging_setContextMDC: "true"
druid_request_logging_nativeQueryLogger: "true"
druid_request_logging_sqlQueryLogger: "true"
#changing logging Details
#DRUID_LOG_DIR: /var/log/
org_apache_druid_jetty_RequestLog: DEBUG
druid_startup_logging_logProperties: "true"
druid_request_logging_type: emitter #slf4j
druid_request_logging_feed: feed
#druid_request_logging_type: file #slf4j
druid_request_logging_dir: /opt/druid/log/request/
druid_request_logging_durationToRetain: P2D
druid_request_logging_filePattern: "yyyy-MM-dd'.log'"
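A note on the config above: druid_emitter is set to http while Kafka emitter properties are also present, so the Kafka settings are currently unused. To emit through both, the composing emitter (commented out above) would need to be enabled, e.g. in the same env-var style:

```properties
druid_emitter: composing
druid_emitter_composing_emitters: '["http","kafka"]'
```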
Pooja Shrivastava
04/24/2025, 2:45 AM
Pooja Shrivastava
04/24/2025, 3:03 AM
Udit Sharma
04/30/2025, 4:34 AM
AR
04/30/2025, 1:20 PM
/druid/v2/sql/task.
Sometimes we see a "504 Gateway Timeout" exception; other times we see a "Task [] already exists" exception.
We can see the TimeoutException in the router logs as well, but none of the other services' logs show an issue that would point to why this is happening.
Can someone suggest what the issue could be?
Druid version: 27.0.0
Thanks,
AR.
Cristina Munteanu
04/30/2025, 7:57 PM
JRob
05/01/2025, 6:46 PM
Nick M
05/07/2025, 12:01 PM