# troubleshooting
  • Keith Byrd
    03/03/2025, 6:01 PM
    set the channel topic: Druid ingestion not working.
  • Swagat
    03/05/2025, 1:40 PM
    Hello!! Do we have support for Delta Lake tables on GCS, or is it only for AWS?
  • Mohit Dhingra
    03/06/2025, 8:44 AM
    Hi Team, I have configured the Prometheus emitter in Druid. All metrics are working fine except for taskSlot. I can see in the logs that the metric monitors are loading, but the taskSlot metrics are not appearing. Can someone suggest a solution?
    kubectl logs druid-overlord-2 -c druid-overlord | grep org.apache.druid.server.metrics
    {"instant":{"epochSecond":1739883011,"nanoOfSecond":51244171},"thread":"main","level":"DEBUG","loggerName":"org.apache.druid.guice.JsonConfigurator","message":"Loaded class[class org.apache.druid.server.metrics.MonitorsConfig] from props[druid.monitoring.] as [MonitorsConfig{monitors=[class org.apache.druid.java.util.metrics.JvmMonitor, class org.apache.druid.server.metrics.TaskCountStatsMonitor, class org.apache.druid.server.metrics.TaskSlotCountStatsMonitor]}]","endOfBatch":false,"loggerFqcn":"org.apache.logging.slf4j.Log4jLogger","contextMap":{},"threadId":1,"threadPriority":5}
    {"instant":{"epochSecond":1739883012,"nanoOfSecond":60148915},"thread":"main","level":"INFO","loggerName":"org.apache.druid.server.metrics.MetricsModule","message":"Loaded 5 monitors: org.apache.druid.java.util.metrics.JvmMonitor, org.apache.druid.server.metrics.TaskCountStatsMonitor, org.apache.druid.server.metrics.TaskSlotCountStatsMonitor, org.apache.druid.curator.DruidConnectionStateListener, org.apache.druid.server.initialization.jetty.JettyServerModule$JettyMonitor","endOfBatch":false,"loggerFqcn":"org.apache.logging.slf4j.Log4jLogger","contextMap":{},"threadId":1,"threadPriority":5}
  • Stefanos Pliakos
    03/06/2025, 12:18 PM
    Hi team, can I ask a question concerning lookups? We have a new cachedNamespace lookup with keys like "1382d5fe-5f52-42ea-914c-96a348b319c6" (more or less fixed size) and values that are comma-separated lists of brands, e.g. "AERE,GRTR,RTRT,…,FGKJW" (up to about 165 characters long; one brand, for example, is AERE). The lookup doesn't appear to get loaded, though: we have 300,000 rows but only 3 are loaded. When we define the lookup as a static map it is loaded correctly. What could be the issue with cachedNamespace lookups? I don't see any errors in the logs.
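    For reference, a minimal cachedNamespace spec of the kind described might look like the sketch below; the URI, poll period, and column names are assumptions. One thing worth checking is the parse spec: with a csv format, the commas inside the value field need quoting, so a tsv (or customJson) source avoids the ambiguity.
    {
      "type": "cachedNamespace",
      "extractionNamespace": {
        "type": "uri",
        "uri": "s3://example-bucket/brands-lookup.tsv",
        "namespaceParseSpec": {
          "format": "tsv",
          "columns": ["key", "value"]
        },
        "pollPeriod": "PT5M"
      },
      "firstCacheTimeout": 0
    }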
  • Dinesh
    03/13/2025, 5:52 AM
    Hi everyone, recently we ran into a problem: for a particular datasource, segment loading/dropping is stuck, and queries sometimes work and sometimes don't, even though the data is fully available. Below is the error shown in the coordinator logs:
    2025-03-13T05:46:17,198 ERROR [Master-PeonExec--0] org.apache.druid.server.coordinator.loading.HttpLoadQueuePeon - Server[http://10.254.14.11:8083] failed segment[DATASOURCE_2025-03-09T00:00:00.000Z_2025-03-10T00:00:00.000Z_2025-03-13T05:41:00.054Z] request[LOAD] with cause [org.apache.druid.java.util.common.ISE: byte size 580,706 exceeds 524,288].
  • ymcao
    03/17/2025, 8:22 AM
    Hi team, we have this problem: the Druid Coordinator stops assigning segments after a ZooKeeper restart (https://github.com/apache/druid/issues/17807).
    1. ZooKeeper restarted due to a node crash at 05:50 UTC.
    2. Coordinator logs:
    2025-03-17T05:50:23,080 INFO [LeaderSelector[/druid/coordinator/_COORDINATOR]] org.apache.druid.server.coordinator.DruidCoordinator - I am no longer the leader...
    2025-03-17T05:50:24,370 INFO [LeaderSelector[/druid/coordinator/_COORDINATOR]] org.apache.druid.server.coordinator.DruidCoordinator - I am the leader of the coordinators, all must bow! Starting coordination in [PT30S].
    3. There are no new segment assignments afterwards, and no error logs, only the following:
    Polled and found 201 rule(s) for 193 datasource(s).
    I have attempted the following approaches to recover from the issue:
    1. Restarted the coordinator leader and follower, but this did not help.
    2. Restarted the ZooKeeper follower, but this did not help.
    3. Restarted the ZooKeeper leader, which resolved the issue.
    Does anyone have a similar experience? Thanks, and I look forward to your help 🙏
  • Zeyu Chen
    03/18/2025, 1:54 AM
    Zombie JDBC threads in the broker? Hi folks, I am debugging some JDBC queries on our druid-broker and came across some unexpected jstack traces; I need some help interpreting the output. Thanks. Based on a limited reading of the Avatica server-side query execution code, for every JDBC query in the DruidMeta.execute/fetch phase we should expect to see 2 threads in the broker JVM:
    • a JDBCQueryExecutor-connection-XXX thread running the query
    • a qtp Jetty thread waiting on a future from the JDBC thread
    From time to time, I see a long-running (9+ minutes) JDBC thread without a corresponding qtp Jetty thread. The JDBC thread has a minimal waiting stack like the following:
    "JDBCQueryExecutor-connection-2ee5051c-5420-426b-b9b1-2c9b4e548b83-statement-1" #34085356 daemon prio=5 os_prio=0 tid=0x00007f9bf40da800 nid=0x24cb52 waiting on condition [0x00007f994b9fe000]
       java.lang.Thread.State: WAITING (parking)
        at sun.misc.Unsafe.park(Native Method)
        - parking to wait for  <0x00007fafa0189278> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2044)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:750)
    What could be the cause of this? Is this normal?
  • PHP Dev
    03/18/2025, 9:46 PM
    Hi All, I have a problem with native queries after upgrading to 32.0.0. It seems that a filtered doubleSum aggregator now returns NULL instead of 0 when no rows match the filter, because of SQL-compliant null handling mode. For example:
    {
      "type": "filtered",
      "name": "FilteredAggregator",
      "filter": {
        "type": "selector",
        "dimension": "event_id",
        "value": "aaaaaa"
      },
      "aggregator": {
        "name": "FilteredAggregator",
        "type": "doubleSum",
        "fieldName": "event_value"
      }
    }
    And it seems that the druid.generic.useDefaultValueForNull option, which could help in previous versions, is no longer available. For SQL queries we can use NVL, but how can I fix this for native queries? Please help.
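    One possible workaround in the native query itself is to coalesce the aggregator's output in a post-aggregation. A minimal sketch, assuming the expression post-aggregator is available in 32.0.0; the output name below is made up:
    {
      "postAggregations": [
        {
          "type": "expression",
          "name": "FilteredAggregatorOrZero",
          "expression": "nvl(\"FilteredAggregator\", 0)"
        }
      ]
    }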
  • Mahesha Subrahamanya
    03/19/2025, 4:16 PM
    Hello Team, do you have any more information about these error codes? We were running into an error (WorkerRpcFailed) with SQL ingestion, but the Druid docs don't give much information. Please let me know if you have come across this in your use case. Thanks. The MSQ engine produces this error code: https://druid.apache.org/docs/latest/multi-stage-query/reference/#error-codes WorkerRpcFailed: A remote procedure call to a worker task failed and could not recover. workerTaskId: the ID of the worker task.
  • PHP Dev
    03/20/2025, 2:14 PM
    Hi All, I see that the autocompaction mechanism has been reworked in 31.0.0 and we can now have autocompaction task supervisors.
    1. Can I have more than one autocompaction task supervisor for one dataSource? For example, I need query granularity PT1H and segment granularity P1W for the last month; query granularity P1D and segment granularity P1W for data older than 1 month but not older than the previous year; and query granularity P1D, segment granularity P1Y, and fewer dimensions for data older than the previous year.
    2. I have autocompaction configured through the web console but can't see related supervisors. Will it work in parallel with an autocompaction supervisor that I submit for the same datasource?
  • Noor
    03/21/2025, 6:47 AM
    https://apachedruidworkspace.slack.com/archives/C04Q0047B4M/p1737543116780219
  • Sivakumar Karthikesan
    03/23/2025, 9:26 AM
    Team, in one of our prod clusters we are seeing a latency issue: it takes 4 to 5 s to get the result. Other datasources work fine and don't have any latency. Any suggestions, please?
    
    select tenantId, systemId, TIMESTAMP_TO_MILLIS(__time) as "timestamp", sum(iops_pref_pct) as iops_pref_pct
    from (
      select DISTINCT(__time), *
      from "xyzdatasource"
      where systemId = 'aaajjjjccccc'
        and __time >= MILLIS_TO_TIMESTAMP(1742252400000)
        and __time <= MILLIS_TO_TIMESTAMP(1742338800000)
    )
    group by __time, tenantId, systemId
    order by __time asc
  • Utkarsh Chaturvedi
    03/24/2025, 8:44 AM
    Hi team. We're trying to test a Druid hot/cold tiering setup with 2 hot historical nodes and 1 cold node, using the same broker for all tiers. Currently we're seeing strange behaviour where only 1 hot historical node is responding to queries. This seems incorrect, as both hot-tier historicals have data to serve. Can anyone help out with this, please?
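    For context, a hot/cold split usually comes down to the tier name on each historical plus tiered load rules on the datasource; a rough sketch under those assumptions (the tier names and replica counts below are illustrative, not the poster's actual config):
    # hot historicals, runtime.properties
    druid.server.tier=hot
    # cold historical, runtime.properties
    druid.server.tier=cold

    # retention (load) rules for the datasource, set via the console or the coordinator rules API
    [
      { "type": "loadByPeriod", "period": "P1M", "tieredReplicants": { "hot": 2 } },
      { "type": "loadForever", "tieredReplicants": { "cold": 1 } }
    ]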
  • Julian Reyes
    03/25/2025, 1:22 PM
    I have a question: what does one need to do if we change the Kinesis stream Druid consumes from? E.g. Druid consumes from a Kinesis stream in account A and will now consume from account B (both streams have the same name). However, I see there are issues, as it thinks it should keep consuming from the previous stream.
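    If the stored sequence numbers are what keeps it pointed at the old stream, a hard reset of the supervisor discards the checkpointed offsets so it starts fresh against the new stream (note this can skip or re-read data). A sketch of the call, with placeholder host and supervisor id:
    curl -X POST http://<overlord-host>:8081/druid/indexer/v1/supervisor/<supervisor-id>/reset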
  • strikers
    03/27/2025, 6:27 AM
    Hi guys, I want the Druid service to use Pure Storage FlashBlade (S3) as deep storage, but I get the following error:
    Caused by: java.lang.RuntimeException: java.io.IOException: com.amazonaws.services.s3.model.AmazonS3Exception: A header you provided implies functionality that is not implemented. (Service: Amazon S3; Status Code: 501; Error Code: NotImplemented; Request ID: null; S3 Extended Request ID: null; Proxy: null), S3 Extended Request ID: null
    When Pure Storage is used as deep storage, Druid can read segments from S3 (get) and save them to the local directory, but it cannot write segments to Pure S3 (put). When I point it at MinIO (S3) instead of Pure Storage without changing the configs, Druid can write logs and segments into the bucket. But with the same configs, it cannot write segments and logs into Pure Storage. S3 configs:
    druid_storage_type: s3
    druid_storage_baseKey: warehouse
    druid_storage_bucket: druid
    druid_storage_storageDirectory: s3a://druid/warehouse/
    
    druid_indexer_logs_type: s3
    druid_indexer_logs_directory: s3a://druid/logs/
    druid_indexer_logs_s3Bucket: druid
    druid_indexer_logs_s3Prefix: logs
    
    druid_storage_useS3aSchema: "true"
    
    druid_s3_disableChunkedEncoding: "true"
    
    druid_s3_accessKey: "xxxx"
    druid_s3_secretKey: "yyyy"
    druid_s3_protocol: http
    druid_s3_enablePathStyleAccess: "true"
    druid_s3_endpoint_signingRegion: us-east-1
    druid_s3_endpoint_url: http://zzz.com
    
    druid_s3_forceGlobalBucketAccessEnabled: "true"
    Can you help me figure out how to write Druid data to Pure Storage (S3) using the same S3 protocol as MinIO?
  • Utkarsh Chaturvedi
    04/01/2025, 6:24 AM
    Hi guys, our Druid cluster is hosted on K8s using Helm. Restarting a Druid historical takes a lot of time, so we have set the liveness and readiness probe initial delays significantly high. But because they are so high, even after all segments are loaded the pod does not go into the ready state, which prevents the next historical pod from going up. Is there any way the initial probes can be delayed until segments are loaded, and only then mark the pod as ready?
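    One pattern that may fit here, sketched under the assumption that the historical serves HTTP on port 8083: use a Kubernetes startupProbe against the historical readiness API, which returns 200 only once startup segment loading has finished, so the readiness check (and the rollout of the next pod) doesn't matter until then. The thresholds below are illustrative:
    startupProbe:
      httpGet:
        path: /druid/historical/v1/readiness
        port: 8083
      periodSeconds: 30
      failureThreshold: 240   # tolerate up to ~2 hours of segment loading
    readinessProbe:
      httpGet:
        path: /druid/historical/v1/readiness
        port: 8083
      periodSeconds: 10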
  • Krishna
    04/02/2025, 4:16 AM
    Hi, we are running into "too many open files" while running groupBy v2 queries. I have seen this issue open for a long time: https://github.com/apache/druid/issues/11558. Can someone provide a workaround to mitigate this issue?
    QueryInterruptedException{msg=java.lang.RuntimeException: java.io.FileNotFoundException: /tmp/druid-groupBy-ef4e7e51-ea6b-48be-8d40-08fd92fb64c6_f78e666b-0d64-4c01-8b21-5f16487d58bd/00271801.tmp (Too many open files) Apache druid
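    As a first diagnostic step (a sketch, not a fix; the PID and limit value are placeholders), it can help to confirm how close the querying process is to its file-descriptor limit and raise that limit at the OS/container level:
    # open file descriptors vs. the process limit
    ls /proc/<druid-pid>/fd | wc -l
    grep "open files" /proc/<druid-pid>/limits

    # raise the limit for the Druid service, e.g. LimitNOFILE=65536 in a systemd unit
    # or the equivalent ulimit setting in the container spec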
  • Vincent Lao
    04/02/2025, 9:40 AM
    Hi guys, I'm experimenting with the auto-kill segments feature in Druid v30 (docker image apache/druid:30.0.1), but it doesn't seem to be working for me:
    1. I uploaded duplicated data (the wikipedia sample data), where only the latest load has active = true.
    2. Enabled the auto-kill segments feature by manually adding the following properties to the coordinator's runtime.properties and restarting the container:
    -- Directly adding into runtime.properties & restarting the container
    echo "druid.coordinator.kill.on=true" >> /opt/druid/conf/druid/cluster/master/coordinator-overlord/runtime.properties
    echo "druid.coordinator.killAllDataSources=true" >> /opt/druid/conf/druid/cluster/master/coordinator-overlord/runtime.properties
    echo "druid.coordinator.kill.ignoreDurationToRetain=true" >> /opt/druid/conf/druid/cluster/master/coordinator-overlord/runtime.properties
    3. Check log for kill tasks
    -- LOG to verify autokill config
    2025-03-31 12:16:13 2025-03-31T11:16:13,396 INFO [main] org.apache.druid.cli.CliCoordinator - * druid.coordinator.kill.bufferPeriod: PT0S
    2025-03-31 12:16:13 2025-03-31T11:16:13,396 INFO [main] org.apache.druid.cli.CliCoordinator - * druid.coordinator.kill.ignoreDurationToRetain: true
    2025-03-31 12:16:13 2025-03-31T11:16:13,396 INFO [main] org.apache.druid.cli.CliCoordinator - * druid.coordinator.kill.on: true
    2025-03-31 12:16:13 2025-03-31T11:16:13,397 INFO [main] org.apache.druid.cli.CliCoordinator - * druid.coordinator.killAllDataSources: true
    
    -- LOG to verify Kill task is scheduled
    2025-03-31 12:16:20 2025-03-31T11:16:20,099 INFO [LeaderSelector[/druid/coordinator/_COORDINATOR]] org.apache.druid.server.coordinator.duty.KillUnusedSegments - druid.coordinator.kill.durationToRetain[PT7776000S] will be ignored when discovering segments to kill because druid.coordinator.kill.ignoreDurationToRetain is set to true.
    2025-03-31 12:16:20 2025-03-31T11:16:20,100 INFO [LeaderSelector[/druid/coordinator/_COORDINATOR]] org.apache.druid.server.coordinator.duty.KillUnusedSegments - Kill task scheduling enabled with period[PT1800S], durationToRetain[IGNORING], bufferPeriod[PT0S], maxSegmentsToKill[100]
    4. It appears Druid is clearing the metadata (almost instantly), as it shows "No segments to load/drop", and sys.segments now only shows the latest uploaded segment.
    5. But the files are not removed from deep storage (currently a local directory).
    6. However, manually triggering a kill task from the UI removes the files successfully, and I can see a kill task in the "Tasks" tab.
  • Vincent Lao
    04/02/2025, 9:45 AM
    Screenshot 2025-04-02 at 10.43.27.png
  • Subin C Mohan
    04/07/2025, 12:23 PM
    Hi all, I am currently trying to upgrade from Druid 26 to 27. I am using druid-operator in my Kubernetes cluster. In version 26 the ingestion tasks work properly without any issues; after upgrading they fail. When I compared the logs of the ingestion job pod for 26 and 27, I see this warning:
    2025-04-07T05:06:03,650 WARN [PeriodicMetricReader-1] oshi.software.os.linux.LinuxOperatingSystem - Did not find udev library in operating system. Some features may not work.
    2025-04-07T05:06:03,687 WARN [PeriodicMetricReader-1] oshi.hardware.platform.linux.LinuxHWDiskStore - Disk Store information requires libudev, which is not present.
    I referred to the release notes and saw that SysMonitor is deprecated from version 27 (https://github.com/apache/druid/issues/14761). Based on this I tried several ways to fix it, adding monitors and extensions to my config, but none of these worked. Can anyone please help?
  • 이세찬
    04/09/2025, 8:12 AM
    Hi. I'm trying to deploy Druid's historical service, but it's not deploying because of an error like java.io.IOException: Packet len 1126198 is out of range! So I tried adding a JVM option, -Djute.maxbuffer=1048575, to prevent it from exceeding the range, but this option doesn't seem to work either. What else should I look at? For reference, the Druid version is 31.0.0.
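    A hedged note rather than a confirmed fix: jute.maxbuffer has to be larger than the offending packet (1126198 bytes here, so 1048575 would still be too small), and ZooKeeper enforces the same limit on the server side, so it is usually set on both the Druid JVMs and the ZooKeeper servers. A sketch with an illustrative 4 MB value:
    # Druid historical jvm.config (and other ZK clients)
    -Djute.maxbuffer=4194304

    # ZooKeeper server JVM flags -- the server enforces its own jute.maxbuffer
    # -Djute.maxbuffer=4194304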
  • Dinesh
    04/10/2025, 6:53 AM
    Hi all, we are running aggregation for one datasource with some metric columns and the dimensions below. If we aggregate keeping only the VARCHAR dimensions, the aggregation works fine, but if we add any DOUBLE/BIGINT dimension alongside the VARCHAR ones, it gives a wrong result. Does anyone have an idea what could be going wrong?
    +---------------------+------------+
    | COLUMN_NAME         | DATA_TYPE  |
    +---------------------+------------+
    | __time              | TIMESTAMP  |
    | binMax              | DOUBLE     |
    | binMin              | DOUBLE     |
    | distName            | VARCHAR    |
    | extendedDistName    | VARCHAR    |
    | index               | BIGINT     |
    | measurement         | VARCHAR    |
    +---------------------+------------+
  • Dinesh
    04/11/2025, 12:33 PM
    Hello, we are shifting to middle-manager-less Druid; the current Druid version in use is 31.0.1. To achieve this we deployed Druid with the configs below.
    • added the extension to the extension list:
    "druid-kubernetes-overlord-extensions"
    • configs in overlord
    config:
        druid_indexer_runner_namespace: <namespace>
        druid_indexer_queue_maxSize: 10
        druid_processing_intermediaryData_storage_type: deepstore
        #druid_indexer_runner_capacity: 2147483647
        druid_indexer_runner_type: k8s
        druid_indexer_task_encapsulatedTask: true
        druid_peon_mode: remote
        druid_service: druid/peon
    
        druid_indexer_runner_k8s_adapter_type: overlordSingleContainer
        druid_indexer_runner_javaOptsArray: '["-server", "-Xms1g", "-Xmx2g", "-XX:MaxDirectMemorySize=5g", "-Duser.timezone=UTC", "-Dfile.encoding=UTF-8", "-XX:+ExitOnOutOfMemoryError", "-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager"]'
        #druid_indexer_fork_property_druid_processing_buffer_sizeBytes: '104857600'
    
        druid_emitter_prometheus_port: 9090
        druid_indexer_runner.k8s_overlordUrl: "http://druid-overlord:8081"
    • Disabled the middle manager deployment in the Druid deployment.
    • Created RBAC with the below config, as mentioned in the Druid documentation:
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      namespace: sparknet-applications
      name: druid-k8s-task-scheduler
    rules:
      - apiGroups: ["batch"]
        resources: ["jobs"]
        verbs: ["get", "watch", "list", "delete", "create"]
      - apiGroups: [""]
        resources: ["pods", "pods/log"]
        verbs: ["get", "watch", "list", "delete", "create"]
    ---
    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: druid-k8s-binding
      namespace: sparknet-applications
    subjects:
      - kind: ServiceAccount
        name: druid-overlord
        namespace: sparknet-applications
    roleRef:
      kind: Role
      name: druid-k8s-task-scheduler
      apiGroup: rbac.authorization.k8s.io
    Our ingestion type is Kafka ingestion. The ingestion jobs are getting spawned on the k8s cluster, and the jobs/tasks load lookups and start the task lifecycle, but the tasks get stuck after starting and eventually fail with "errorMsg": "Peon did not report status successfully." Can someone please help with what is causing this problem?
  • Abdullah Ömer Yamaç
    04/20/2025, 11:03 PM
    Hi all, I am running Druid on a single server. While compacting the data, I got an OOM. Although I increased the JVM size of the middleManager, nothing changed. Does anyone have an idea how to fix this?
    2025-04-20T22:58:55,359 INFO [[compact_mobility_nclokegf_2025-04-20T22:56:56.871Z]-batch-appenderator-push] org.apache.druid.segment.realtime.appenderator.BatchAppenderator - Push started, processsing[1] sinks
    Terminating due to java.lang.OutOfMemoryError: Java heap space
    The size of the data being compacted is 1.2 GB.
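    One detail that often matters here, offered as a hedged sketch rather than a confirmed diagnosis: the compaction work runs in a peon JVM whose heap comes from the MiddleManager's druid.indexer.runner.javaOptsArray, not from the MiddleManager's own -Xmx, so that is usually the setting to raise (the values below are illustrative):
    # middleManager runtime.properties -- heap/direct memory for the spawned peon tasks
    druid.indexer.runner.javaOptsArray=["-server","-Xms1g","-Xmx4g","-XX:MaxDirectMemorySize=2g","-Duser.timezone=UTC","-Dfile.encoding=UTF-8"]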
  • Mahesha Subrahamanya
    04/23/2025, 12:40 AM
    Hello Team, I'd like to see if anybody has run multiple SQL statements within a transaction. Is this possible with the latest Druid version? Thanks in advance.
  • Sachit Swaroop NB
    04/23/2025, 4:35 PM
    Hi All, I've observed a consistent pattern where newly created supervisor tasks significantly outperform resumed ones, even after hard resets. Key observations:
    1. New supervisor tasks achieve high throughput (8+ MB/s) while identical resumed supervisors struggle at much lower rates (KB/s).
    2. The performance difference persists even after hard resetting and resuming existing supervisors.
    3. This creates significant challenges when working with transformation/reindexing workflows that require multiple iterations on the same supervisor configuration.
    4. There appears to be some form of internal "weightage" or prioritization that favors new tasks over resumed ones, affecting resource allocation.
    This behavior forces us to create new supervisors rather than resume existing ones whenever possible, which is problematic for our iterative transformation workflows. Has anyone else observed this behavior? Is there a documented explanation for this prioritization difference? Are there configuration settings that can equalize resource allocation between new and resumed supervisors?
  • Andrew Ho
    04/25/2025, 7:06 PM
    Hi Druid experts. We're observing some inconsistent behavior with UNNEST on nested arrays. I've created this issue, but also posting here for more visibility. We're interested in contributing here, but would appreciate some pointers. Thank you!
  • Mahesha Subrahamanya
    04/26/2025, 7:57 PM
    Hello team, we used to run 100-million-record datasets from S3 files. File ingestion runs perfectly fine with 3 parallel threads; however, we have a few SQL ingestions that bring nearly all records into Druid and do some joins, and after 2 hrs 30 mins those tasks get failed/cancelled. Once in a while a run succeeds with these datasets, but it fails 2 out of 3 times (roughly 10% success), so could anybody look at our MM config and suggest any other server-level (overlord/indexer/MM) properties to set to avoid this issue? Please let me know. Thanks. Below is the MM-on-Kubernetes config:
    middlemanager:
      replicas: 5
      minReplicas: 5
      maxReplicas: 18
      numMergeBuffers: 2
      bufferSizeBytes: 120MiB
      numThreadsProcessing: 2
      numThreadsHttp: 32
      workerCapacity: 2
      runnerJavaOpts:
        xms: 2g
        xmx: 12g
        MaxDirectMemorySize: 2g
      cpuRequest: 5000m
      memoryRequest: 30Gi
      memoryLimit: 30Gi
      ephemeralStorageLimit: "32Gi"
  • Lis Shimoni
    04/28/2025, 3:03 PM
    Hey all, I'm trying to add the Iceberg extension to my Druid, with no success. I added it to the extensions loadList and also installed the jars. I am using Druid version 32.0.1; is it supported there? Log error:
    Invalid value for the field [inputSource]. Reason: [Cannot construct instance of `org.apache.druid.iceberg.input.GlueIcebergCatalog`, problem: Cannot initialize Catalog implementation org.apache.iceberg.aws.glue.GlueCatalog: Cannot find constructor for interface org.apache.iceberg.catalog.Catalog Missing org.apache.iceberg.aws.glue.GlueCatalog [java.lang.ClassNotFoundException: org.apache.iceberg.aws.glue.GlueCatalog]
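    For what it's worth, the ClassNotFoundException points at the Glue catalog classes rather than at the extension itself, and catalog-specific Iceberg/AWS jars generally have to be added alongside druid-iceberg-extensions. A sketch of what that might look like; the artifact names and versions are assumptions to match your Iceberg and AWS SDK versions:
    # drop the Glue catalog dependencies into the extension directory, then restart
    cp iceberg-aws-bundle-1.5.0.jar $DRUID_HOME/extensions/druid-iceberg-extensions/
    cp bundle-2.25.0.jar            $DRUID_HOME/extensions/druid-iceberg-extensions/   # AWS SDK v2 bundle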
  • Abhishek Balaji Radhakrishnan
    04/28/2025, 10:08 PM
    Hi 👋! Any thoughts on the unnest planning behavior described in https://github.com/apache/druid/issues/17951? I wrote a quidem test to help repro the issue here: https://github.com/apache/druid/pull/17952. Thanks!