Adrian Kurtasiński
06/28/2023, 2:53 PM
Could not get capacity info
Could not get detail info
MiddleManager contains druid.worker.capacity=
Where should I start debugging?
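The trailing "=" with no value stands out. A minimal sketch of the relevant MiddleManager runtime.properties line, assuming the capacity was meant to be set explicitly (the value 4 is illustrative; when the property is left unset, Druid defaults it to one less than the number of available processors):

```properties
# middleManager/runtime.properties — sketch, value is illustrative.
# An empty value (druid.worker.capacity=) may leave the worker advertising
# no task slots, which would be consistent with the missing capacity info.
druid.worker.capacity=4
```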
Shankar Mane
07/03/2023, 10:39 AM
2023-07-02T18:30:21,785 WARN [ServerInventoryView-0] org.apache.druid.curator.inventory.CuratorInventoryManager - Exception while getting data for node /druid/segments/ip-10-xxx-xxx-xxx.ap-south-1.compute.internal:8100/ip-10-xxx-xxx-xxx.ap-south-1.compute.internal:8100_indexer-executor__default_tier_2023-07-02T17:29:22.522Z_08cee3bb400
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /druid/segments/ip-10-xxx-xxx-xxx.ap-south-1.compute.internal:8100/ip-10-xxx-xxx-xxx.ap-south-1.compute.internal:8100_indexer-executor__default_tier_2023-07-02T17:29:22.522Z_08cee3bb400
at org.apache.zookeeper.KeeperException.create(KeeperException.java:118) ~[zookeeper-3.5.9.jar:3.5.9]
at org.apache.zookeeper.KeeperException.create(KeeperException.java:54) ~[zookeeper-3.5.9.jar:3.5.9]
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:2131) ~[zookeeper-3.5.9.jar:3.5.9]
at org.apache.curator.framework.imps.GetDataBuilderImpl$4.call(GetDataBuilderImpl.java:327) ~[curator-framework-4.3.0.jar:4.3.0]
at org.apache.curator.framework.imps.GetDataBuilderImpl$4.call(GetDataBuilderImpl.java:316) ~[curator-framework-4.3.0.jar:4.3.0]
at org.apache.curator.connection.StandardConnectionHandlingPolicy.callWithRetry(StandardConnectionHandlingPolicy.java:67) ~[curator-client-4.3.0.jar:?]
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:81) ~[curator-client-4.3.0.jar:?]
at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:313) ~[curator-framework-4.3.0.jar:4.3.0]
at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:304) ~[curator-framework-4.3.0.jar:4.3.0]
at org.apache.curator.framework.imps.GetDataBuilderImpl$1.forPath(GetDataBuilderImpl.java:107) ~[curator-framework-4.3.0.jar:4.3.0]
at org.apache.curator.framework.imps.GetDataBuilderImpl$1.forPath(GetDataBuilderImpl.java:67) ~[curator-framework-4.3.0.jar:4.3.0]
at org.apache.druid.curator.inventory.CuratorInventoryManager.getZkDataForNode(CuratorInventoryManager.java:188) [druid-server-0.22.1.jar:0.22.1]
at org.apache.druid.curator.inventory.CuratorInventoryManager.access$400(CuratorInventoryManager.java:59) [druid-server-0.22.1.jar:0.22.1]
at org.apache.druid.curator.inventory.CuratorInventoryManager$ContainerCacheListener$InventoryCacheListener.childEvent(CuratorInventoryManager.java:412) [druid-server-0.22.1.jar:0.22.1]
jakubmatyszewski
07/03/2023, 11:27 AM
Resource limit exceeded error:
DruidResourceError: Query failed <Response [400]> Bad Request b'{"error":"Resource limit exceeded","errorMessage":"Not enough dictionary memory to execute this query. Try enabling disk spilling by setting druid.query.groupBy.maxOnDiskStorage to an amount of bytes available on your machine for on-disk scratch files. See <https://druid.apache.org/docs/latest/querying/groupbyquery.html#memory-tuning-and-resource-limits> for details.","errorClass":"org.apache.druid.query.ResourceLimitExceededException","host":"druid-historical-hot-ondemand-0.druid-historical-hot-ondemand.druid.svc.cluster.local:8083"}'.
This happens even with druid.query.groupBy.maxOnDiskStorage set to 1024M. I'm experimenting with increasing this value right now, but I wonder whether adding ephemeral-storage for the broker pods might help here. What do you think? And if so, is there some additional setup required for the ephemeral-storage to be properly utilized by the broker?
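Two details in the error text are worth noting: the errorMessage asks for "an amount of bytes", and the host field names a historical pod (druid-historical-hot-ondemand-0...), so setting the property only on the broker would likely not be enough. A sketch of the runtime.properties line, with the size written as a plain byte count and the 1 GiB value purely illustrative:

```properties
# historical (and broker) runtime.properties — sketch, size is illustrative.
# The error asks for "an amount of bytes"; whether suffixed values like 1024M
# parse may depend on the Druid version, so the byte count is spelled out.
druid.query.groupBy.maxOnDiskStorage=1073741824
```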
Sachidananda
07/04/2023, 9:16 AM
[_default_tier]: Replicant create queue stuck after 2+ runs!: {class=org.apache.druid.server.coordinator.ReplicationThrottler, segments=[segment_id ON historical-ip:port]}
Any idea when this error occurs?
Thanks in advance
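That warning comes from the coordinator's replication throttle, which caps how many segment replicas may be in flight per run; the message appears when a queued replica has not completed after several coordinator runs. The throttle is governed by the coordinator dynamic config, a sketch of which (illustrative value, submitted via the coordinator's /druid/coordinator/v1/config endpoint) looks like:

```json
{
  "replicationThrottleLimit": 10
}
```

Checking historical load queues and free disk on the named historical is usually the first step, since a stuck replica often means the target server cannot accept or finish the load.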