Soman Ullah
06/29/2023, 8:32 PM
WHERE test IN (select test from static_table)
Siddharth Gautam
06/29/2023, 9:03 PM
Caused by: java.lang.OutOfMemoryError: Direct buffer memory
at java.nio.Bits.reserveMemory(Bits.java:175) ~[?:?]
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:118) ~[?:?]
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:317) ~[?:?]
at org.apache.druid.segment.data.CompressionStrategy$LZ4Compressor.allocateInBuffer(CompressionStrategy.java:338) ~[druid-processing-26.0.0.jar:26.0.0]
at org.apache.druid.segment.data.BlockLayoutColumnarDoublesSerializer.<init>(BlockLayoutColumnarDoublesSerializer.java:72) ~[druid-processing-26.0.0.jar:26.0.0]
at org.apache.druid.segment.data.CompressionFactory.getDoubleSerializer(CompressionFactory.java:426) ~[druid-processing-26.0.0.jar:26.0.0]
at org.apache.druid.segment.DoubleColumnSerializer.open(DoubleColumnSerializer.java:70) ~[druid-processing-26.0.0.jar:26.0.0]
at org.apache.druid.segment.NumericDimensionMergerV9.<init>(NumericDimensionMergerV9.java:52) ~[druid-processing-26.0.0.jar:26.0.0]
at org.apache.druid.segment.DoubleDimensionMergerV9.<init>(DoubleDimensionMergerV9.java:32) ~[druid-processing-26.0.0.jar:26.0.0]
at org.apache.druid.segment.DoubleDimensionHandler.makeMerger(DoubleDimensionHandler.java:87) ~[druid-processing-26.0.0.jar:26.0.0]
at org.apache.druid.segment.IndexMergerV9.makeIndexFiles(IndexMergerV9.java:203) ~[druid-processing-26.0.0.jar:26.0.0]
at org.apache.druid.segment.IndexMergerV9.merge(IndexMergerV9.java:1155) ~[druid-processing-26.0.0.jar:26.0.0]
at org.apache.druid.segment.IndexMergerV9.multiphaseMerge(IndexMergerV9.java:972) ~[druid-processing-26.0.0.jar:26.0.0]
at org.apache.druid.segment.IndexMergerV9.persist(IndexMergerV9.java:876) ~[druid-processing-26.0.0.jar:26.0.0]
at org.apache.druid.segment.IndexMerger.persist(IndexMerger.java:225) ~[druid-processing-26.0.0.jar:26.0.0]
at org.apache.druid.segment.realtime.appenderator.AppenderatorImpl.persistHydrant(AppenderatorImpl.java:1525) ~[druid-server-26.0.0.jar:26.0.0]
at org.apache.druid.segment.realtime.appenderator.AppenderatorImpl.access$100(AppenderatorImpl.java:114) ~[druid-server-26.0.0.jar:26.0.0]
at org.apache.druid.segment.realtime.appenderator.AppenderatorImpl$2.call(AppenderatorImpl.java:653) ~[druid-server-26.0.0.jar:26.0.0]
The weird thing is, it still occurs when my CSV is only 10 rows. What does the "java.lang.OutOfMemoryError: Direct buffer memory" error mean?
Jiaojiao Fu
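For context on the question above: "Direct buffer memory" means the JVM ran out of off-heap memory reserved for direct ByteBuffers (capped by -XX:MaxDirectMemorySize), not heap. Druid allocates its processing and compression buffers up front at a fixed size, independent of input size, which is why a 10-row CSV can still trigger it. A minimal sizing sketch using the rule of thumb from the Druid configuration docs; the concrete values below are illustrative assumptions, not from this thread:

```java
// Rough sizing check for Druid's direct (off-heap) memory requirement.
// Rule of thumb from the Druid configuration docs:
//   directMemory >= buffer.sizeBytes * (numMergeBuffers + numThreads + 1)
public class DirectMemorySizing {
    static long requiredDirectMemory(long bufferSizeBytes, int numMergeBuffers, int numThreads) {
        return bufferSizeBytes * (numMergeBuffers + numThreads + 1L);
    }

    public static void main(String[] args) {
        long bufferSize = 500L * 1024 * 1024; // druid.processing.buffer.sizeBytes (assumed 500 MiB)
        int mergeBuffers = 2;                 // druid.processing.numMergeBuffers (assumed)
        int threads = 7;                      // druid.processing.numThreads (assumed)
        long required = requiredDirectMemory(bufferSize, mergeBuffers, threads);
        // This allocation happens up front, regardless of how small the input is.
        System.out.println("-XX:MaxDirectMemorySize must be at least " + required + " bytes");
    }
}
```

If the computed minimum exceeds -XX:MaxDirectMemorySize on the task JVM, the allocation in CompressionStrategy fails exactly as in the stack trace above.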
06/30/2023, 3:16 AM
With druid.coordinator.balancer.strategy=random, balancing or decommissioning doesn't trigger any movement of segments. Looking at the code, it never returns a destination for the proposed segment. Can someone explain the reasoning behind this design? Thanks!
@Override
public ServerHolder findNewSegmentHomeBalancer(DataSegment proposalSegment, List<ServerHolder> serverHolders)
{
  // Never proposes a destination, so callers see "no home found" and skip the move.
  return null;
}
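A note on the snippet above: callers treat a null return from findNewSegmentHomeBalancer as "no destination found", so this strategy effectively opts out of balancer-driven moves. If one did want it to propose a random destination, the logic might look like the following sketch, which uses simplified stand-in types rather than Druid's real ServerHolder/DataSegment API:

```java
import java.util.List;
import java.util.Random;
import java.util.stream.Collectors;

// Hypothetical sketch of a random destination pick for segment balancing.
// Segment and Server are simplified stand-ins, not Druid's actual classes.
public class RandomBalancerSketch {
    record Segment(String id, long sizeBytes) {}
    record Server(String name, long freeBytes, List<String> segmentIds) {}

    static Server findNewSegmentHome(Segment segment, List<Server> servers, Random rng) {
        // Only consider servers with enough free space that don't already serve the segment.
        List<Server> eligible = servers.stream()
            .filter(s -> s.freeBytes() >= segment.sizeBytes())
            .filter(s -> !s.segmentIds().contains(segment.id()))
            .collect(Collectors.toList());
        if (eligible.isEmpty()) {
            return null; // mirrors the "no destination" contract of the quoted method
        }
        return eligible.get(rng.nextInt(eligible.size()));
    }
}
```

Because the quoted implementation skips even this filtering step and always returns null, no balancing proposal is ever produced under the random strategy.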
jp
07/06/2023, 2:33 AM
druid.router.tierToBrokerMap={"hot":"druid:broker-hot","_default_tier":"druid:broker-cold"}
Here, the hot tier is routed to broker-hot. But how does the router decide the tier?
I'm guessing the router uses the retention rules to decide the tier: if the query's time interval falls in the hot tier's range it sends the query to broker-hot, and if it falls in the _default_tier's range it sends it to broker-cold.
If that's wrong, can you explain how the router decides the tier?
Struct.AI Knowledge Base
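That guess is close to the documented behavior as I understand it: the router matches the query's interval against the datasource's rules, takes the tier from the matching rule, and looks that tier up in tierToBrokerMap, falling back to the _default_tier entry. A minimal sketch of just the lookup step; the rule-matching that produces the tier string is assumed, not shown:

```java
import java.util.Map;

// Minimal sketch of the tier -> broker lookup implied by
// druid.router.tierToBrokerMap, with a fallback to the default tier's broker.
public class TierToBrokerLookup {
    static final String DEFAULT_TIER = "_default_tier";

    static String brokerFor(String tier, Map<String, String> tierToBrokerMap) {
        // Use the tier's broker if mapped; otherwise fall back to the default tier's broker.
        return tierToBrokerMap.getOrDefault(tier, tierToBrokerMap.get(DEFAULT_TIER));
    }

    public static void main(String[] args) {
        Map<String, String> map = Map.of(
            "hot", "druid:broker-hot",
            DEFAULT_TIER, "druid:broker-cold");
        System.out.println(brokerFor("hot", map));     // druid:broker-hot
        System.out.println(brokerFor("unknown", map)); // druid:broker-cold
    }
}
```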
07/06/2023, 10:44 AM
lnault
07/06/2023, 1:29 PM
table.
In that table I have a column myKey, so it looks like that:
__time  myKey
date1   key1
date2   key1
date3   key2
date4   key1
...
I need to get keys in a specific quantile (for instance, keys in the quartile from 25% to 50%).
I have to write the query in SQL; no native Druid query is possible here.
I can easily get the count for each key, that's not a problem:
SELECT myKey, COUNT(*) AS keyCount
FROM table
GROUP BY myKey
Result:
myKey keyCount
key1 5
key2 3
key3 10
key4 2
...
I can also get the percentage for each key:
select myKey, keyCount, total, CAST(keyCount AS DOUBLE) / total as "percentage"
from (
  SELECT myKey, count(*) as keyCount, (select COUNT(*) FROM table) as total
  FROM table
  group by myKey
)
Result:
myKey keyCount percentage
key1 5 0.25
key2 3 0.15
key3 10 0.5
key4 2 0.1
...
but I am failing at cumulating the counts or percentages to easily get the keys which are in quantile (n, m).
In "classic" SQL we could use window functions, but I didn't manage to do it in Druid.
Similarly, in "classic" SQL we could use a JOIN with ON t2.keyCount <= t1.keyCount, but in Druid SQL the join condition has to be an equality (ON ... = ...).
I'm a little bit stuck here, so any help would be appreciated.
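For what it's worth, the missing cumulative step can be done client-side after the GROUP BY: sort the keys by count, accumulate each key's share of the total, and keep the keys whose cumulative share lands in the requested band. A sketch of that logic using the example counts above; the ascending sort order and the half-open (lo, hi] band are one possible reading of "quartile from 25% to 50%":

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Client-side cumulative-share step: sort keys by count, accumulate each
// key's fraction of the total, and keep keys whose cumulative share falls
// in the (lo, hi] band.
public class QuantileBand {
    static List<String> keysInBand(Map<String, Long> counts, double lo, double hi) {
        long total = counts.values().stream().mapToLong(Long::longValue).sum();
        List<Map.Entry<String, Long>> sorted = new ArrayList<>(counts.entrySet());
        sorted.sort(Comparator.comparingLong(e -> e.getValue())); // ascending by count
        List<String> result = new ArrayList<>();
        double cumulative = 0.0;
        for (Map.Entry<String, Long> e : sorted) {
            cumulative += (double) e.getValue() / total;
            if (cumulative > lo && cumulative <= hi + 1e-9) {
                result.add(e.getKey());
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // Example counts from the thread: total = 20.
        Map<String, Long> counts = new LinkedHashMap<>();
        counts.put("key1", 5L);
        counts.put("key2", 3L);
        counts.put("key3", 10L);
        counts.put("key4", 2L);
        System.out.println(keysInBand(counts, 0.25, 0.50)); // [key1]
    }
}
```

The SQL result of the grouped query is small (one row per key), so pulling it out and post-processing it avoids both the window-function and the non-equi-join limitations.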
(sorry for the long post, hope my problem is clear)
Saurabh Pande
07/06/2023, 3:26 PM
k factor, which is 128 by default, but even with 128 the theoretical error is 1.7%. Are there any other factors that affect the accuracy of sketches in Druid? What are some other ways I can try to resolve this issue?
Ahmad Qasem
07/10/2023, 12:54 PM
The native query response contains "version", "timestamp", and "event" (which holds the result set, at least when sending a groupBy query; not sure if this is the case for everyone), but when sending a request to the SQL endpoint, it returns the result-set objects directly.
Is there a way to structure the response coming from Druid so that it has the same structure as the native query?
Tomasz Melcer
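As far as I know there is no server-side switch to make the SQL endpoint emit the native groupBy envelope, so a small client-side wrapper is one option. A hypothetical sketch that reshapes flat SQL rows into {version, timestamp, event} objects; the "__time" column name and "v1" version string are assumptions for illustration:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical client-side wrapper: reshape flat SQL result rows into the
// {version, timestamp, event} envelope of a native groupBy response.
// The "__time" column name and "v1" version value are assumptions.
public class NativeEnvelopeWrapper {
    static List<Map<String, Object>> wrap(List<Map<String, Object>> sqlRows) {
        List<Map<String, Object>> out = new ArrayList<>();
        for (Map<String, Object> row : sqlRows) {
            Map<String, Object> event = new LinkedHashMap<>(row);
            Object time = event.remove("__time"); // pull the timestamp out of the event body
            Map<String, Object> envelope = new LinkedHashMap<>();
            envelope.put("version", "v1");
            envelope.put("timestamp", time);
            envelope.put("event", event);
            out.add(envelope);
        }
        return out;
    }
}
```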