Is this issue resolved? <https://github.com/apach...
# general
a
m
Yes, the enhancement was merged
a
Ok, thanks. Is there an example of a multi-argument aggregation function that I can refer to?
a
That, however, takes only a single column as input.
For an aggregation function that takes multiple columns as input, the Map<ExpressionContext, BlockValSet> blockValSetMap parameter in the aggregation method may not be sufficient,
since each value of column1 could be associated with multiple values of column2.
Any idea how to handle this?
m
What's your requirement?
a
I was looking to do the following query: select bucket(ts, 1 min), avg_then_sum(ts, dimensioncolumn, value) group by bucket(ts, 1 min)
m
Theta sketch is just an example. Custom aggr functions can now take any number of args
a
Basically, within each 1 min bucket, I want to average all metric values grouped by dimension and then sum those averaged values across the dimension column.
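The avg-then-sum requirement described above can be sketched as a standalone computation (hypothetical class and method names; the two columns are modeled as plain parallel arrays where index i of both corresponds to the same docId):

```java
import java.util.HashMap;
import java.util.Map;

public class AvgThenSum {
    // Within one time bucket: average the metric per dimension value,
    // then sum those per-dimension averages.
    // dims[i] and values[i] belong to the same docId (parallel arrays).
    public static double avgThenSum(int[] dims, double[] values) {
        // dimension value -> {running sum, running count}
        Map<Integer, double[]> acc = new HashMap<>();
        for (int i = 0; i < dims.length; i++) {
            double[] sc = acc.computeIfAbsent(dims[i], k -> new double[2]);
            sc[0] += values[i];
            sc[1] += 1;
        }
        // sum of (sum / count) over all dimension values
        double result = 0;
        for (double[] sc : acc.values()) {
            result += sc[0] / sc[1];
        }
        return result;
    }
}
```

For example, dims {1, 1, 2} with values {2, 4, 10} gives avg(2, 4) + avg(10) = 3 + 10 = 13.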
m
You planning to write a custom aggr function?
a
Yes, unless there is a better alternative
I was told that subqueries are not yet supported.
m
In Map<ExpressionContext, BlockValSet> blockValSetMap, each entry in the map can be a column, or an expression on a column
a
Yes. However, each BlockValSet's getIntValuesSV and similar methods return an array of values.
So for column1 and column2 (assuming they are both single-value int columns), will getIntValuesSV return the same number of elements for each? And will they be in the same order (by docIds)?
If that’s the case it should work for me.
m
Yes same order
a
Got it - so individual array elements of both columns will correspond to the same docId. Right?
m
Yep
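Given the confirmation above that the single-value arrays are docId-aligned, a custom aggregation function can consume the two arrays in lockstep. Below is a rough sketch of the aggregate/extract split (hypothetical class and method names, loosely modeled on Pinot's AggregationFunction interface; a real implementation would pull these arrays from the two BlockValSets in blockValSetMap via getIntValuesSV()/getDoubleValuesSV(), and keep the accumulator in an AggregationResultHolder):

```java
import java.util.HashMap;
import java.util.Map;

public class AvgThenSumAccumulator {
    // per-dimension running {sum, count}; stands in for the intermediate
    // state a real function would keep in its result holder
    private final Map<Integer, double[]> acc = new HashMap<>();

    // Called once per block; length is the number of docs in the block.
    // dimValues[i] and metricValues[i] correspond to the same docId.
    public void aggregate(int length, int[] dimValues, double[] metricValues) {
        for (int i = 0; i < length; i++) {
            double[] sc = acc.computeIfAbsent(dimValues[i], k -> new double[2]);
            sc[0] += metricValues[i];
            sc[1] += 1;
        }
    }

    // Finalize: average per dimension, then sum the averages.
    public double extractFinalResult() {
        double total = 0;
        for (double[] sc : acc.values()) {
            total += sc[0] / sc[1];
        }
        return total;
    }
}
```

Because the accumulator only keeps per-dimension sums and counts, it can absorb the input block by block in any order and still produce the same final result.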
a
Thanks for confirming.
I plan to proceed along these lines - let me know if you have any alternative approaches / suggestions, etc.
Appreciate your help.
m
👍