Max ZorN
03/28/2024, 3:55 PM
select * , SUM("value") over (partition by dim) from test_table
But I'm getting an error:
Error: SQL query is unsupported
Query not supported. Possible error: SQL query requires 'SUM' operator that is not supported. SQL was: select * , SUM("value") over (partition by dim) from test_table
org.apache.calcite.plan.RelOptPlanner$CannotPlanException
I can't figure out which release introduced window functions or how to enable them (I set {"enableWindowing": true} in the context, but it didn't help). Our Druid version is 25.0.0.
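Note that enableWindowing is a query-context parameter, so it has to travel in the context of the SQL API request (or be set in the console's query context), not inside the SQL text; SQL window functions were still experimental around this era and may not plan at all on 25.0.0. A minimal sketch of passing the flag, assuming a router at localhost:8888:

curl -X POST http://localhost:8888/druid/v2/sql \
  -H 'Content-Type: application/json' \
  -d '{
        "query": "SELECT *, SUM(\"value\") OVER (PARTITION BY dim) FROM test_table",
        "context": {"enableWindowing": true}
      }'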
Aru Raghuwanshi
03/28/2024, 6:32 PM

Max ZorN
03/29/2024, 7:29 AM

Giwrgos Gkοlfopoulos
04/01/2024, 12:52 PM
org.apache.druid.query.Query
known type ids = [dataSourceMetadata, groupBy, scan, search, segmentMetadata, select, timeBoundary, timeseries, topN, windowOperator] at [Source: (org.eclipse.jetty.server.HttpInputOverHTTP); line: 1, column: 14]
com.fasterxml.jackson.databind.exc.InvalidTypeIdException
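This InvalidTypeIdException means the queryType in the posted native query JSON didn't match any registered type, so checking that field against the known type ids listed above is the first step. For reference, a minimal native query with a valid queryType (datasource and interval are placeholders):

{
  "queryType": "scan",
  "dataSource": "your_datasource",
  "intervals": ["2024-01-01/2024-12-31"],
  "limit": 10
}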
Noor
04/03/2024, 8:15 AM

Алексей Ясинский
04/03/2024, 11:11 AM

Igor Berman
04/04/2024, 8:00 AM
CREATE TABLE druid_upgradeSegments (
id VARCHAR(255) NOT NULL,
task_id VARCHAR(255) NOT NULL,
segment_id VARCHAR(255) NOT NULL,
lock_version VARCHAR(255) NOT NULL,
PRIMARY KEY (id)
)
Алексей Ясинский
04/04/2024, 4:07 PM
How can I set initialAdminPassword and initialInternalClientPassword upon the first start? Is it possible to do this only directly through the database, in the druid_config table?
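For reference, the basic-security extension reads these as authenticator properties from common.runtime.properties on first start; a sketch, where the authenticator name MyBasicMetadataAuthenticator is an assumption to be matched to your own configuration:

druid.auth.authenticator.MyBasicMetadataAuthenticator.initialAdminPassword=<admin password>
druid.auth.authenticator.MyBasicMetadataAuthenticator.initialInternalClientPassword=<internal client password>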
Алексей Ясинский
04/05/2024, 9:09 AM

Prabir Choudhury
04/08/2024, 12:40 PM
SELECT "kpi_search_string" AS "kpi_search_string"
FROM (
SELECT DISTINCT *
FROM (
VALUES ('Write string', 'Write string', 'Write string')
) AS X("kpi_search_string","formula_search_string","tables_search_string")
)
GROUP BY "kpi_search_string"
ORDER BY "kpi_search_string" ASC
Error:
Error: INVALID_INPUT (ADMIN)
Query could not be planned. A possible reason is [SQL query requires ordering a table by non-time column [[kpi_search_string]], which is not supported.]
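One workaround that sometimes gets past this planner limitation is bounding the result with a LIMIT, since Druid can order on non-time columns when the result set is finite; a sketch to try, not a guaranteed fix:

SELECT "kpi_search_string"
FROM (
  SELECT DISTINCT *
  FROM (VALUES ('Write string', 'Write string', 'Write string'))
    AS X("kpi_search_string", "formula_search_string", "tables_search_string")
)
GROUP BY "kpi_search_string"
ORDER BY "kpi_search_string" ASC
LIMIT 1000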
Venugopal Vupparaboina
04/08/2024, 3:04 PM
I'm getting "No task reports were found for this task" for MSQ in the MM-less Kubernetes setup. I see that reports are getting created in S3, yet fetching the MSQ query details fails with a 404 error. Below is the exception from the broker:
2024-04-08T14:37:16,767 INFO [qtp2088048274-132] org.apache.druid.msq.sql.resources.SqlStatementResource - Query details not found for queryId [query-cb756d66-ddf1-4398-9451-a921f93b2ac6]
org.apache.druid.rpc.HttpResponseException: Server error [404 Not Found]; body: No task reports were found for this task. The task may not exist, or it may not have completed yet.
at org.apache.druid.rpc.ServiceClientImpl$1.onSuccess(ServiceClientImpl.java:201) ~[druid-server-28.0.1.jar:28.0.1]
at org.apache.druid.rpc.ServiceClientImpl$1.onSuccess(ServiceClientImpl.java:183) ~[druid-server-28.0.1.jar:28.0.1]
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1138) ~[guava-31.1-jre.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) ~[?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ~[?:?]
at java.lang.Thread.run(Thread.java:840) ~[?:?]
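When the statements API 404s like this, fetching the report straight from the Overlord can help isolate whether the report itself is missing or only the broker-side lookup is failing; a sketch, with the Overlord host as a placeholder and its default port assumed:

curl http://<overlord-host>:8081/druid/indexer/v1/task/query-cb756d66-ddf1-4398-9451-a921f93b2ac6/reports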
Youngsol Koh
04/08/2024, 5:04 PM
2024-04-08T16:29:20,970 INFO [ServiceClientFactory-2] org.apache.druid.rpc.ServiceClientImpl - Service [OVERLORD] request [POST http://10.11.185.100:8081/druid/indexer/v1/action] encountered server error [500 Internal Server Error] on attempt #5; retrying in 60,000 ms; first 1KB of body: {"error":"java.lang.RuntimeException: org.skife.jdbi.v2.exceptions.CallbackFailedException: org.skife.jdbi.v2.exceptions.CallbackFailedException: org.skife.jdbi.v2.exceptions.UnableToExecuteStatementException: java.sql.SQLException: Streaming result set com.mysql.jdbc.RowDataDynamic@474a954 is still active. No statements may be issued when any streaming result sets are open and in use on a given connection. Ensure that you have called .close() on any active streaming result sets before attempting more queries. [statement:\"SELECT payload FROM druid_segments WHERE used = :used AND dataSource = :dataSource AND ((start < :end0 AND `end` > :start0) OR (start = '-146136543-09-08T08:23:32.096Z' AND \"end\" != '146140482-04-24T15:36:27.903Z' AND \"end\" > :start0) OR (start != '-146136543-09-08T08:23:32.096Z' AND \"end\" = '146140482-04-24T15:36:27.903Z' AND start < :end0) OR (start < :end1 AND `end` > :start1) OR (start = '-146136543-09-08T08:23:32.096Z' AND \"end\" != '146140482-04-24T15:36
... (omitted)
I see 38 values each of start and end. When Druid queries segments in the datasource, it creates a lot of conditions in the WHERE clause, which causes the above problem. I could get it to work when I set lateMessageRejectionPeriod to a certain value. I would like to know what else I can do, since lateMessageRejectionPeriod will discard data.

Venugopal Vupparaboina
04/08/2024, 6:47 PM
2024-04-08T18:39:40,260 ERROR [org.apache.druid.k8s.discovery.K8sDruidNodeDiscoveryProvider$NodeRoleWatcheroverlord] org.apache.druid.k8s.discovery.K8sDruidNodeDiscoveryProvider$NodeRoleWatcher - Expection while watching for NodeRole [OVERLORD].
java.lang.RuntimeException: java.lang.InterruptedException
at org.apache.druid.concurrent.LifecycleLock$Sync.awaitStarted(LifecycleLock.java:144) ~[druid-processing-28.0.1.jar:28.0.1]
at org.apache.druid.concurrent.LifecycleLock.awaitStarted(LifecycleLock.java:245) ~[druid-processing-28.0.1.jar:28.0.1]
at org.apache.druid.k8s.discovery.K8sDruidNodeDiscoveryProvider$NodeRoleWatcher.keepWatching(K8sDruidNodeDiscoveryProvider.java:257) ~[?:?]
at org.apache.druid.k8s.discovery.K8sDruidNodeDiscoveryProvider$NodeRoleWatcher.watch(K8sDruidNodeDiscoveryProvider.java:237) ~[?:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) ~[?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ~[?:?]
at java.lang.Thread.run(Thread.java:840) ~[?:?]
Caused by: java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1081) ~[?:?]
at org.apache.druid.concurrent.LifecycleLock$Sync.awaitStarted(LifecycleLock.java:139) ~[druid-processing-28.0.1.jar:28.0.1]
Noor
04/10/2024, 4:53 AM
org.apache.druid.java.util.metrics.AllocationMetricCollectors - Cannot initialize org.apache.druid.java.util.metrics.AllocationMetricCollector
java.lang.reflect.InaccessibleObjectException: Unable to make public long[] com.sun.management.internal.HotSpotThreadImpl.getThreadAllocatedBytes(long[]) accessible: module jdk.management does not "exports com.sun.management.internal" to unnamed module @1643d68f
at java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:340) ~[?:?]
at java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:280) ~[?:?]
at java.lang.reflect.Method.checkCanSetAccessible(Method.java:198) ~[?:?]
at java.lang.reflect.Method.setAccessible(Method.java:192) ~[?:?]
at org.apache.druid.java.util.metrics.AllocationMetricCollectors.<clinit>(AllocationMetricCollectors.java:41) ~[druid-core-25.0.0.jar:25.0.0]
at org.apache.druid.java.util.metrics.JvmMonitor.<init>(JvmMonitor.java:67) ~[druid-core-25.0.0.jar:25.0.0]
at org.apache.druid.java.util.metrics.JvmMonitor.<init>(JvmMonitor.java:59) ~[druid-core-25.0.0.jar:25.0.0]
at org.apache.druid.server.metrics.MetricsModule.getJvmMonitor(MetricsModule.java:154) ~[druid-server-25.0.0.jar:25.0.0]
at org.apache.druid.server.metrics.MetricsModule$$FastClassByGuice$$99ddce1b.invoke(<generated>) ~[druid-server-25.0.0.jar:25.0.0]
at com.google.inject.internal.ProviderMethod$FastClassProviderMethod.doProvision(ProviderMethod.java:264) ~[guice-4.1.0.jar:?]
at com.google.inject.internal.ProviderMethod$Factory.provision(ProviderMethod.java:401) ~[guice-4.1.0.jar:?]
at com.google.inject.internal.ProviderMethod$Factory.get(ProviderMethod.java:376) ~[guice-4.1.0.jar:?]
at com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46) ~[guice-4.1.0.jar:?]
at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1092) ~[guice-4.1.0.jar:?]
at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40) ~[guice-4.1.0.jar:?]
at org.apache.druid.guice.LifecycleScope$1.get(LifecycleScope.java:68) ~[druid-core-25.0.0.jar:25.0.0]
at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:41) ~[guice-4.1.0.jar:?]
at com.google.inject.internal.InjectorImpl$2$1.call(InjectorImpl.java:1019) ~[guice-4.1.0.jar:?]
at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1092) ~[guice-4.1.0.jar:?]
at com.google.inject.internal.InjectorImpl$2.get(InjectorImpl.java:1015) ~[guice-4.1.0.jar:?]
at com.google.inject.internal.InjectorImpl.getInstance(InjectorImpl.java:1054) ~[guice-4.1.0.jar:?]
at org.apache.druid.server.metrics.MetricsModule.getMonitorScheduler(MetricsModule.java:113) ~[druid-server-25.0.0.jar:25.0.0]
Can someone help here, please?
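This InaccessibleObjectException is the JDK strong-encapsulation issue; the usual remedy, matching the JVM arguments Druid's docs recommend for newer Java versions (worth double-checking against the docs for your release), is to export the module in jvm.config:

--add-exports=jdk.management/com.sun.management.internal=ALL-UNNAMED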
Rahul Sharma
04/10/2024, 5:52 AM

Uday Singh Matta
04/11/2024, 4:35 AM
{
  "queryType": "groupBy",
  "dataSource": "EnergyMeterST501",
  "intervals": [
    "2023-12-23T00:00+03:00/2024-03-23T23:59:59+03:00"
  ],
  "granularity": {
    "type": "period",
    "period": "P1D",
    "timeZone": "Asia/Riyadh"
  },
  "dimensions": [
    "deviceID"
  ],
  "metric": "count",
  "aggregations": [
    {
      "type": "count",
      "name": "count",
      "fieldName": "deviceID"
    }
  ],
  "having": {
    "type": "filter",
    "filter": {
      "type": "in",
      "dimension": "deviceID",
      "values": [
        "2020070100"
      ]
    }
  }
}
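Two things stand out in this spec: "metric" is a topN parameter that groupBy ignores, and the native "count" aggregator counts rows regardless of "fieldName"; the having filter on a grouping dimension can also usually be pushed down into a plain query filter. Assuming the intent is a daily row count for that device, a leaner sketch of the same query:

{
  "queryType": "groupBy",
  "dataSource": "EnergyMeterST501",
  "intervals": ["2023-12-23T00:00+03:00/2024-03-23T23:59:59+03:00"],
  "granularity": {"type": "period", "period": "P1D", "timeZone": "Asia/Riyadh"},
  "dimensions": ["deviceID"],
  "filter": {"type": "in", "dimension": "deviceID", "values": ["2020070100"]},
  "aggregations": [{"type": "count", "name": "count"}]
}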
Kaustubh
04/11/2024, 12:49 PM

Jvalant Patel
04/11/2024, 6:50 PM
2024-04-10T22:09:12,058 ERROR [task-runner-0-priority-0] org.apache.druid.indexing.overlord.SingleTaskBackgroundRunner - Uncaught Throwable while running task[AbstractTask{id='partial_index_generic_merge_<datasource>_bnopakcn_2024-04-10T22:08:49.696Z', groupId='coordinator-issued_compact_<datasource>_dhfpacdl_2024-04-10T21:51:25.920Z', taskResource=TaskResource{availabilityGroup='partial_index_generic_merge_<datasource>_bnopakcn_2024-04-10T22:08:49.696Z', requiredCapacity=1}, dataSource='pnr_page_request', context={useLineageBasedSegmentAllocation=true, storeCompactionState=true, appenderatorTrackingTaskId=coordinator-issued_compact_<datasource>_dhfpacdl_2024-04-10T21:51:25.920Z, priority=25, forceTimeChunkLock=true}}]
java.lang.OutOfMemoryError: Cannot reserve 65536 bytes of direct buffer memory (allocated: 5368678031, limit: 5368709120)
at java.nio.Bits.reserveMemory(Bits.java:178) ~[?:?]
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:121) ~[?:?]
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:332) ~[?:?]
at org.apache.druid.segment.CompressedPools$4.get(CompressedPools.java:102) ~[druid-processing-29.0.0.jar:29.0.0]
at org.apache.druid.segment.CompressedPools$4.get(CompressedPools.java:95) ~[druid-processing-29.0.0.jar:29.0.0]
at org.apache.druid.collections.StupidPool.makeObjectWithHandler(StupidPool.java:184) ~[druid-processing-29.0.0.jar:29.0.0]
at org.apache.druid.collections.StupidPool.take(StupidPool.java:156) ~[druid-processing-29.0.0.jar:29.0.0]
We have configured peon tasks to run with 8 GB heap and 6 GB direct memory. Tasks fail within two minutes of initialization. Does anyone know how to solve this? Any pointers will be super helpful. TIA.
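For reference, the documented sizing rule is that processing needs roughly druid.processing.buffer.sizeBytes * (druid.processing.numMergeBuffers + druid.processing.numThreads + 1) of direct memory per JVM, so one lever is shrinking those values for peons via fork properties on the MiddleManager; a sketch with assumed values to be tuned for your hardware:

druid.indexer.fork.property.druid.processing.buffer.sizeBytes=100MiB
druid.indexer.fork.property.druid.processing.numThreads=2
druid.indexer.fork.property.druid.processing.numMergeBuffers=2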
Sharmin Choksey
04/11/2024, 7:23 PM

Aditya Randive
04/11/2024, 11:44 PM
I'm running this from the apache-druid-29.0.1 directory:
curl -X 'POST' -H 'Content-Type:application/json' -d @quickstart/tutorial/wikipedia-index.json http://localhost:8081/druid/indexer/v1/task
After the task fails, the console shows this error detail:
failed to run task with an exception. See middleManager or indexer logs for more details.
Can someone help me figure out what may be going wrong?
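The console hint points at the task logs, which can also be pulled over HTTP from the Overlord; a sketch, with the failed task's id as a placeholder:

curl http://localhost:8081/druid/indexer/v1/task/<taskId>/log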
Bharat
04/12/2024, 8:53 AM

Bharat
04/12/2024, 8:54 AM

Amit Jain
04/12/2024, 10:30 AM

Pio Salvatore MORRONE (KEBULA)
04/12/2024, 1:29 PM

lukas
04/15/2024, 5:51 PM
{
  "type": "kill",
  "id": "api-issued_kill_<datasource>_oicdplgc_1000-01-01T00:00:00.000Z_3000-01-01T00:00:00.000Z_2024-04-15T17:39:07.818Z",
  "dataSource": "<datasource>",
  "interval": "1000-01-01T00:00:00.000Z/3000-01-01T00:00:00.000Z",
  "context": {
    "forceTimeChunkLock": true,
    "useLineageBasedSegmentAllocation": true
  },
  "batchSize": 100,
  "limit": null,
  "groupId": "api-issued_kill_<datasource>_oicdplgc_1000-01-01T00:00:00.000Z_3000-01-01T00:00:00.000Z_2024-04-15T17:39:07.818Z",
  "resource": {
    "availabilityGroup": "api-issued_kill_<datasource>_oicdplgc_1000-01-01T00:00:00.000Z_3000-01-01T00:00:00.000Z_2024-04-15T17:39:07.818Z",
    "requiredCapacity": 1
  }
}
The task runs successfully, but no unused segments have been deleted (neither in Postgres nor in S3, in this case).
I can query them like this in Postgres:
select *
from druid_segments
where datasource = '<datasource>' and used = false
limit 10
and I still get results. Do you know what might be the problem?
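One thing worth checking on recent Druid releases (an assumption to verify against your version): kill tasks skip segments whose used_status_last_updated falls within druid.coordinator.kill.bufferPeriod (default P30D), so segments marked unused recently survive a kill. The timestamp is visible in the metadata store:

select id, used, used_status_last_updated
from druid_segments
where datasource = '<datasource>' and used = false
limit 10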
Jvalant Patel
04/15/2024, 8:03 PM

Jiaojiao Fu
04/16/2024, 7:54 AM
2024-04-15T16:19:20,946 ERROR [task-runner-0-priority-0] org.apache.druid.indexing.overlord.SingleTaskBackgroundRunner - Exception while running task[AbstractTask{id='index_parallel_custom_lists_bhnobale_2024-04-15T16:17:13.511Z', groupId='index_parallel_custom_lists_bhnobale_2024-04-15T16:17:13.511Z', taskResource=TaskResource{availabilityGroup='index_parallel_custom_lists_bhnobale_2024-04-15T16:17:13.511Z', requiredCapacity=1}, dataSource='custom_lists', context={forceTimeChunkLock=true, useLineageBasedSegmentAllocation=true}}]
java.lang.RuntimeException: org.apache.druid.rpc.HttpResponseException: Server error [400 Bad Request]; body: {"error":"Task[single_phase_sub_task_custom_lists_hdihfjom_2024-04-15T16:17:19.655Z] already exists!"}
at org.apache.druid.common.guava.FutureUtils.getUnchecked(FutureUtils.java:82) ~[druid-core-25.0.0.jar:25.0.0]
at org.apache.druid.indexing.common.task.batch.parallel.TaskMonitor.submitTask(TaskMonitor.java:320) ~[druid-indexing-service-25.0.0.jar:25.0.0]
at org.apache.druid.indexing.common.task.batch.parallel.TaskMonitor.submit(TaskMonitor.java:254) ~[druid-indexing-service-25.0.0.jar:25.0.0]
at org.apache.druid.indexing.common.task.batch.parallel.ParallelIndexPhaseRunner.submitNewTask(ParallelIndexPhaseRunner.java:246) ~[druid-indexing-service-25.0.0.jar:25.0.0]
at org.apache.druid.indexing.common.task.batch.parallel.ParallelIndexPhaseRunner.run(ParallelIndexPhaseRunner.java:135) ~[druid-indexing-service-25.0.0.jar:25.0.0]
at org.apache.druid.indexing.common.task.batch.parallel.ParallelIndexSupervisorTask.runNextPhase(ParallelIndexSupervisorTask.java:314) ~[druid-indexing-service-25.0.0.jar:25.0.0]
at org.apache.druid.indexing.common.task.batch.parallel.ParallelIndexSupervisorTask.runSinglePhaseParallel(ParallelIndexSupervisorTask.java:612) ~[druid-indexing-service-25.0.0.jar:25.0.0]
at org.apache.druid.indexing.common.task.batch.parallel.ParallelIndexSupervisorTask.runTask(ParallelIndexSupervisorTask.java:513) ~[druid-indexing-service-25.0.0.jar:25.0.0]
at org.apache.druid.indexing.common.task.AbstractTask.run(AbstractTask.java:169) ~[druid-indexing-service-25.0.0.jar:25.0.0]
at org.apache.druid.indexing.overlord.SingleTaskBackgroundRunner$SingleTaskBackgroundRunnerCallable.call(SingleTaskBackgroundRunner.java:477) ~[druid-indexing-service-25.0.0.jar:25.0.0]
at org.apache.druid.indexing.overlord.SingleTaskBackgroundRunner$SingleTaskBackgroundRunnerCallable.call(SingleTaskBackgroundRunner.java:449) ~[druid-indexing-service-25.0.0.jar:25.0.0]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_221]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_221]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_221]
at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_221]
Caused by: org.apache.druid.rpc.HttpResponseException: Server error [400 Bad Request]; body: {"error":"Task[single_phase_sub_task_custom_lists_hdihfjom_2024-04-15T16:17:19.655Z] already exists!"}
at org.apache.druid.rpc.ServiceClientImpl$1.onSuccess(ServiceClientImpl.java:201) ~[druid-server-25.0.0.jar:25.0.0]
at org.apache.druid.rpc.ServiceClientImpl$1.onSuccess(ServiceClientImpl.java:183) ~[druid-server-25.0.0.jar:25.0.0]
at com.google.common.util.concurrent.Futures$4.run(Futures.java:1181) ~[guava-16.0.1.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_221]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_221]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) ~[?:1.8.0_221]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) ~[?:1.8.0_221]
... 3 more
Vadim Sohin
04/16/2024, 11:30 AM

Siva
04/17/2024, 3:06 PM

Amit Gold
04/18/2024, 7:15 AM
I had an existing datasource red-analytics and it worked fine. To complete some investigation, I created a new source using the "Load Data -> Paste Data" option. After using it for a join on the :8888 port "Query" tab, I went to delete it in the Datasources tab using "mark as unused all segments -> delete unused segments", and it is no longer in that tab. However, I still see it in the data cubes list on the :80 port… And even worse, now the red-analytics cube keeps renaming itself to red-analytics1 and back every couple of minutes??? How do I revert all this 🫠