Kumar Basapuram
11/28/2023, 6:18 AM
==> /var/log/druid/router.log <==
2023-11-28T06:13:35,037 DEBUG [Thread-12] org.apache.ranger.plugin.util.PolicyRefresher - ==> PolicyRefresher(serviceName=druiddev).loadPolicy()
2023-11-28T06:13:35,037 DEBUG [Thread-12] org.apache.ranger.perf.policyengine.init - In-Use memory: 105841104, Free memory:938016304
2023-11-28T06:13:35,037 DEBUG [Thread-12] org.apache.ranger.plugin.util.PolicyRefresher - ==> PolicyRefresher(serviceName=druiddev).loadPolicyfromPolicyAdmin()
2023-11-28T06:13:35,037 DEBUG [Thread-12] org.apache.ranger.admin.client.RangerAdminRESTClient - ==> RangerAdminRESTClient.getServicePoliciesIfUpdated(-1, 0)
2023-11-28T06:13:35,037 DEBUG [Thread-12] org.apache.ranger.admin.client.RangerAdminRESTClient - Checking Service policy if updated as user : druid (auth:SIMPLE)
2023-11-28T06:13:35,037 DEBUG [Thread-12] org.apache.hadoop.security.UserGroupInformation - PrivilegedAction [as: druid (auth:SIMPLE)][action: org.apache.ranger.admin.client.RangerAdminRESTClient$3@28722562]
java.lang.Exception: null
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) ~[?:?]
at org.apache.ranger.admin.client.RangerAdminRESTClient.getServicePoliciesIfUpdated(RangerAdminRESTClient.java:137) ~[?:?]
at org.apache.ranger.plugin.util.PolicyRefresher.loadPolicyfromPolicyAdmin(PolicyRefresher.java:251) ~[?:?]
at org.apache.ranger.plugin.util.PolicyRefresher.loadPolicy(PolicyRefresher.java:191) ~[?:?]
at org.apache.ranger.plugin.util.PolicyRefresher.run(PolicyRefresher.java:161) ~[?:?]
2023-11-28T06:13:35,040 WARN [Thread-12] org.apache.ranger.admin.client.RangerAdminRESTClient - Error getting policies. secureMode=true, user=druid (auth:SIMPLE), response={"httpStatusCode":401,"statusCode":0}, serviceName=druiddev
2023-11-28T06:13:35,040 DEBUG [Thread-12] org.apache.ranger.admin.client.RangerAdminRESTClient - <== RangerAdminRESTClient.getServicePoliciesIfUpdated(-1, 0): null
2023-11-28T06:13:35,040 DEBUG [Thread-12] org.apache.ranger.plugin.util.PolicyRefresher - PolicyRefresher(serviceName=druiddev).run(): no update found. lastKnownVersion=-1
2023-11-28T06:13:35,040 DEBUG [Thread-12] org.apache.ranger.perf.policyengine.init - [PERF] PolicyRefresher.loadPolicyFromPolicyAdmin(serviceName=druiddev): 3
2023-11-28T06:13:35,040 DEBUG [Thread-12] org.apache.ranger.plugin.util.PolicyRefresher - <== PolicyRefresher(serviceName=druiddev).loadPolicyfromPolicyAdmin()
2023-11-28T06:13:35,040 DEBUG [Thread-12] org.apache.ranger.plugin.util.PolicyRefresher - ==> PolicyRefresher(serviceName=druiddev).loadFromCache()
2023-11-28T06:13:35,040 WARN [Thread-12] org.apache.ranger.plugin.util.PolicyRefresher - cache file does not exist or not readable '/etc/ranger/druid/policycache/druid_druiddev.json'
2023-11-28T06:13:35,040 DEBUG [Thread-12] org.apache.ranger.plugin.util.PolicyRefresher - <== PolicyRefresher(serviceName=druiddev).loadFromCache()
2023-11-28T06:13:35,040 DEBUG [Thread-12] org.apache.ranger.perf.policyengine.init - In-Use memory: 106420200, Free memory:937437208
2023-11-28T06:13:35,040 DEBUG [Thread-12] org.apache.ranger.perf.policyengine.init - [PERF] PolicyRefresher.loadPolicy(serviceName=druiddev): 3
2023-11-28T06:13:35,040 DEBUG [Thread-12] org.apache.ranger.plugin.util.PolicyRefresher - <== PolicyRefresher(serviceName=druiddev).loadPolicy()
What could be the issue? Any help would be appreciated.
Gulshan Mishra
11/29/2023, 10:49 AM
{
"error": "druidException",
"errorCode": "notFound",
"persona": "USER",
"category": "NOT_FOUND",
"errorMessage": "Query [query-7a1493c9-8b80-4f67-babd-39f5b314f8888] was not found. The query details are no longer present or might not be of the type [query_controller]. Verify that the id is correct.",
"context": {}
}
Chandni
11/29/2023, 3:31 PM
Ratheesh Kumar
11/29/2023, 5:36 PM
RB
11/30/2023, 5:17 AM
ERROR [CoordinatorRuleManager-Exec--0] org.apache.druid.server.router.CoordinatorRuleManager - Exception while polling for rules
org.apache.druid.java.util.common.IOE: A leader node could not be found for [COORDINATOR] service. Check logs of service [COORDINATOR] to confirm it is healthy.
Utkarsh Chaturvedi
11/30/2023, 10:30 AM
Hagen Rother
11/30/2023, 10:50 AM
useDefaultValueForNull=true
and a query against a segment built with useDefaultValueForNull=false
is executed? Will the result be defaulted where null is stored?
• Is it possible to have useDefaultValueForNull=true
in query but useDefaultValueForNull=false
in intake? (I.e. prepare the migration and start creating new-style segments as early as possible.)
jakubmatyszewski
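For reference, a sketch of how the two null-handling modes differ at query time (my_table and dim are hypothetical names, and this is my understanding rather than a definitive statement of Druid's behavior):

```sql
-- With druid.generic.useDefaultValueForNull=true (legacy mode),
-- null strings are coerced to '' and null numbers to 0, so these
-- two filters select the same rows:
SELECT COUNT(*) FROM my_table WHERE dim = '';
SELECT COUNT(*) FROM my_table WHERE dim IS NULL;

-- With useDefaultValueForNull=false (SQL-compatible mode),
-- '' and NULL are distinct values, so the two counts can differ.
```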
11/30/2023, 12:57 PM
/status/health
) returning 429 error (too many queries).
In my broker config I set:
druid.broker.http.numConnections=10
druid.server.http.numThreads=41
druid.query.scheduler.numThreads=39
I thought that reserving 2 threads (the difference between druid.server.http.numThreads
and druid.query.scheduler.numThreads)
would suffice to ensure health checks pass. I think I might be misunderstanding this. Could anyone explain?
Manjunath Davanam
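For what it's worth, my reading of those settings (a sketch only, values copied from the message above; worth verifying against the Broker configuration docs):

```properties
# Broker runtime.properties (sketch). The Jetty pool serves ALL HTTP
# endpoints, queries and /status/health alike; the query scheduler
# caps how many of those threads queries may occupy.
druid.server.http.numThreads=41       # total Jetty worker threads
druid.query.scheduler.numThreads=39   # max threads granted to queries
# Headroom left for non-query endpoints: 41 - 39 = 2 threads.
```

If the 429s say "too many queries", that sounds like the query scheduler rejecting queries over capacity rather than Jetty running out of threads, in which case the 2-thread headroom may not be the relevant knob.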
11/30/2023, 1:26 PM"timestampSpec": {
"column": "timestamp",
"format": "auto"
}
Julian Reyes
11/30/2023, 2:48 PM
Satish N
11/30/2023, 6:43 PM
Hongan Pan
12/01/2023, 2:57 AM
Manjunath Davanam
12/01/2023, 6:43 AM"timestampSpec": {
"column": "timestamp",
"format": "auto"
}
RB
12/01/2023, 9:17 AM
ERROR [Master-PeonExec--0] org.apache.druid.server.coordinator.loading.HttpLoadQueuePeon - Request[<https://10.240.0.7:8283/druid-internal/v1/segments/changeRequests?timeout=300000>] Failed with status[0]. Reason[null].
org.jboss.netty.channel.ChannelException: Faulty channel in resource pool
at org.apache.druid.java.util.http.client.NettyHttpClient.go(NettyHttpClient.java:134) ~[druid-processing-28.0.0.jar:28.0.0]
at org.apache.druid.server.coordinator.loading.HttpLoadQueuePeon.doSegmentManagement(HttpLoadQueuePeon.java:206) ~[druid-server-28.0.0.jar:28.0.0]
at org.apache.druid.server.coordinator.loading.HttpLoadQueuePeon.lambda$start$0(HttpLoadQueuePeon.java:351) ~[druid-server-28.0.0.jar:28.0.0]
at org.apache.druid.java.util.common.concurrent.ScheduledExecutors$4.run(ScheduledExecutors.java:163) ~[druid-processing-28.0.0.jar:28.0.0]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_392]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_392]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) ~[?:1.8.0_392]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) ~[?:1.8.0_392]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_392]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_392]
at java.lang.Thread.run(Thread.java:750) ~[?:1.8.0_392]
Nir Bar On
12/04/2023, 7:07 AM
vivek kalola
12/04/2023, 9:42 AM
Krishna
12/04/2023, 6:24 PM
"errorMsg": "CannotParseExternalData: Unable to add the row to the frame. Type conversion might be required."
Krishna
12/04/2023, 8:08 PM
Utkarsh Chaturvedi
12/05/2023, 6:16 AM
RB
12/06/2023, 7:27 AM
Quoc Khanh
12/06/2023, 9:23 AM
Monirul Islam
12/06/2023, 11:41 AM
Task: index_parallel_dispatchernginxlogs_pfhjhmif_2023-12-06T11:01:54.608Z
Then I had another task which throws an error like this.
2023-12-06T11:08:12,130 WARN [main] org.apache.druid.indexing.common.config.TaskConfig - Batch processing mode argument value is null or not valid:[null], defaulting to[CLOSED_SEGMENTS]
2023-12-06T11:08:12,678 INFO [main] org.apache.druid.segment.loading.SegmentLocalCacheManager - Using storage location strategy: [LeastBytesUsedStorageLocationSelectorStrategy]
2023-12-06T11:08:12,981 WARN [main] org.eclipse.jetty.server.handler.gzip.GzipHandler - minGzipSize of 0 is inefficient for short content, break even is size 23
2023-12-06T11:08:14,499 INFO [main] org.apache.druid.segment.loading.SegmentLocalCacheManager - Using storage location strategy: [LeastBytesUsedStorageLocationSelectorStrategy]
2023-12-06T11:08:14,508 WARN [task-runner-0-priority-0] org.apache.druid.indexing.common.task.batch.parallel.ParallelIndexSupervisorTask - maxNumConcurrentSubTasks[1] is less than or equal to 1. Running sequentially. Please set maxNumConcurrentSubTasks to something higher than 1 if you want to run in parallel ingestion mode.
2023-12-06T11:08:14,530 WARN [task-runner-0-priority-0] org.apache.druid.indexing.common.task.IndexTask - Chat handler is already registered. Skipping chat handler registration.
2023-12-06T11:08:20,663 ERROR [task-runner-0-priority-0] org.apache.druid.indexing.common.task.IndexTask - Encountered exception in BUILD_SEGMENTS.
java.io.UncheckedIOException: /druiddata/tmp
at org.apache.commons.io.FileUtils.listFiles(FileUtils.java:2135) ~[commons-io-2.11.0.jar:2.11.0]
at org.apache.commons.io.FileUtils.iterateFiles(FileUtils.java:1904) ~[commons-io-2.11.0.jar:2.11.0]
at org.apache.druid.data.input.impl.LocalInputSource.getDirectoryListingIterator(LocalInputSource.java:155) ~[druid-core-24.0.2.jar:24.0.2]
at org.apache.druid.data.input.impl.LocalInputSource.getFileIterator(LocalInputSource.java:132) ~[druid-core-24.0.2.jar:24.0.2]
at org.apache.druid.data.input.impl.LocalInputSource.formattableReader(LocalInputSource.java:201) ~[druid-core-24.0.2.jar:24.0.2]
at org.apache.druid.data.input.AbstractInputSource.reader(AbstractInputSource.java:42) ~[druid-core-24.0.2.jar:24.0.2]
at org.apache.druid.indexing.common.task.AbstractBatchIndexTask.inputSourceReader(AbstractBatchIndexTask.java:216) ~[druid-indexing-service-24.0.2.jar:24.0.2]
at org.apache.druid.indexing.common.task.InputSourceProcessor.process(InputSourceProcessor.java:82) ~[druid-indexing-service-24.0.2.jar:24.0.2]
at org.apache.druid.indexing.common.task.IndexTask.generateAndPublishSegments(IndexTask.java:922) ~[druid-indexing-service-24.0.2.jar:24.0.2]
at org.apache.druid.indexing.common.task.IndexTask.runTask(IndexTask.java:526) ~[druid-indexing-service-24.0.2.jar:24.0.2]
at org.apache.druid.indexing.common.task.AbstractBatchIndexTask.run(AbstractBatchIndexTask.java:187) ~[druid-indexing-service-24.0.2.jar:24.0.2]
at org.apache.druid.indexing.common.task.batch.parallel.ParallelIndexSupervisorTask.runSequential(ParallelIndexSupervisorTask.java:1199) ~[druid-indexing-service-24.0.2.jar:24.0.2]
at org.apache.druid.indexing.common.task.batch.parallel.ParallelIndexSupervisorTask.runTask(ParallelIndexSupervisorTask.java:532) ~[druid-indexing-service-24.0.2.jar:24.0.2]
at org.apache.druid.indexing.common.task.AbstractBatchIndexTask.run(AbstractBatchIndexTask.java:187) ~[druid-indexing-service-24.0.2.jar:24.0.2]
at org.apache.druid.indexing.overlord.SingleTaskBackgroundRunner$SingleTaskBackgroundRunnerCallable.call(SingleTaskBackgroundRunner.java:477) ~[druid-indexing-service-24.0.2.jar:24.0.2]
at org.apache.druid.indexing.overlord.SingleTaskBackgroundRunner$SingleTaskBackgroundRunnerCallable.call(SingleTaskBackgroundRunner.java:449) ~[druid-indexing-service-24.0.2.jar:24.0.2]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_362]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_362]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_362]
at java.lang.Thread.run(Thread.java:750) ~[?:1.8.0_362]
Caused by: java.nio.file.NoSuchFileException: /druiddata/tmp/task/index_parallel_dispatchernginxlogs_pfhjhmif_2023-12-06T11:01:54.608Z
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86) ~[?:1.8.0_362]
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) ~[?:1.8.0_362]
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) ~[?:1.8.0_362]
at sun.nio.fs.UnixFileSystemProvider.newDirectoryStream(UnixFileSystemProvider.java:427) ~[?:1.8.0_362]
at java.nio.file.Files.newDirectoryStream(Files.java:457) ~[?:1.8.0_362]
at java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:300) ~[?:1.8.0_362]
at java.nio.file.FileTreeWalker.next(FileTreeWalker.java:372) ~[?:1.8.0_362]
at java.nio.file.Files.walkFileTree(Files.java:2706) ~[?:1.8.0_362]
at org.apache.commons.io.FileUtils.listAccumulate(FileUtils.java:2076) ~[commons-io-2.11.0.jar:2.11.0]
at org.apache.commons.io.FileUtils.listFiles(FileUtils.java:2132) ~[commons-io-2.11.0.jar:2.11.0]
... 19 more
2023-12-06T11:08:20,688 WARN [task-runner-0-priority-0] org.apache.druid.segment.realtime.firehose.ServiceAnnouncingChatHandlerProvider - handler[index_parallel_adservernginxlogs_dcfgiepc_2023-12-06T11:08:08.093Z] not currently registered, ignoring.
Finished peon task
The error is that it could not find the file belonging to the other datasource's task:
Caused by: java.nio.file.NoSuchFileException: /druiddata/tmp/task/index_parallel_dispatchernginxlogs_pfhjhmif_2023-12-06T11:01:54.608Z
Has anyone run into this kind of issue?
Maytas Monsereenusorn
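A guess, in case it helps: the stack trace shows a local inputSource listing files under /druiddata/tmp, which also holds other tasks' working directories; those are deleted when their tasks finish, so the listing can race with cleanup. Pointing the local inputSource at a dedicated directory outside the task tmp area avoids this (the path below is hypothetical):

```json
{
  "type": "local",
  "baseDir": "/data/ingest/nginxlogs",
  "filter": "*.log"
}
```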
12/07/2023, 6:46 AM
SELECT FLOOR(__time TO DAY) AS event_time,
channel,
SUM(delta) AS change,
RANK() OVER w AS rank_value
FROM wikipedia
WHERE '2016-06-28' > FLOOR(__time TO DAY) > '2016-06-26'
GROUP BY channel, __time
WINDOW w AS (PARTITION BY channel ORDER BY SUM(delta) ASC)
supposed to work? (I'm getting java.lang.NullPointerException.)
Sumanau Sareen
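One observation, hedged: the chained comparison '2016-06-28' > FLOOR(__time TO DAY) > '2016-06-26' is not valid SQL (it parses as a boolean compared to a string) and could itself be tripping the planner. An untested rewrite with the predicates split out, assuming Druid 28's experimental window functions are enabled via the appropriate query context flag:

```sql
SELECT FLOOR(__time TO DAY) AS event_time,
       channel,
       SUM(delta) AS change,
       RANK() OVER w AS rank_value
FROM wikipedia
WHERE FLOOR(__time TO DAY) > TIMESTAMP '2016-06-26'
  AND FLOOR(__time TO DAY) < TIMESTAMP '2016-06-28'
GROUP BY FLOOR(__time TO DAY), channel
WINDOW w AS (PARTITION BY channel ORDER BY SUM(delta) ASC)
```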
12/07/2023, 6:57 AM
[Thu Dec 7 06:16:02 2023] Sending signal[15] to command[middleManager] (timeout 10s).
[Thu Dec 7 06:16:07 2023] Command[middleManager] exited (pid = 898, exited = 143)
[Thu Dec 7 06:16:07 2023] Sending signal[15] to command[broker] (timeout 10s).
[Thu Dec 7 06:16:07 2023] Sending signal[15] to command[router] (timeout 10s).
[Thu Dec 7 06:16:07 2023] Sending signal[15] to command[coordinator-overlord] (timeout 10s).
[Thu Dec 7 06:16:07 2023] Sending signal[15] to command[historical] (timeout 10s).
[Thu Dec 7 06:16:08 2023] Command[router] exited (pid = 895, exited = 143)
[Thu Dec 7 06:16:08 2023] Command[broker] exited (pid = 894, exited = 143)
[Thu Dec 7 06:16:08 2023] Command[coordinator-overlord] exited (pid = 896, exited = 143)
[Thu Dec 7 06:16:08 2023] Command[historical] exited (pid = 897, exited = 143)
[Thu Dec 7 06:16:08 2023] Sending signal[15] to command[zk] (timeout 10s).
[Thu Dec 7 06:16:08 2023] Command[zk] exited (pid = 893, exited = 143)
[Thu Dec 7 06:16:08 2023] Exiting.
CPU and RAM on the machine never went above 80%.
jp
12/07/2023, 8:29 AM
Task failed with status JobStatus(active=null, completedIndexes=null, completionTime=null, conditions=[JobCondition(lastProbeTime=2023-12-06T22:02:40Z, lastTransitionTime=2023-12-06T22:02:40Z, message=Job was active longer than specified deadline, reason=DeadlineExceeded, status=True, type=Failed, additionalProperties={})], failed=1, ready=1, startTime=2023-12-06T18:02:40Z, succeeded=null, uncountedTerminatedPods=UncountedTerminatedPods(failed=[], succeeded=[], additionalProperties={}), additionalProperties={})
Does anyone have an idea?
Uday Singh Matta
12/07/2023, 1:06 PM
SELECT FLOOR(__time to HOUR) AS ___time, Min("content.activeEnergyImportTotal"), Max("content.activeEnergyImportTotal")
, SUM("content.activeEnergyImportTotal"), AVG("content.activeEnergyImportTotal"), deviceID
from "EnergyMeterST501"
where "__time" > '2023-12-06T07:54:24.604Z' AND "deviceID"='2020070041'
GROUP BY 1, "deviceID"
I am attaching the output of the rest; I just need to add the difference part here as well.
Bob Stewart
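If "the difference" means max minus min per hour, one untested sketch extending the query above (same table and column names as in the message):

```sql
SELECT FLOOR(__time TO HOUR) AS ___time,
       "deviceID",
       MIN("content.activeEnergyImportTotal") AS min_energy,
       MAX("content.activeEnergyImportTotal") AS max_energy,
       MAX("content.activeEnergyImportTotal") - MIN("content.activeEnergyImportTotal") AS energy_diff
FROM "EnergyMeterST501"
WHERE "__time" > '2023-12-06T07:54:24.604Z' AND "deviceID" = '2020070041'
GROUP BY 1, "deviceID"
```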
12/07/2023, 7:20 PM
enes topuz
12/08/2023, 7:28 AM
Didip Kerabat
12/08/2023, 6:12 PM
Akhil Gupta
12/09/2023, 12:00 PM
{
"event": [
{
"id": 1,
"name": "John",
"value": 10,
"timestamp": "2022-01-01T00:00:00Z"
},
{
"id": 2,
"name": "Jane",
"value": 20,
"timestamp": "2022-01-01T00:00:00Z"
},
{
"id": 3,
"name": "Bob",
"value": 30,
"timestamp": "2022-01-01T00:00:00Z"
}
]
}
Here the desired rows in the druid data source will look like:

id  name  value  timestamp
1   John  10     2022-01-01T00:00:00Z
2   Jane  20     2022-01-01T00:00:00Z
3   Bob   30     2022-01-01T00:00:00Z
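Since one input record normally maps to one Druid row, a common workaround is to explode the "event" array into newline-delimited JSON before ingestion, so each element becomes its own record for the json inputFormat. A minimal preprocessing sketch (explode_events is a hypothetical helper, not a Druid API):

```python
import json

def explode_events(payload: str) -> list[str]:
    """Split a wrapper object carrying an "event" array into one
    JSON document per element; written out newline-delimited, each
    element then ingests as its own Druid row."""
    doc = json.loads(payload)
    return [json.dumps(row) for row in doc["event"]]

payload = json.dumps({
    "event": [
        {"id": 1, "name": "John", "value": 10, "timestamp": "2022-01-01T00:00:00Z"},
        {"id": 2, "name": "Jane", "value": 20, "timestamp": "2022-01-01T00:00:00Z"},
        {"id": 3, "name": "Bob", "value": 30, "timestamp": "2022-01-01T00:00:00Z"},
    ]
})
rows = explode_events(payload)
print(len(rows))  # 3, one line per desired row
```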