rough-flag-51828
08/17/2022, 3:12 PM
miniature-ram-76637
08/17/2022, 3:33 PM
Warning error 25m (x8 over 25m) helm-controller Helm install failed: template: datahub/charts/datahub-mce-consumer/templates/deployment.yaml:61:27: executing "datahub/charts/datahub-mce-consumer/templates/deployment.yaml" at <.Values.global.datahub.monitoring.enablePrometheus>: nil pointer evaluating interface {}.enablePrometheus
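The nil pointer means the template dereferences .Values.global.datahub.monitoring but that block is missing from the supplied values. A sketch of the values.yaml section the template appears to expect (key path inferred from the error message; the default shown is an assumption):

```yaml
# values.yaml — sketch only; key path taken from the failing template expression
global:
  datahub:
    monitoring:
      enablePrometheus: false   # set true to expose Prometheus metrics
```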
alert-coat-46957
08/18/2022, 12:15 AM
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.9/site-packages/datahub/emitter/rest_emitter.py", line 241, in _emit_generic
response.raise_for_status()
File "/home/airflow/.local/lib/python3.9/site-packages/requests/models.py", line 960, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: <https://xxxx.company.com:8080/entities?action=ingest>
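A 401 on /entities?action=ingest typically means the GMS has metadata-service authentication enabled and the emitter sent no (or an expired) token. A sketch of the recipe's sink section with a token supplied (server value and token are placeholders):

```yaml
sink:
  type: datahub-rest
  config:
    server: "https://xxxx.company.com:8080"
    token: "<personal-access-token>"   # generated under Settings > Access Tokens in the UI
```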
Similarly, for Looker we are getting the error below. Any thoughts?
datahub ingest run -c /tmp/datahub/ingest/18ded50c-08cb-4133-af8c-8b7c669b459e/recipe.yml
[2022-08-17 22:46:27,610] INFO {datahub.cli.ingest_cli:99} - DataHub CLI version: 0.8.41
[2022-08-17 22:46:27,638] INFO {datahub.ingestion.run.pipeline:160} - Sink configured successfully. DataHubRestEmitter: configured to talk to <http://xxx-xxx-datahub-datahub-gms:8080>
[2022-08-17 22:50:28,272] ERROR {datahub.ingestion.run.pipeline:126} - Failed to initialize Looker client. Please check your configuration.
[2022-08-17 22:50:28,273] INFO {datahub.cli.ingest_cli:115} - Starting metadata ingestion
[2022-08-17 22:50:28,273] INFO {datahub.cli.ingest_cli:133} - Finished metadata pipeline
Failed to configure source (looker) due to Failed to initialize Looker client. Please check your configuration.
❗Client-Server Incompatible❗ Your client version 0.8.41 is older than your server version 0.8.42. Upgrading the cli to 0.8.42 is recommended.
➡️ Upgrade via "pip install 'acryl-datahub==0.8.42'"
numerous-camera-74294
08/18/2022, 10:11 AM
happy-island-35913
08/21/2022, 7:36 AM
sparse-forest-98608
08/22/2022, 7:13 AM
steep-vr-39297
08/22/2022, 8:35 AM
datahub-gms pod.
[main] ERROR o.s.web.context.ContextLoader:313 - Context initialization failed
org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'recommendationServiceFactory': Unsatisfied dependency expressed through field 'topPlatformsCandidateSource'; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'topPlatformsCandidateSourceFactory': Unsatisfied dependency expressed through field 'entityService'; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'entityAspectDao' defined in com.linkedin.gms.factory.entity.EntityAspectDaoFactory: Unsatisfied dependency expressed through method 'createEbeanInstance' parameter 0; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'ebeanServer' defined in com.linkedin.gms.factory.entity.EbeanServerFactory: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [io.ebean.EbeanServer]: Factory method 'createServer' threw exception; nested exception is java.lang.NullPointerException
at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.resolveFieldValue(AutowiredAnnotationBeanPostProcessor.java:659)
at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.inject(AutowiredAnnotationBeanPostProcessor.java:639)
at org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:119)
at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessProperties(AutowiredAnnotationBeanPostProcessor.java:399)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1431)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:619)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:542)
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:335)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:333)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:208)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:953)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:918)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:583)
at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:401)
at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:292)
at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:103)
at org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:1073)
at org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:572)
at org.eclipse.jetty.server.handler.ContextHandler.contextInitialized(ContextHandler.java:1002)
at org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:746)
at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:379)
at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1449)
at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1414)
at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:916)
at org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:288)
at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:524)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:97)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:97)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
at org.eclipse.jetty.server.Server.start(Server.java:423)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:110)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:97)
at org.eclipse.jetty.server.Server.doStart(Server.java:387)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
at org.eclipse.jetty.runner.Runner.run(Runner.java:519)
at org.eclipse.jetty.runner.Runner.main(Runner.java:564)
Caused by: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'topPlatformsCandidateSourceFactory': Unsatisfied dependency expressed through field 'entityService'; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'entityAspectDao' defined in com.linkedin.gms.factory.entity.EntityAspectDaoFactory: Unsatisfied dependency expressed through method 'createEbeanInstance' parameter 0; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'ebeanServer' defined in com.linkedin.gms.factory.entity.EbeanServerFactory: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [io.ebean.EbeanServer]: Factory method 'createServer' threw exception; nested exception is java.lang.NullPointerException
.................................
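For what it's worth, a NullPointerException out of EbeanServerFactory.createServer usually means the SQL datasource settings never reached the GMS container, rather than anything in the recommendation beans named first in the trace. A hedged sketch of the environment variables involved (names as used by the stock datahub-gms images; all values are placeholders to verify against your deployment):

```
# datahub-gms container environment — placeholders only
EBEAN_DATASOURCE_URL=jdbc:mysql://mysql:3306/datahub
EBEAN_DATASOURCE_USERNAME=datahub
EBEAN_DATASOURCE_PASSWORD=datahub
EBEAN_DATASOURCE_DRIVER=com.mysql.jdbc.Driver
```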
The setup job completed normally.
I don't know what the problem is.
Help me.
bland-orange-13353
08/23/2022, 11:04 AM
faint-translator-23365
08/23/2022, 4:47 PM
full-chef-85630
08/24/2022, 2:00 AM
great-branch-515
08/24/2022, 7:51 AM
numerous-camera-74294
08/24/2022, 1:06 PM
datahub delete --hard --urn "...", but it is still being listed in the frontend.
colossal-sandwich-50049
08/24/2022, 9:35 PM
WARNING: AWS Glue Schema Registry DOES NOT have a python SDK. As such, python based libraries like ingestion or datahub-actions (UI ingestion) is not supported when using AWS Glue Schema Registry
https://datahubproject.io/docs/deploy/aws/#aws-glue-schema-registry
cc: @great-toddler-2251
lemon-engine-23512
08/25/2022, 7:51 AM
full-chef-85630
08/25/2022, 8:22 AM
brash-rainbow-94208
08/25/2022, 10:17 AM
silly-finland-62382
08/25/2022, 5:34 PM
able-evening-90828
08/25/2022, 7:38 PM
valueFrom:
secretKeyRef:
name: {{ .Values.oidcAuthentication.clientSecret.secretRef }}
key: {{ .Values.oidcAuthentication.clientSecret.secretKey }}
I am new to Helm; could someone experienced confirm that this is the right direction, or is there an alternative that doesn't require changing the deployment template?
late-insurance-69310
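For context, a sketch of how the snippet above would sit as a complete env entry in the frontend deployment template (AUTH_OIDC_CLIENT_SECRET is assumed as the target variable name; the .Values paths are the ones from the snippet):

```yaml
env:
  - name: AUTH_OIDC_CLIENT_SECRET
    valueFrom:
      secretKeyRef:
        name: {{ .Values.oidcAuthentication.clientSecret.secretRef }}
        key: {{ .Values.oidcAuthentication.clientSecret.secretKey }}
```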
08/25/2022, 9:37 PM
cuddly-arm-8412
08/26/2022, 1:37 AM
full-chef-85630
08/26/2022, 2:48 AM
from datahub.ingestion.run import pipeline
from datahub.configuration.common import DynamicTypedConfig
config = pipeline.PipelineConfig(
source=pipeline.SourceConfig(
type="mysql",
config={
"username": "datahub",
"password": "datahub",
"database": "action_airflow",
"host_port": "10.196.48.76:3306",
"include_views": False,
"include_tables": True,
"table_pattern": {
"allow": ["action_airflow.*"]
},
"schema_pattern": {
"allow": ["action_airflow.*"]
},
"profiling": {
"enabled": True
}
}
),
sink=DynamicTypedConfig(
type="datahub-rest",
config={
"server": "<http://xxxx>",
"token": "xxxx"
}
)
)
pip = pipeline.Pipeline(config=config)
pip.run()
/home/shdzh/.local/lib/python3.9/site-packages/datahub/utilities/sqlalchemy_query_combiner.py:321: SADeprecationWarning: The Select.append_from() method is deprecated and will be removed in a future release. Use the generative method Select.select_from(). (deprecated since: 1.4)
combined_query.append_from(cte)
Failed to execute queue using combiner: (pymysql.err.ProgrammingError) (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'xpbuxnmrasxaqeaz AS \n(SELECT count(*) AS count_1 \nFROM action_airflow.creator_ch' at line 1")
[SQL: WITH xpbuxnmrasxaqeaz AS
(SELECT count(*) AS count_1
FROM action_airflow.creator_channels)
SELECT xpbuxnmrasxaqeaz.count_1
FROM xpbuxnmrasxaqeaz]
(Background on this error at: <https://sqlalche.me/e/14/f405>)
Traceback (most recent call last):
File "/home/shdzh/.local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context
self.dialect.do_execute(
File "/home/shdzh/.local/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 732, in do_execute
cursor.execute(statement, parameters)
File "/home/shdzh/.local/lib/python3.9/site-packages/pymysql/cursors.py", line 148, in execute
result = self._query(query)
File "/home/shdzh/.local/lib/python3.9/site-packages/pymysql/cursors.py", line 310, in _query
conn.query(q)
File "/home/shdzh/.local/lib/python3.9/site-packages/pymysql/connections.py", line 548, in query
self._affected_rows = self._read_query_result(unbuffered=unbuffered)
File "/home/shdzh/.local/lib/python3.9/site-packages/pymysql/connections.py", line 775, in _read_query_result
result.read()
File "/home/shdzh/.local/lib/python3.9/site-packages/pymysql/connections.py", line 1156, in read
first_packet = self.connection._read_packet()
File "/home/shdzh/.local/lib/python3.9/site-packages/pymysql/connections.py", line 725, in _read_packet
packet.raise_for_error()
File "/home/shdzh/.local/lib/python3.9/site-packages/pymysql/protocol.py", line 221, in raise_for_error
err.raise_mysql_exception(self._data)
File "/home/shdzh/.local/lib/python3.9/site-packages/pymysql/err.py", line 143, in raise_mysql_exception
raise errorclass(errno, errval)
pymysql.err.ProgrammingError: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'xpbuxnmrasxaqeaz AS \n(SELECT count(*) AS count_1 \nFROM action_airflow.creator_ch' at line 1")
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/shdzh/.local/lib/python3.9/site-packages/datahub/utilities/sqlalchemy_query_combiner.py", line 383, in flush
self._execute_queue(main_greenlet)
File "/home/shdzh/.local/lib/python3.9/site-packages/datahub/utilities/sqlalchemy_query_combiner.py", line 325, in _execute_queue
sa_res = _sa_execute_underlying_method(queue_item.conn, combined_query)
File "/home/shdzh/.local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1380, in execute
return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)
File "/home/shdzh/.local/lib/python3.9/site-packages/sqlalchemy/sql/elements.py", line 333, in _execute_on_connection
return connection._execute_clauseelement(
File "/home/shdzh/.local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1572, in _execute_clauseelement
ret = self._execute_context(
File "/home/shdzh/.local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1943, in _execute_context
self._handle_dbapi_exception(
File "/home/shdzh/.local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 2124, in _handle_dbapi_exception
util.raise_(
File "/home/shdzh/.local/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 208, in raise_
raise exception
File "/home/shdzh/.local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context
self.dialect.do_execute(
File "/home/shdzh/.local/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 732, in do_execute
cursor.execute(statement, parameters)
File "/home/shdzh/.local/lib/python3.9/site-packages/pymysql/cursors.py", line 148, in execute
result = self._query(query)
File "/home/shdzh/.local/lib/python3.9/site-packages/pymysql/cursors.py", line 310, in _query
conn.query(q)
File "/home/shdzh/.local/lib/python3.9/site-packages/pymysql/connections.py", line 548, in query
self._affected_rows = self._read_query_result(unbuffered=unbuffered)
File "/home/shdzh/.local/lib/python3.9/site-packages/pymysql/connections.py", line 775, in _read_query_result
result.read()
File "/home/shdzh/.local/lib/python3.9/site-packages/pymysql/connections.py", line 1156, in read
first_packet = self.connection._read_packet()
File "/home/shdzh/.local/lib/python3.9/site-packages/pymysql/connections.py", line 725, in _read_packet
packet.raise_for_error()
File "/home/shdzh/.local/lib/python3.9/site-packages/pymysql/protocol.py", line 221, in raise_for_error
err.raise_mysql_exception(self._data)
File "/home/shdzh/.local/lib/python3.9/site-packages/pymysql/err.py", line 143, in raise_mysql_exception
raise errorclass(errno, errval)
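The failing statement is a WITH ... CTE, which MySQL only supports from 8.0; on older servers the profiler's combined queries fail exactly like this. One workaround sketch, assuming the profiling config in this CLI version exposes a `query_combiner_enabled` switch (verify against your version before relying on it):

```python
# Hedged workaround sketch: disable the SQLAlchemy query combiner so
# profiling issues plain SELECTs instead of generating WITH ... CTEs,
# which MySQL < 8.0 rejects with error 1064.
# ("query_combiner_enabled" is an assumed option name.)
profiling_config = {
    "enabled": True,
    "query_combiner_enabled": False,  # avoid CTE generation on MySQL < 8.0
}
```

This dict would replace the `"profiling"` section of the source config shown above.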
full-chef-85630
08/26/2022, 7:30 AM
thousands-solstice-2498
08/26/2022, 10:38 AM
better-fireman-33387
08/28/2022, 1:09 PM
better-orange-49102
08/29/2022, 11:58 AM
great-branch-515
08/29/2022, 6:15 PM
2022/08/29 18:07:17 Waiting for: https://<redacted>:443
2022/08/29 18:07:17 Received 200 from https://<redacted>:443
datahub_usage_event_policy exists
creating datahub_usage_event_index_template
{
"index_patterns": ["*datahub_usage_event*"],
"data_stream": { },
"priority": 500,
"template": {
"mappings": {
"properties": {
"@timestamp": {
"type": "date"
},
"type": {
"type": "keyword"
},
"timestamp": {
"type": "date"
},
"userAgent": {
"type": "keyword"
},
"browserId": {
"type": "keyword"
}
}
},
"settings": {
"index.lifecycle.name": "datahub_usage_event_policy"
}
}
2022/08/29 18:07:17 Command finished successfully.
{"error":{"root_cause":[{"type":"invalid_index_template_exception","reason":"index_template [datahub_usage_event_index_template] invalid, cause [Validation Failed: 1: unknown setting [index.lifecycle.name] please check that any required plugins are installed, or check the breaking changes documentation for removed settings;]"}],"type":"invalid_index_template_exception","reason":"index_template [datahub_usage_event_index_template] invalid, cause [Validation Failed: 1: unknown setting [index.lifecycle.name] please check that any required plugins are installed, or check the breaking changes documentation for removed settings;]"},"status":400}
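For reference, index.lifecycle.name is an Elasticsearch ILM-only setting; if the cluster behind the endpoint is OpenSearch (common with AWS managed domains), it is rejected as an unknown setting. A sketch of the template's settings block with the ILM key removed:

```json
"settings": { }
```

On OpenSearch, an equivalent lifecycle policy would instead be attached separately through the ISM plugin APIs rather than inline in the template.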
2. The Kafka setup job is failing with these errors. We are using TLS endpoints for the MSK bootstrap servers.
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 6.1.4-ccs
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: c9124241a6ff43bc
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1661796471985
[kafka-admin-client-thread | adminclient-1] INFO org.apache.kafka.common.utils.AppInfoParser - App info kafka.admin.client for adminclient-1 unregistered
[kafka-admin-client-thread | adminclient-1] INFO org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=adminclient-1] Metadata update failed
[main] ERROR io.confluent.admin.utils.ClusterStatus - Error while getting broker list.
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Call(callName=listNodes, deadlineMs=1661796532063, tries=1, nextAllowedTryMs=-9223372036854775709) timed out at 9223372036854775807 after 1 attempt(s)
at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
at io.confluent.admin.utils.ClusterStatus.isKafkaReady(ClusterStatus.java:149)
at io.confluent.admin.utils.cli.KafkaReadyCommand.main(KafkaReadyCommand.java:150)
Caused by: org.apache.kafka.common.errors.TimeoutException: Call(callName=listNodes, deadlineMs=1661796532063, tries=1, nextAllowedTryMs=-9223372036854775709) timed out at 9223372036854775807 after 1 attempt(s)
Caused by: org.apache.kafka.common.errors.TimeoutException: The AdminClient thread has exited. Call: listNodes
org.apache.kafka.common.errors.TimeoutException: Call(callName=fetchMetadata, deadlineMs=1661796502062, tries=1, nextAllowedTryMs=-9223372036854775709) timed out at 9223372036854775807 after 1 attempt(s)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting to send the call. Call: fetchMetadata
[kafka-admin-client-thread | adminclient-1] INFO org.apache.kafka.common.metrics.Metrics - Metrics scheduler closed
[kafka-admin-client-thread | adminclient-1] INFO org.apache.kafka.common.metrics.Metrics - Closing reporter org.apache.kafka.common.metrics.JmxReporter
[kafka-admin-client-thread | adminclient-1] INFO org.apache.kafka.common.metrics.Metrics - Metrics reporters closed
[kafka-admin-client-thread | adminclient-1] ERROR org.apache.kafka.common.utils.KafkaThread - Uncaught exception in thread 'kafka-admin-client-thread | adminclient-1':
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
at org.apache.kafka.common.memory.MemoryPool$1.tryAllocate(MemoryPool.java:30)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:113)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:447)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:397)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576)
at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:563)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.processRequests(KafkaAdminClient.java:1329)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1260)
at java.lang.Thread.run(Thread.java:750)
[main] INFO io.confluent.admin.utils.ClusterStatus - Expected 1 brokers but found only 0. Trying to query Kafka for metadata again ...
[main] ERROR io.confluent.admin.utils.ClusterStatus - Error while getting broker list.
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Call(callName=listNodes, deadlineMs=1661796532062, tries=1, nextAllowedTryMs=-9223372036854775709) timed out at 9223372036854775807 after 1 attempt(s)
at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
at io.confluent.admin.utils.ClusterStatus.isKafkaReady(ClusterStatus.java:149)
at io.confluent.admin.utils.cli.KafkaReadyCommand.main(KafkaReadyCommand.java:150)
Caused by: org.apache.kafka.common.errors.TimeoutException: Call(callName=listNodes, deadlineMs=1661796532062, tries=1, nextAllowedTryMs=-9223372036854775709) timed out at 9223372036854775807 after 1 attempt(s)
Caused by: org.apache.kafka.common.errors.TimeoutException: The AdminClient thread has exited.
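Timeouts like these against MSK TLS listeners (typically port 9094) often mean the admin client is still speaking PLAINTEXT. A hedged sketch of the kafka-setup job env override (the KAFKA_PROPERTIES_* naming convention is assumed to map onto client properties in this image; verify against your image version):

```yaml
# kafka-setup job extra env — variable name is an assumption
- name: KAFKA_PROPERTIES_SECURITY_PROTOCOL
  value: "SSL"
```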
Can someone help with the above errors?
victorious-xylophone-76105
08/29/2022, 8:32 PM
datahub-frontend-react
getting:
=> [base 2/3] RUN addgroup -S datahub && adduser -S datahub -G datahub 2.5s
=> [base 3/3] RUN apk --no-cache --update-cache --available upgrade && apk --no-cache add curl openjdk8-jre 35.7s
=> [prod-build 2/4] RUN apk --no-cache --update-cache --available upgrade && apk --no-cache add perl openjdk8 12.2s
=> [prod-build 3/4] COPY . datahub-src 36.7s
=> ERROR [prod-build 4/4] RUN cd datahub-src && ./gradlew :datahub-web-react:build -x test -x yarnTest -x yarnLint && ./gradlew :datahub-frontend:dist -P 145.2s
------
> [prod-build 4/4] RUN cd datahub-src && ./gradlew :datahub-web-react:build -x test -x yarnTest -x yarnLint && ./gradlew :datahub-frontend:dist -PenableEmber=false -PuseSystemNode=true -x test -x yarnTest -x yarnLint && cp datahub-frontend/build/distributions/datahub-frontend.zip ../datahub-frontend.zip && cd .. && rm -rf datahub-src && unzip datahub-frontend.zip:
#12 0.753 Downloading <https://services.gradle.org/distributions/gradle-6.9.2-bin.zip>
#12 1.874 ......................................................................................................
#12 7.578
#12 7.578 Welcome to Gradle 6.9.2!
...
#12 105.4 > Configure project :smoke-test
#12 105.4 Root directory: /datahub-src
#12 140.3
#12 140.3 > Task :datahub-web-react:distTar NO-SOURCE
#12 142.2 > Task :datahub-web-react:nodeSetup FAILED
#12 142.2
#12 142.2 FAILURE: Build failed with an exception.
#12 142.2 * What went wrong:
#12 142.2 Execution failed for task ':datahub-web-react:nodeSetup'.
#12 142.2 > Could not resolve all files for configuration ':datahub-web-react:detachedConfiguration1'.
#12 142.2 > Could not find org.nodejs:node:16.8.0.
#12 142.2 Searched in the following locations:
#12 142.2 - <https://plugins.gradle.org/m2/org/nodejs/node/16.8.0/node-16.8.0.pom>
#12 142.2 - file:/root/.m2/repository/org/nodejs/node/16.8.0/node-16.8.0.pom
#12 142.2 - <https://repo.maven.apache.org/maven2/org/nodejs/node/16.8.0/node-16.8.0.pom>
#12 142.2 - <https://packages.confluent.io/maven/org/nodejs/node/16.8.0/node-16.8.0.pom>
#12 142.2 - <https://linkedin.jfrog.io/artifactory/open-source/org/nodejs/node/16.8.0/node-16.8.0.pom>
#12 142.2 - <https://nodejs.org/dist/v16.8.0/node-v16.8.0-linux-aarch64.tar.gz>
#12 142.2 Required by:
#12 142.2 project :datahub-web-react
Is that version of Node.js no longer available? If so, where should I change it?
Thanks!
great-branch-515
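The last URL tried is the linux-aarch64 tarball, so this looks like an ARM build host; nodejs.org publishes arm builds under a different artifact name than the one searched for, which older gradle-node-plugin versions can mishandle. The Node version itself is pinned in the React module's Gradle config; a hedged sketch of where to change it (file path and block name assumed from the gradle-node-plugin convention):

```groovy
// datahub-web-react/build.gradle (assumed location)
node {
  version = '16.13.0'   // bump from 16.8.0 to a version whose artifacts exist for your arch
  download = true
}
```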
08/30/2022, 7:46 AM
better-fireman-33387
08/30/2022, 8:40 AM
ERROR io.confluent.admin.utils.ClusterStatus - Error while getting broker list.
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Call(callName=listNodes, deadlineMs=1661848715277, tries=1, nextAllowedTryMs=1661848715378) timed out at 1661848715278 after 1 attempt(s)
at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
at io.confluent.admin.utils.ClusterStatus.isKafkaReady(ClusterStatus.java:149)
at io.confluent.admin.utils.cli.KafkaReadyCommand.main(KafkaReadyCommand.java:150)
Caused by: org.apache.kafka.common.errors.TimeoutException: Call(callName=listNodes, deadlineMs=1661848715277, tries=1, nextAllowedTryMs=1661848715378) timed out at 1661848715278 after 1 attempt(s)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
[main] INFO io.confluent.admin.utils.ClusterStatus - Expected 1 brokers but found only 0. Trying to query Kafka for metadata again ...
[main] ERROR io.confluent.admin.utils.ClusterStatus - Expected 1 brokers but found only 0. Brokers found [].
Can anyone assist, please?
better-fireman-33387
08/30/2022, 11:14 AM
Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Ingress.spec.rules[0].http): missing required field "paths" in io.k8s.api.networking.v1.HTTPIngressRuleValue
Can anyone help, please?
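In the networking.k8s.io/v1 Ingress schema, every http rule requires a paths array. A minimal sketch of the shape the validator expects (host, service name, and port are placeholders):

```yaml
spec:
  rules:
    - host: datahub.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: datahub-datahub-frontend
                port:
                  number: 9002
```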