wonderful-spring-3326
03/09/2023, 10:44 AM
rapid-airport-61849
03/09/2023, 11:50 AM
ModuleNotFoundError: No module named 'pyodbc'
? That is quickstart docker compose.
kind-lifeguard-14131
03/09/2023, 2:04 PM
big-ocean-9800
03/09/2023, 9:52 PM
white-horse-97256
03/09/2023, 10:00 PM
port: "9200"
insecure: "true"
useSSL: "true"
skipcheck: "true"
white-horse-97256
03/10/2023, 3:29 AM
Caused by: javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
I am getting this exception for Elasticsearch. I have also included all the keystore/truststore values in the helm config. Can someone please help me with this? I have been struggling with it for a week now!
- name: ELASTICSEARCH_HOST
value: "{{ .Values.global.elasticsearch.host }}"
- name: ELASTICSEARCH_PORT
value: "{{ .Values.global.elasticsearch.port }}"
- name: SKIP_ELASTICSEARCH_CHECK
value: "{{ .Values.global.elasticsearch.skipcheck }}"
{{- with .Values.global.elasticsearch.useSSL }}
- name: ELASTICSEARCH_USE_SSL
value: {{ . | quote }}
{{- end }}
{{- with .Values.global.elasticsearch.auth }}
- name: ELASTICSEARCH_USERNAME
value: {{ .username }}
- name: ELASTICSEARCH_PASSWORD
{{- if .password.value }}
value: {{ .password.value | quote }}
{{- else }}
valueFrom:
secretKeyRef:
name: "{{ .password.secretRef }}"
key: "{{ .password.secretKey }}"
{{- end }}
{{- end }}
- name: ELASTICSEARCH_SSL_PROTOCOL
value: "{{ .Values.elastic.protocol }}"
- name: ELASTICSEARCH_SSL_TRUSTSTORE_FILE
value: "{{ .Values.elastic.truststore }}"
- name: ELASTICSEARCH_SSL_TRUSTSTORE_TYPE
value: "{{ .Values.elastic.trustType }}"
- name: ELASTICSEARCH_SSL_TRUSTSTORE_PASSWORD
valueFrom:
secretKeyRef:
name: {{ .Values.elastic.secretEnv.secretRef }}
key: {{ .Values.elastic.secretEnv.secretKey }}
- name: ELASTICSEARCH_SSL_KEYSTORE_FILE
value: "{{ .Values.elastic.keystore }}"
- name: ELASTICSEARCH_SSL_KEYSTORE_TYPE
value: "{{ .Values.elastic.trustType }}"
- name: ELASTICSEARCH_SSL_KEYSTORE_PASSWORD
valueFrom:
secretKeyRef:
name: {{ .Values.elastic.secretEnv.secretRef }}
key: {{ .Values.elastic.secretEnv.secretKey }}
kind-lifeguard-14131
03/10/2023, 1:40 PM
shy-jackal-85882
03/10/2023, 7:58 PM
>datahub docker quickstart --quickstart-compose-file C:\Users\<username>\.datahub\quickstart\docker-compose-without-neo4j.quickstart.yml
Saved quickstart config to C:\Users\<username>/.datahub/quickstart/quickstart_version_mapping.yaml.
[+] Running 12/12
- Container mysql Running 0.0s
- Container zookeeper Running 0.0s
- Container datahub-upgrade Started 1.0s
- Container mysql-setup Running 0.0s
- Container broker Running 0.0s
- Container elasticsearch Running 0.0s
- Container datahub-gms Running 0.0s
- Container schema-registry Running 0.0s
- Container kafka-setup Started 1.1s
- Container datahub-datahub-actions-1 Running 0.0s
- Container elasticsearch-setup Started 1.0s
- Container datahub-frontend-react Running 0.0s
Could this involve MySQL? I have another process that is using port 3306, so I changed all the 3306 ports in the docker-compose-without-neo4j.quickstart.yml file to 3307.
PS C:\Users\jdunson> Get-NetTCPConnection | where Localport -eq 3306 | select Localport,OwningProcess
Localport OwningProcess
--------- -------------
3306 18956
3306 11160
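As a cross-platform sanity check alongside `Get-NetTCPConnection`, a minimal Python sketch can report whether something is already listening on a local port (3306 here, matching the thread; adjust to whatever port you remapped):

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        # connect_ex returns 0 on a successful TCP connect, i.e. port is busy.
        return s.connect_ex((host, port)) == 0

print(port_in_use(3306))
```

If this prints `True` after the remap, something other than the compose stack still holds the port.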
shy-jackal-85882
03/10/2023, 7:59 PM
handsome-football-66174
03/10/2023, 9:45 PM
2023-03-10 21:19:38.443:WARN:oejsh.ErrorHandler:qtp1125736023-88: Error page too large: 500 javax.servlet.ServletException: org.springframework.web.util.NestedServletException: Async processing failed; nested exception is java.lang.StackOverflowError Request(POST //<hostname>/api/graphql)@e82557e
2023-03-10 21:19:38.444:INFO:oejsh.ErrorHandler:qtp1125736023-88: Disabling showsStacks for ErrorPageErrorHandler@295c6a0c{STARTED}
Exception in thread "gmsEbeanServiceConfig.heartBeat" java.lang.RuntimeException: invalid key or spec in GCM mode
worried-animal-81235
03/12/2023, 11:13 PM
astonishing-dusk-99990
03/13/2023, 10:27 AM
echoing-scientist-29330
03/13/2023, 11:20 AM
Unable to run quickstart - the following issues were detected:
- datahub-gms is running but not yet healthy
Any help would be greatly appreciated.
rich-daybreak-77194
03/13/2023, 1:29 PM
big-plumber-87113
03/13/2023, 7:02 PM
Use make_dataset_urn_with_platform_instance rather than make_dataset_urn when generating urns for dataset entities. Also add Status(removed=False) to your dataset snapshot object, for example:
snapshot = DatasetSnapshot(
urn=dataset_urn,
aspects=[Status(removed=False)],
)
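For context, a stand-alone sketch of the URN shapes those two helpers produce. These are hedged stand-ins, not the SDK code (the real helpers live in the DataHub Python SDK's builder module), and the instance-prefix behavior shown is my reading of the dataset URN convention:

```python
# Hypothetical stand-ins illustrating dataset URN shapes; the real helpers
# are provided by the DataHub ingestion SDK.
def make_dataset_urn(platform: str, name: str, env: str = "PROD") -> str:
    return f"urn:li:dataset:(urn:li:dataPlatform:{platform},{name},{env})"

def make_dataset_urn_with_platform_instance(
    platform: str, name: str, platform_instance: str, env: str = "PROD"
) -> str:
    # The platform instance is folded into the name portion of the urn,
    # which keeps datasets from different instances of the same platform distinct.
    return make_dataset_urn(platform, f"{platform_instance}.{name}", env)

print(make_dataset_urn_with_platform_instance("mysql", "mydb.mytable", "prod_eu"))
# → urn:li:dataset:(urn:li:dataPlatform:mysql,prod_eu.mydb.mytable,PROD)
```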
melodic-ambulance-87164
03/13/2023, 7:44 PM
damp-dentist-81742
03/14/2023, 9:49 AM
agreeable-belgium-70840
03/14/2023, 12:21 PM
2023-03-14 12:14:56,367 [pool-11-thread-1] ERROR c.l.m.s.e.query.ESSearchDAO:61 - Search query failed
org.elasticsearch.ElasticsearchStatusException: Elasticsearch exception [type=index_not_found_exception, reason=no such index [datahubpolicyindex_v2]]
at org.elasticsearch.rest.BytesRestResponse.errorFromXContent(BytesRestResponse.java:187)
at org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:1911)
at org.elasticsearch.client.RestHighLevelClient.parseResponseException(RestHighLevelClient.java:1888)
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1645)
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1602)
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1572)
at org.elasticsearch.client.RestHighLevelClient.search(RestHighLevelClient.java:1088)
at com.linkedin.metadata.search.elasticsearch.query.ESSearchDAO.executeAndExtract(ESSearchDAO.java:57)
at com.linkedin.metadata.search.elasticsearch.query.ESSearchDAO.search(ESSearchDAO.java:90)
at com.linkedin.metadata.search.elasticsearch.ElasticSearchService.fullTextSearch(ElasticSearchService.java:111)
at com.linkedin.metadata.client.JavaEntityClient.search(JavaEntityClient.java:312)
at com.datahub.authorization.PolicyFetcher.fetchPolicies(PolicyFetcher.java:50)
at com.datahub.authorization.PolicyFetcher.fetchPolicies(PolicyFetcher.java:42)
at com.datahub.authorization.DataHubAuthorizer$PolicyRefreshRunnable.run(DataHubAuthorizer.java:223)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Suppressed: org.elasticsearch.client.ResponseException: method [POST], host [<https://vpc-awie-es-dataeng-dh-01-r3us77zzpucodobuxpbyicgmgu.eu-west-1.es.amazonaws.com:443>], URI [/datahubpolicyindex_v2/_search?typed_keys=true&max_concurrent_shard_requests=5&ignore_unavailable=false&expand_wildcards=open&allow_no_indices=true&ignore_throttled=true&search_type=query_then_fetch&batched_reduce_size=512&ccs_minimize_roundtrips=true], status line [HTTP/1.1 404 Not Found]
{"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no such index [datahubpolicyindex_v2]","resource.type":"index_or_alias","resource.id":"datahubpolicyindex_v2","index_uuid":"_na_","index":"datahubpolicyindex_v2"}],"type":"index_not_found_exception","reason":"no such index [datahubpolicyindex_v2]","resource.type":"index_or_alias","resource.id":"datahubpolicyindex_v2","index_uuid":"_na_","index":"datahubpolicyindex_v2"},"status":404}
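One way to confirm from the client side whether the setup job ever created the index is a HEAD request against it (stdlib-only sketch; the endpoint URL and any auth headers are yours to substitute, and AWS OpenSearch endpoints may additionally require request signing):

```python
import urllib.error
import urllib.request

def index_exists(es_url: str, index: str) -> bool:
    """HEAD <es_url>/<index>: Elasticsearch answers 200 if the index exists, 404 if not."""
    req = urllib.request.Request(f"{es_url}/{index}", method="HEAD")
    try:
        with urllib.request.urlopen(req):
            return True
    except urllib.error.HTTPError as exc:
        if exc.code == 404:
            return False
        raise

# Substitute your cluster endpoint from the log:
# print(index_exists("https://<your-es-endpoint>:443", "datahubpolicyindex_v2"))
```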
I ran the elasticsearch-setup-job. Is there any special parameter needed there? Why are the indexes missing?
agreeable-belgium-70840
03/14/2023, 1:03 PM
org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer$ListenerConsumerRebalanceListener failed on invocation of onPartitionsAssigned for partitions [DataHubUpgradeHistory_v1-0]
java.lang.IllegalArgumentException: seek offset must not be a negative number
at org.apache.kafka.clients.consumer.KafkaConsumer.seek(KafkaConsumer.java:1599)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer$InitialOrIdleSeekCallback.seek(KafkaMessageListenerContainer.java:3075)
at com.linkedin.metadata.kafka.boot.DataHubUpgradeKafkaListener.lambda$onPartitionsAssigned$1(DataHubUpgradeKafkaListener.java:70)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:177)
at java.base/java.util.HashMap$EntrySpliterator.forEachRemaining(HashMap.java:1764)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:497)
at com.linkedin.metadata.kafka.boot.DataHubUpgradeKafkaListener.onPartitionsAssigned(DataHubUpgradeKafkaListener.java:69)
at org.springframework.kafka.listener.adapter.MessagingMessageListenerAdapter.onPartitionsAssigned(MessagingMessageListenerAdapter.java:302)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.seekPartitions(KafkaMessageListenerContainer.java:1127)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.access$3800(KafkaMessageListenerContainer.java:518)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer$ListenerConsumerRebalanceListener.onPartitionsAssigned(KafkaMessageListenerContainer.java:2968)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.invokePartitionsAssigned(ConsumerCoordinator.java:278)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:419)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:439)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:358)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:490)
at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1275)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1241)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1216)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doPoll(KafkaMessageListenerContainer.java:1414)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:1251)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:1163)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
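One plausible mechanism, offered as an assumption rather than a reading of the DataHub source: if the listener seeks to `endOffset - 1` on the `DataHubUpgradeHistory_v1` topic to replay the last message, an empty partition (`endOffset == 0`) yields a target of -1, which `KafkaConsumer.seek` rejects. A sketch of the guard that avoids this:

```python
def safe_seek_offset(end_offset: int, lookback: int = 1) -> int:
    """Clamp a 'seek back from the end' target so it never goes negative.

    Kafka's Consumer.seek raises IllegalArgumentException for negative offsets,
    so an empty partition (end_offset == 0) must map to offset 0, not -1.
    """
    return max(0, end_offset - lookback)

print(safe_seek_offset(0))    # empty partition → 0
print(safe_seek_offset(42))   # → 41
```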
In my understanding, that means the application is trying to seek messages at a negative offset. Is this the error? Why is this happening? Any ideas?
thousands-printer-59538
03/14/2023, 1:09 PM
wide-optician-47025
03/14/2023, 2:35 PM