# all-things-deployment
p
```
06:33:37.977 [pool-12-thread-1] INFO  c.l.m.filter.RestliLoggingFilter:55 - GET /entitiesV2?ids=List(urn%3Ali%3Acorpuser%3Adatahub) - batchGet - 200 - 7ms
06:33:38.059 [I/O dispatcher 1] INFO  c.l.m.k.e.ElasticsearchConnector:41 - Successfully feeded bulk request. Number of events: 1 Took time ms: -1
06:33:41.636 [I/O dispatcher 1] ERROR c.l.m.s.e.update.BulkListener:25 - Failed to feed bulk request. Number of events: 1 Took time ms: -1 Message: failure in bulk execution:
[0]: index [datahubexecutionrequestindex_v2], type [_doc], id [urn%3Ali%3AdataHubExecutionRequest%3A79ec91eb-af3b-4baa-87b4-ade8f202dfce], message [[datahubexecutionrequestindex_v2/wfhCmJ_jR0e48O5ItrryJA][[datahubexecutionrequestindex_v2][0]] ElasticsearchException[Elasticsearch exception [type=document_missing_exception, reason=[_doc][urn%3Ali%3AdataHubExecutionRequest%3A79ec91eb-af3b-4baa-87b4-ade8f202dfce]: document missing]]]
06:33:42.648 [I/O dispatcher 1] INFO  c.l.m.s.e.update.BulkListener:28 - Successfully fed bulk request. Number of events: 4 Took time ms: -1
06:33:45.035 [Thread-283] WARN  c.l.m.s.e.q.r.SearchRequestHandler:444 - Found invalid filter field for entity search. Invalid or unrecognized facet ingestionSource
06:33:51.157 [Thread-286] WARN  c.l.m.s.e.q.r.SearchRequestHandler:444 - Found invalid filter field for entity search. Invalid or unrecognized facet ingestionSource
```
Hi team, can someone help me with this error? I'm getting it when doing UI ingestion after setting up DataHub in Kubernetes.
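The `document_missing_exception` above means Elasticsearch received a partial update for a document ID that does not exist in `datahubexecutionrequestindex_v2`. A minimal sketch for checking whether that document is there, assuming the Elasticsearch service is called `elasticsearch-master` in the same namespace and listens on port 9200 (both are assumptions; adjust to your deployment):
```
# Port-forward Elasticsearch locally (service name is an assumption; check with `kubectl get svc`)
kubectl port-forward svc/elasticsearch-master 9200:9200 &

# List the DataHub indices and their document counts
curl -s 'http://localhost:9200/_cat/indices/*index_v2?v'

# Look up the document the bulk update complained about.
# The ID in the log is already URL-encoded, so it has to be encoded
# once more (% -> %25) when used in the request path.
curl -s 'http://localhost:9200/datahubexecutionrequestindex_v2/_doc/urn%253Ali%253AdataHubExecutionRequest%253A79ec91eb-af3b-4baa-87b4-ade8f202dfce'
```
If the document is genuinely missing, the failed bulk update is usually a symptom of the index and the primary store being out of sync rather than the root cause.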
s
Details missing:
• How did you deploy in K8s? Helm charts? If yes, please add your helm values.yaml file here after masking all secrets.
• Which cloud provider?
• What version of the helm chart is being used?
• What error is this causing for you in the UI?
• What version of DataHub are you trying to run?
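A minimal sketch of commands that collect most of these details, assuming the release is called `datahub` and lives in the `datahub` namespace (both are assumptions; substitute your own):
```
# Chart and app versions of the deployed release
helm list -n datahub

# Effective values for the release; redact passwords and tokens before sharing
helm get values datahub -n datahub > datahub-values-redacted.yaml

# Image tags actually running (GMS, frontend, actions, etc.)
kubectl get pods -n datahub -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'
```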
t
• How did you deploy in K8s? Helm charts? If yes, please add your helm values.yaml file here after masking all secrets. - We used a custom helm chart, attached here.
• Which cloud provider? - EL is deployed in Azure, and we deployed on our native platform.
• What version of the helm chart is being used? - 3
• What error is this causing for you in the UI? - @polite-application-51650, please confirm.
• What version of DataHub are you trying to run? - v0.8.44
s
Can you reproduce the problem with the latest release of DataHub OSS? https://github.com/acryldata/datahub/releases Our small team size does not allow us to backport fixes, so we recommend using the latest server release. There is no attachment here. Also, with custom code it is really hard for us to help out. Can you please let us know the diff (related to DataHub) between your charts and https://github.com/acryldata/datahub-helm?
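One way to produce such a diff is to render both charts with the same values and compare the manifests. A sketch, assuming the custom chart lives at `./my-datahub-chart` (hypothetical path) and that the upstream chart path is `charts/datahub` inside the repo (it may differ by version):
```
# Fetch the upstream chart and pull in its subchart dependencies
git clone https://github.com/acryldata/datahub-helm.git
helm dependency build ./datahub-helm/charts/datahub

# Render both charts with the same values file and diff the output
helm template datahub ./datahub-helm/charts/datahub -f values.yaml > upstream.yaml
helm template datahub ./my-datahub-chart -f values.yaml > custom.yaml
diff -u upstream.yaml custom.yaml
```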
p
Hi @square-activity-64562, in the UI there is no movement in the "time elapsed so far" field; all I can see is N/A for the actual time elapsed.
f
Same problem here. Any update?
s
Can you reproduce the problem with the latest release of DataHub OSS? Also, it is not clear what this sentence means:
```
in UI there is no movement in the time elapsed so far field, all I can see is N/A in case of the actual time elapsed.
```
What movement is expected? What page? Any screenshots? What are you trying to accomplish? Please understand that without steps to reproduce and these details, no one will be able to help.
f
@square-activity-64562 I've tried to deploy Elasticsearch with the Bitnami chart 17.9.29 (Elasticsearch 7.17.3). Then I triggered the restore-indices job and got the same GMS logs as above. The UI is empty; it looks like everything is gone. Do you have any suggestions? Thanks.
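For reference, a sketch of how the restore-indices job is typically run with the datahub-helm chart; the CronJob name depends on the release name, so the names below are assumptions and should be checked first:
```
# Find the restore-indices cronjob created by the chart
kubectl get cronjobs -n datahub

# Run it once as a regular job and follow its logs
kubectl create job --from=cronjob/datahub-datahub-restore-indices-job-template restore-indices-manual -n datahub
kubectl logs -f job/restore-indices-manual -n datahub
```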
s
This question remains unanswered, @famous-florist-7218:
```
Can you reproduce the problem with the latest release of DataHub OSS?
```
And I have no idea whether you are from the same team. Is your environment exactly the same? Did you have anything in the UI before running the restore-indices job? Was it ever running in this environment?
f
I can confirm that I've tested on the latest DataHub helm chart, 0.2.106. For the DataHub prerequisites, I'm deploying them separately with our own MSK for Kafka, MySQL RDS, and Elasticsearch. This happened when I upgraded the DataHub version and moved both MySQL and Elasticsearch to new clusters.
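After pointing GMS at a brand-new Elasticsearch cluster, the search indices start out empty until restore-indices has rebuilt them from the relational store, so one quick sanity check is whether the new cluster actually holds DataHub documents. A sketch, assuming Elasticsearch is port-forwarded to localhost:9200 and default index names (e.g. `datasetindex_v2`):
```
# Do the DataHub indices exist on the new cluster, and how many docs do they hold?
curl -s 'http://localhost:9200/_cat/indices/*index_v2?v&s=index'

# Spot-check one entity index, e.g. datasets
curl -s 'http://localhost:9200/datasetindex_v2/_count'
```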
h
Hello, I see the same issue with my setup, using external dependencies deployed with AWS managed services. When I ingest data, I see the following errors in the GMS logs, and I don't see all the ingested entities in the GraphQL search results. Any clue why this is happening?
```
05:27:08.823 [I/O dispatcher 1] ERROR c.l.m.s.e.update.BulkListener:25 - Failed to feed bulk request. Number of events: 1 Took time ms: -1 Message: failure in bulk execution:
[0]: index [index_v2], type [doc], id [urn--], message [[indexv2/c4qv6FOWRqinADr_ONot2A][[hereslometricsindex_v2][0]] ElasticsearchException[Elasticsearch exception [type=document_missing_exception, reason=[_doc][urn%3A--]: document missing]]]
```
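Since search results are served from Elasticsearch while ingestion writes the source of truth to the relational store, one way to narrow this down is to query GMS's GraphQL endpoint directly and compare it with what is in the index. A minimal sketch, assuming GMS is port-forwarded to localhost:8080 and metadata service authentication is disabled (otherwise add an `Authorization: Bearer <token>` header):
```
# Ask GMS what it can find via search (dataset entities as an example)
curl -s -X POST http://localhost:8080/api/graphql \
  -H 'Content-Type: application/json' \
  -d '{"query": "{ search(input: {type: DATASET, query: \"*\", start: 0, count: 10}) { total searchResults { entity { urn } } } }"}'
```
If the total here is much lower than what was ingested, the documents never made it into Elasticsearch and re-running restore-indices against the current cluster is usually the next step.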
b
@helpful-byte-81711 were you able to find a solution? I'm seeing the same issue.
r
Hi, good morning @famous-florist-7218 | @polite-application-51650 | @helpful-byte-81711 | @broad-article-1339. Were you able to solve this issue?