billions-family-12217
02/02/2023, 4:23 AMgray-window-62144
02/02/2023, 5:08 AM
server {
    listen 443 ssl;
    server_name datahub.my.com;
    ssl_certificate /nginx_ssl_my_com.crt;
    ssl_certificate_key /_wildcard_my_com_SHA256WITHRSA.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers '';
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_headers_hash_max_size 1024;
        proxy_headers_hash_bucket_size 128;
        proxy_pass http://backend_datahub;
        proxy_ssl_session_reuse on;
        proxy_set_header Host datahub.my.com;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        client_max_body_size 10M;
    }

    error_page 404 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}

upstream backend_datahub {
    server localhost:9002;
    keepalive 5;
}
rough-car-65301
02/02/2023, 2:31 PM
Unable to run quickstart - the following issues were detected:
- datahub-gms is still starting
- elasticsearch-setup is still running
- elasticsearch is running but not healthy
rough-car-65301
02/02/2023, 2:32 PMhelpful-judge-79691
02/03/2023, 1:13 PMblue-house-86514
02/03/2023, 1:41 PM
Hello everyone,
I was able to successfully integrate an SAP HANA database as a data source. However, the documentation does not state which privileges the database user needs.
Can anyone tell me the required roles/permissions, or point me to where this is covered in the documentation?
Thanks to the group!
Rainer
white-controller-18446
02/04/2023, 2:44 PMdamp-lion-51223
02/06/2023, 1:24 AMdamp-lion-51223
02/06/2023, 1:24 AMworried-jordan-515
02/06/2023, 6:24 AMfreezing-account-90733
02/06/2023, 5:08 PMsalmon-spring-51500
02/06/2023, 6:57 PMrapid-crowd-46218
02/07/2023, 9:05 AM
[minikube@localhost datahub]$ kubectl describe pods datahub-datahub-gms-6667df7bdc-564g8
Name: datahub-datahub-gms-6667df7bdc-564g8
Namespace: default
Priority: 0
Service Account: datahub-datahub-gms
Node: minikube/192.168.49.2
Start Time: Tue, 07 Feb 2023 17:10:35 +0900
Labels: app.kubernetes.io/instance=datahub
app.kubernetes.io/name=datahub-gms
pod-template-hash=6667df7bdc
Annotations: <none>
Status: Running
IP: 172.17.0.11
IPs:
IP: 172.17.0.11
Controlled By: ReplicaSet/datahub-datahub-gms-6667df7bdc
Containers:
datahub-gms:
Container ID: docker://5f9d02decd7dc211faa911d56017de8f47ef3eb0da9713bfcb99989e998c90ac
Image: linkedin/datahub-gms:head
Image ID: docker-pullable://linkedin/datahub-gms@sha256:85cf456fe4756fddcb5fc03f45b083002e293cd2a38ce7feba7307ee5db3f365
Ports: 8080/TCP, 4318/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Tue, 07 Feb 2023 17:38:36 +0900
Last State: Terminated
Reason: Error
Exit Code: 143
Started: Tue, 07 Feb 2023 17:33:36 +0900
Finished: Tue, 07 Feb 2023 17:38:36 +0900
Ready: False
Restart Count: 5
Liveness: http-get http://:http/health delay=60s timeout=1s period=30s #success=1 #failure=8
Readiness: http-get http://:http/health delay=60s timeout=1s period=30s #success=1 #failure=8
Environment:
ENABLE_PROMETHEUS: true
MCE_CONSUMER_ENABLED: true
MAE_CONSUMER_ENABLED: true
PE_CONSUMER_ENABLED: true
ENTITY_REGISTRY_CONFIG_PATH: /datahub/datahub-gms/resources/entity-registry.yml
DATAHUB_ANALYTICS_ENABLED: true
EBEAN_DATASOURCE_USERNAME: root
EBEAN_DATASOURCE_PASSWORD: <set to the key 'mysql-root-password' in secret 'mysql-secrets'> Optional: false
EBEAN_DATASOURCE_HOST: prerequisites-mysql:3306
EBEAN_DATASOURCE_URL: jdbc:mysql://prerequisites-mysql:3306/datahub?verifyServerCertificate=false&useSSL=true&useUnicode=yes&characterEncoding=UTF-8&enabledTLSProtocols=TLSv1.2
EBEAN_DATASOURCE_DRIVER: com.mysql.cj.jdbc.Driver
KAFKA_BOOTSTRAP_SERVER: prerequisites-kafka:9092
KAFKA_SCHEMAREGISTRY_URL: http://prerequisites-cp-schema-registry:8081
ELASTICSEARCH_HOST: elasticsearch-master
ELASTICSEARCH_PORT: 9200
SKIP_ELASTICSEARCH_CHECK: false
ELASTICSEARCH_USE_SSL: false
GRAPH_SERVICE_IMPL: neo4j
NEO4J_HOST: prerequisites-neo4j-community:7474
NEO4J_URI: bolt://prerequisites-neo4j-community
NEO4J_USERNAME: neo4j
NEO4J_PASSWORD: <set to the key 'neo4j-password' in secret 'neo4j-secrets'> Optional: false
UI_INGESTION_ENABLED: true
SECRET_SERVICE_ENCRYPTION_KEY: <set to the key 'encryption_key_secret' in secret 'datahub-encryption-secrets'> Optional: false
UI_INGESTION_DEFAULT_CLI_VERSION: 0.9.1
SEARCH_SERVICE_ENABLE_CACHE: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m6jc4 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-m6jc4:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 29m default-scheduler Successfully assigned default/datahub-datahub-gms-6667df7bdc-564g8 to minikube
Normal Pulling 29m kubelet Pulling image "linkedin/datahub-gms:head"
Normal Pulled 26m kubelet Successfully pulled image "linkedin/datahub-gms:head" in 2m59.664678032s
Normal Created 26m kubelet Created container datahub-gms
Normal Started 26m kubelet Started container datahub-gms
Warning Unhealthy 21m (x8 over 25m) kubelet Liveness probe failed: Get "http://172.17.0.11:8080/health": dial tcp 172.17.0.11:8080: connect: connection refused
Normal Killing 21m kubelet Container datahub-gms failed liveness probe, will be restarted
Warning Unhealthy 4m17s (x53 over 25m) kubelet Readiness probe failed: Get "http://172.17.0.11:8080/health": dial tcp 172.17.0.11:8080: connect: connection refused
I think the k8s initialDelaySeconds setting is wrong. How do I fix this issue? Can it be solved by editing the values.yaml file? The strange thing is that I successfully deployed DataHub last week using the same settings. 😭
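If editing values.yaml is the route you try, a minimal sketch of a probe override could look like the following. The key names follow the datahub-gms subchart of the acryldata/datahub-helm chart (verify them against your chart version), and the numbers are illustrative rather than recommendations.
```yaml
# Hypothetical values.yaml override: give GMS more time before the kubelet starts
# probing, and tolerate more failed probes before the container is restarted.
datahub-gms:
  livenessProbe:
    initialDelaySeconds: 180
    periodSeconds: 30
    failureThreshold: 8
  readinessProbe:
    initialDelaySeconds: 180
    periodSeconds: 30
    failureThreshold: 8
```
Such a file would then be applied with something like helm upgrade --install datahub datahub/datahub --values ./datahub-values.yaml.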
purple-terabyte-64712
02/07/2023, 3:15 PM
mammoth-memory-21997
02/07/2023, 8:23 PMsalmon-spring-51500
02/07/2023, 9:58 PMfamous-quill-82626
02/08/2023, 3:55 AMplain-nest-12882
02/08/2023, 5:27 AMrapid-crowd-46218
02/08/2023, 12:45 PM
NAME READY STATUS RESTARTS AGE
datahub-acryl-datahub-actions-76f5459c6c-nkc8x 1/1 Running 1 (2m9s ago) 7m59s
datahub-datahub-frontend-d89c96686-chxcn 1/1 Running 0 7m59s
datahub-datahub-gms-77cb7f874d-fvshc 0/1 Running 1 (2m28s ago) 7m59s
datahub-datahub-upgrade-job-bmrr5 0/1 Error 0 7m59s
datahub-datahub-upgrade-job-nmj54 0/1 Error 0 2m2s
datahub-datahub-upgrade-job-rb6ll 0/1 Error 0 4m31s
datahub-datahub-upgrade-job-xhv2n 1/1 Running 0 50s
datahub-datahub-upgrade-job-xkvm2 0/1 Error 0 3m19s
datahub-elasticsearch-setup-job-fqjnz 0/1 Completed 0 9m59s
datahub-kafka-setup-job-mxbg6 0/1 Completed 0 9m47s
datahub-mysql-setup-job-wthlk 0/1 Completed 0 8m10s
elasticsearch-master-0 1/1 Running 0 12m
prerequisites-cp-schema-registry-5f89dd4974-cvxfn 2/2 Running 0 12m
prerequisites-kafka-0 1/1 Running 0 12m
prerequisites-mysql-0 1/1 Running 0 12m
prerequisites-neo4j-community-0 1/1 Running 0 12m
prerequisites-zookeeper-0 1/1 Running 0 12m
[minikube@localhost datahub]$ kubectl describe pods datahub-datahub-gms-77cb7f874d-fvshc
Name: datahub-datahub-gms-77cb7f874d-fvshc
Namespace: default
Priority: 0
Service Account: datahub-datahub-gms
Node: minikube/192.168.49.2
Start Time: Wed, 08 Feb 2023 21:22:54 +0900
Labels: app.kubernetes.io/instance=datahub
app.kubernetes.io/name=datahub-gms
pod-template-hash=77cb7f874d
Annotations: <none>
Status: Running
IP: 172.17.0.9
IPs:
IP: 172.17.0.9
Controlled By: ReplicaSet/datahub-datahub-gms-77cb7f874d
Containers:
datahub-gms:
Container ID: docker://99645772cacad19af5d3c102a221e7cbb1748baa9769706165cd5296dc44a01a
Image: linkedin/datahub-gms:head
Image ID: docker-pullable://linkedin/datahub-gms@sha256:d2c8a7fd6075f9efa53cbd7a3bd9a58a6de1f242101db84ca7aa7d71b3f8d17e
Ports: 8080/TCP, 4318/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Wed, 08 Feb 2023 21:33:55 +0900
Last State: Terminated
Reason: Error
Exit Code: 143
Started: Wed, 08 Feb 2023 21:28:25 +0900
Finished: Wed, 08 Feb 2023 21:33:55 +0900
Ready: False
Restart Count: 2
Limits:
memory: 2Gi
Requests:
cpu: 100m
memory: 1Gi
Liveness: http-get http://:http/health delay=180s timeout=1s period=30s #success=1 #failure=5
Readiness: http-get http://:http/health delay=180s timeout=1s period=30s #success=1 #failure=5
Environment:
ENABLE_PROMETHEUS: true
MCE_CONSUMER_ENABLED: true
MAE_CONSUMER_ENABLED: true
PE_CONSUMER_ENABLED: true
ENTITY_REGISTRY_CONFIG_PATH: /datahub/datahub-gms/resources/entity-registry.yml
DATAHUB_ANALYTICS_ENABLED: true
EBEAN_DATASOURCE_USERNAME: root
EBEAN_DATASOURCE_PASSWORD: <set to the key 'mysql-root-password' in secret 'mysql-secrets'> Optional: false
EBEAN_DATASOURCE_HOST: prerequisites-mysql:3306
EBEAN_DATASOURCE_URL: jdbc:mysql://prerequisites-mysql:3306/datahub?verifyServerCertificate=false&useSSL=true&useUnicode=yes&characterEncoding=UTF-8&enabledTLSProtocols=TLSv1.2
EBEAN_DATASOURCE_DRIVER: com.mysql.cj.jdbc.Driver
KAFKA_BOOTSTRAP_SERVER: prerequisites-kafka:9092
KAFKA_SCHEMAREGISTRY_URL: http://prerequisites-cp-schema-registry:8081
SCHEMA_REGISTRY_TYPE: KAFKA
ELASTICSEARCH_HOST: elasticsearch-master
ELASTICSEARCH_PORT: 9200
SKIP_ELASTICSEARCH_CHECK: false
ELASTICSEARCH_USE_SSL: false
GRAPH_SERVICE_IMPL: elasticsearch
UI_INGESTION_ENABLED: true
SECRET_SERVICE_ENCRYPTION_KEY: <set to the key 'encryption_key_secret' in secret 'datahub-encryption-secrets'> Optional: false
UI_INGESTION_DEFAULT_CLI_VERSION: 0.9.6
SEARCH_SERVICE_ENABLE_CACHE: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dddgd (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-dddgd:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 11m default-scheduler Successfully assigned default/datahub-datahub-gms-77cb7f874d-fvshc to minikube
Normal Pulling 11m kubelet Pulling image "linkedin/datahub-gms:head"
Normal Pulled 11m kubelet Successfully pulled image "linkedin/datahub-gms:head" in 13.012530772s
Normal Killing 6m17s kubelet Container datahub-gms failed liveness probe, will be restarted
Normal Created 6m16s (x2 over 11m) kubelet Created container datahub-gms
Normal Started 6m16s (x2 over 11m) kubelet Started container datahub-gms
Normal Pulled 6m16s kubelet Container image "linkedin/datahub-gms:head" already present on machine
Warning Unhealthy 2m17s (x7 over 8m17s) kubelet Liveness probe failed: Get "http://172.17.0.9:8080/health": dial tcp 172.17.0.9:8080: connect: connection refused
Warning Unhealthy 77s (x13 over 8m17s) kubelet Readiness probe failed: Get "http://172.17.0.9:8080/health": dial tcp 172.17.0.9:8080: connect: connection refused
At first, I thought this was a problem with initialDelaySeconds, so with the help of other users I created a values file that sets initialDelaySeconds to 180s and installed with it:
helm install datahub datahub/datahub --values ./datahub/datahub-values.yaml
However, the same error keeps occurring, even if I change my network environment. What I don't understand is that last week I succeeded in installing and ingesting with exactly the same settings as now. Can anyone suggest a solution? For your information, I am running CentOS 7 in a VM. I have attached the error log and the file I used. Thank you in advance.
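Since the probe window here is already 180s and GMS still never binds port 8080, one other thing worth checking on a single-node minikube VM is whether the GMS container is simply resource-starved. A minimal sketch of a resource override, assuming the datahub-gms subchart exposes a standard resources block (the figures are guesses, not tested values):
```yaml
# Hypothetical values.yaml addition: give datahub-gms more headroom than the
# defaults visible in the pod description (requests: 100m CPU / 1Gi, limit: 2Gi).
datahub-gms:
  resources:
    requests:
      cpu: "1"
      memory: 2Gi
    limits:
      memory: 3Gi
```
Tailing the GMS container logs around the restart would also show whether the JVM is simply slow to start or a dependency check is blocking.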
witty-butcher-82399
02/08/2023, 4:36 PM
astonishing-answer-96712
02/08/2023, 8:00 PMrapid-spoon-75609
02/08/2023, 9:15 PMbetter-orange-49102
02/09/2023, 1:57 AMsilly-dog-87292
02/09/2023, 4:19 PMcolossal-ambulance-28715
02/10/2023, 2:00 AMwide-airline-88304
02/10/2023, 4:53 AMwhite-library-14765
02/10/2023, 1:53 PMcreamy-machine-95935
02/10/2023, 2:09 PMclean-tomato-22549
02/13/2023, 6:21 AM
In the Helm chart values, only google/okta/azure are supported:
# OIDC auth based on https://datahubproject.io/docs/authentication/guides/sso/configure-oidc-react
oidcAuthentication:
  enabled: false
  # provider: google/okta/azure <- choose only one
https://github.com/acryldata/datahub-helm/blob/master/charts/datahub/subcharts/datahub-frontend/values.yaml
The official docs have no PingFederate provider-specific guide either:
https://datahubproject.io/docs/authentication/guides/sso/configure-oidc-react
Question 2: Can I remove the SSO login part of the UI if my service has no SSO configured?
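One possible angle for the PingFederate case, sketched below: the OIDC guide linked above drives the frontend through AUTH_OIDC_* environment variables, which are provider-agnostic as long as the IdP exposes a standard OIDC discovery document, so they can be set through the frontend's extraEnvs even though the chart's oidcAuthentication block only lists google/okta/azure. The hostnames, secret name, and values below are placeholders, not a verified configuration.
```yaml
# Hypothetical values.yaml: generic OIDC for datahub-frontend via environment variables.
datahub-frontend:
  extraEnvs:
    - name: AUTH_OIDC_ENABLED
      value: "true"
    - name: AUTH_OIDC_CLIENT_ID
      value: "datahub-client"          # placeholder client ID
    - name: AUTH_OIDC_CLIENT_SECRET
      valueFrom:
        secretKeyRef:
          name: oidc-secrets           # placeholder Kubernetes Secret
          key: client_secret
    - name: AUTH_OIDC_DISCOVERY_URI
      value: "https://pingfederate.example.com/.well-known/openid-configuration"
    - name: AUTH_OIDC_BASE_URL
      value: "https://datahub.example.com"
```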
white-horse-97256
02/13/2023, 9:44 PM