early-afternoon-71938
10/06/2022, 1:00 PM
Hi, I am facing an issue where pods are stuck in Pending instead of Running in my EKS cluster after following the Kubernetes deployment guide. Can you please help?
~# kubectl get pods
NAME READY STATUS RESTARTS AGE
elasticsearch-master-0 0/1 Pending 0 64m
elasticsearch-master-1 0/1 Pending 0 64m
elasticsearch-master-2 0/1 Pending 0 64m
prerequisites-cp-schema-registry-6f4b5b894f-8lzvj 1/2 CrashLoopBackOff 15 (38s ago) 64m
prerequisites-kafka-0 0/1 Pending 0 64m
prerequisites-mysql-0 0/1 Pending 0 64m
prerequisites-neo4j-community-0 0/1 Pending 0 64m
prerequisites-zookeeper-0 0/1 Pending 0 64m
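A Pending status means the scheduler cannot place the pod at all (on EKS this is commonly insufficient node CPU/memory or an unbound PersistentVolumeClaim), and `kubectl describe pod <name>` shows the reason under Events. As a quick triage sketch over output like the above (the sample is embedded here; in practice you would pipe live `kubectl get pods` output into the same awk filter):

```shell
# Triage sketch: list pods whose STATUS is not Running/Completed.
# The `kubectl get pods` output pasted above is embedded as sample input.
kubectl_output='NAME READY STATUS RESTARTS AGE
elasticsearch-master-0 0/1 Pending 0 64m
elasticsearch-master-1 0/1 Pending 0 64m
elasticsearch-master-2 0/1 Pending 0 64m
prerequisites-cp-schema-registry-6f4b5b894f-8lzvj 1/2 CrashLoopBackOff 15 (38s ago) 64m
prerequisites-kafka-0 0/1 Pending 0 64m
prerequisites-mysql-0 0/1 Pending 0 64m
prerequisites-neo4j-community-0 0/1 Pending 0 64m
prerequisites-zookeeper-0 0/1 Pending 0 64m'

# awk: skip the header row, print NAME and STATUS for unhealthy pods
echo "$kubectl_output" | awk 'NR > 1 && $3 != "Running" && $3 != "Completed" { print $1, $3 }'
```

Here every pod is unhealthy, so the filter prints all eight; the next step for any Pending pod is `kubectl describe pod <name>` to read the FailedScheduling message in its Events.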
:~# kubectl describe pods prerequisites-cp-schema-registry-6f4b5b894f-8lzvj
Name: prerequisites-cp-schema-registry-6f4b5b894f-8lzvj
Namespace: default
Priority: 0
Service Account: default
Node: ip-10-0-1-247.ec2.internal/10.0.1.247
Start Time: Thu, 06 Oct 2022 17:06:53 +0530
Labels: app=cp-schema-registry
pod-template-hash=6f4b5b894f
release=prerequisites
Annotations: kubernetes.io/psp: eks.privileged
prometheus.io/port: 5556
prometheus.io/scrape: true
Status: Running
IP: 10.0.1.33
IPs:
IP: 10.0.1.33
Controlled By: ReplicaSet/prerequisites-cp-schema-registry-6f4b5b894f
Containers:
prometheus-jmx-exporter:
Container ID: docker://d106dfe9388bd4e0009227c3d68bb83bc81bcdb530f0d2f3ad4a94dee19df751
Image: solsson/kafka-prometheus-jmx-exporter@sha256:6f82e2b0464f50da8104acd7363fb9b995001ddff77d248379f8788e78946143
Image ID: docker-pullable://solsson/kafka-prometheus-jmx-exporter@sha256:6f82e2b0464f50da8104acd7363fb9b995001ddff77d248379f8788e78946143
Port: 5556/TCP
Host Port: 0/TCP
Command:
java
-XX:+UnlockExperimentalVMOptions
-XX:+UseCGroupMemoryLimitForHeap
-XX:MaxRAMFraction=1
-XshowSettings:vm
-jar
jmx_prometheus_httpserver.jar
5556
/etc/jmx-schema-registry/jmx-schema-registry-prometheus.yml
State: Running
Started: Thu, 06 Oct 2022 17:06:54 +0530
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/etc/jmx-schema-registry from jmx-config (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xbgmf (ro)
cp-schema-registry-server:
Container ID: docker://9effa12c8c8cd8a6585b56155f72b0a1e51b79e3b3ce31473c5cc3dbf4863bb6
Image: confluentinc/cp-schema-registry:6.0.1
Image ID: docker-pullable://confluentinc/cp-schema-registry@sha256:b52e16cf232e3c9acd677ae8944de813e16fa541a367d9f805b300c5d2be1a1f
Ports: 8081/TCP, 5555/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Thu, 06 Oct 2022 18:27:55 +0530
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 06 Oct 2022 18:22:00 +0530
Finished: Thu, 06 Oct 2022 18:22:45 +0530
Ready: True
Restart Count: 18
Environment:
SCHEMA_REGISTRY_HOST_NAME: (v1:status.podIP)
SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081
SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: prerequisites-kafka:9092
SCHEMA_REGISTRY_KAFKASTORE_GROUP_ID: prerequisites
SCHEMA_REGISTRY_MASTER_ELIGIBILITY: true
SCHEMA_REGISTRY_HEAP_OPTS: -Xms512M -Xmx512M
JMX_PORT: 5555
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xbgmf (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
jmx-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: prerequisites-cp-schema-registry-jmx-configmap
Optional: false
kube-api-access-xbgmf:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulled 6m29s (x18 over 81m) kubelet Container image "confluentinc/cp-schema-registry:6.0.1" already present on machine
Warning BackOff 91s (x307 over 80m) kubelet Back-off restarting failed container
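The Exit Code 1, Restart Count 18, and the BackOff events point at the schema-registry container itself, so the next step is to read that container's logs. Because this pod runs two containers (prometheus-jmx-exporter and cp-schema-registry-server), `kubectl logs` needs `-c` to pick one. A sketch using the pod name from the describe output above (the commands are only printed here, since they need a live cluster to actually run):

```shell
# Pod and container names taken from the `kubectl describe` output above.
pod="prerequisites-cp-schema-registry-6f4b5b894f-8lzvj"
ctr="cp-schema-registry-server"

# Logs of the current attempt:
echo "kubectl logs $pod -c $ctr"
# Logs of the previous crashed attempt (usually where the real error is):
echo "kubectl logs $pod -c $ctr --previous"
```

With Kafka itself stuck in Pending, these logs would typically show the schema registry failing to reach prerequisites-kafka:9092, i.e. the CrashLoopBackOff is likely downstream of the Pending pods.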
bulky-electrician-72362
10/06/2022, 1:30 PM
early-afternoon-71938
10/06/2022, 2:08 PM
bulky-electrician-72362
10/06/2022, 2:12 PM
-c cp-schema-registry-server to the command ?
early-afternoon-71938
10/06/2022, 2:17 PM
bulky-electrician-72362
10/06/2022, 2:33 PM
early-afternoon-71938
10/06/2022, 2:36 PM
bulky-electrician-72362
10/06/2022, 2:55 PM
early-afternoon-71938
10/07/2022, 12:59 PM
bulky-electrician-72362
10/07/2022, 1:02 PM
early-afternoon-71938
10/07/2022, 1:03 PM
bulky-electrician-72362
10/10/2022, 12:02 PM
early-afternoon-71938
10/10/2022, 12:08 PM
bulky-electrician-72362
10/10/2022, 12:21 PM
0
?
early-afternoon-71938
10/10/2022, 12:23 PM
# Values to start up datahub after starting up the datahub-prerequisites chart with "prerequisites" release name
# Copy this chart and change configuration as needed.
datahub-gms:
  enabled: true
  image:
    repository: linkedin/datahub-gms
    tag: "v0.8.45"
datahub-frontend:
  enabled: true
  image:
    repository: linkedin/datahub-frontend-react
    tag: "v0.8.45"
  # Set up ingress to expose react front-end
  ingress:
    enabled: false
acryl-datahub-actions:
  enabled: true
  image:
    repository: acryldata/datahub-actions
    tag: "v0.0.7"
  resources:
    limits:
      memory: 512Mi
    requests:
      cpu: 300m
      memory: 256Mi
datahub-mae-consumer:
  image:
    repository: linkedin/datahub-mae-consumer
    tag: "v0.8.45"
datahub-mce-consumer:
  image:
    repository: linkedin/datahub-mce-consumer
    tag: "v0.8.45"
datahub-ingestion-cron:
  enabled: false
  image:
    repository: acryldata/datahub-ingestion
    tag: "v0.8.45"
elasticsearchSetupJob:
  enabled: true
  image:
    repository: linkedin/datahub-elasticsearch-setup
    tag: "v0.8.44"
  podSecurityContext:
    fsGroup: 1000
  securityContext:
    runAsUser: 1000
  podAnnotations: {}
kafkaSetupJob:
  enabled: true
  image:
    repository: linkedin/datahub-kafka-setup
    tag: "v0.8.45"
  podSecurityContext:
    fsGroup: 1000
  securityContext:
    runAsUser: 1000
  podAnnotations: {}
mysqlSetupJob:
  enabled: true
  image:
    repository: acryldata/datahub-mysql-setup
    tag: "v0.8.45"
  podSecurityContext:
    fsGroup: 1000
  securityContext:
    runAsUser: 1000
  podAnnotations: {}
postgresqlSetupJob:
  enabled: false
  image:
    repository: acryldata/datahub-postgres-setup
    tag: "v0.8.45"
  podSecurityContext:
    fsGroup: 1000
  securityContext:
    runAsUser: 1000
  podAnnotations: {}
datahubUpgrade:
  enabled: true
  image:
    repository: acryldata/datahub-upgrade
    tag: "v0.8.45"
  batchSize: 1000
  batchDelayMs: 100
  noCodeDataMigration:
    sqlDbType: "MYSQL"
    # sqlDbType: "POSTGRES"
  podSecurityContext: {}
  # fsGroup: 1000
  securityContext: {}
  # runAsUser: 1000
  podAnnotations: {}
  restoreIndices:
    resources:
      limits:
        cpu: 500m
        memory: 512Mi
      requests:
        cpu: 300m
        memory: 256Mi
prometheus-kafka-exporter:
  enabled: false
  kafkaServer:
    - prerequisites-kafka:9092  # <<release-name>>-kafka:9092
  # Sarama logging
  sarama:
    logEnabled: true
  prometheus:
    serviceMonitor:
      enabled: true
      namespace: monitoring
      interval: "30s"
      # If serviceMonitor is enabled and you want prometheus to automatically register
      # target using serviceMonitor, add additionalLabels with prometheus release name
      # e.g. If you have deployed kube-prometheus-stack with release name kube-prometheus
      # then additionalLabels will be
      # additionalLabels:
      #   release: kube-prometheus
      additionalLabels: {}
      targetLabels: []
global:
  graph_service_impl: neo4j
  datahub_analytics_enabled: true
  datahub_standalone_consumers_enabled: true
  elasticsearch:
    host: "elasticsearch-master"
    port: "9200"
    skipcheck: "false"
    insecure: "false"
  kafka:
    bootstrap:
      server: "prerequisites-kafka:9092"
    zookeeper:
      server: "prerequisites-zookeeper:2181"
    ## For AWS MSK set this to a number larger than 1
    # partitions: 3
    # replicationFactor: 3
    schemaregistry:
      url: "http://prerequisites-cp-schema-registry:8081"
      # type: AWS_GLUE
      # glue:
      #   region: us-east-1
      #   registry: datahub
  neo4j:
    host: "prerequisites-neo4j-community:7474"
    uri: "bolt://prerequisites-neo4j-community"
    username: "neo4j"
    password:
      secretRef: neo4j-secrets
      secretKey: neo4j-password
  sql:
    datasource:
      host: "prerequisites-mysql:3306"
      hostForMysqlClient: "prerequisites-mysql"
      port: "3306"
      url: "jdbc:mysql://prerequisites-mysql:3306/datahub?verifyServerCertificate=false&useSSL=true&useUnicode=yes&characterEncoding=UTF-8&enabledTLSProtocols=TLSv1.2"
      driver: "com.mysql.cj.jdbc.Driver"
      username: "root"
      password:
        secretRef: mysql-secrets
        secretKey: mysql-root-password
      ## Use below for usage of PostgreSQL instead of MySQL
      # host: "prerequisites-postgresql:5432"
      # hostForpostgresqlClient: "prerequisites-postgresql"
      # port: "5432"
      # url: "jdbc:postgresql://prerequisites-postgresql:5432/datahub"
      # driver: "org.postgresql.Driver"
      # username: "postgres"
      # password:
      #   secretRef: postgresql-secrets
      #   secretKey: postgres-password
  datahub:
    gms:
      port: "8080"
      nodePort: "30001"
    monitoring:
      enablePrometheus: true
    mae_consumer:
      port: "9091"
      nodePort: "30002"
    appVersion: "1.0"
    encryptionKey:
      secretRef: "datahub-encryption-secrets"
      secretKey: "encryption_key_secret"
      # Set to false if you'd like to provide your own secret.
      provisionSecret: true
    managed_ingestion:
      enabled: true
      defaultCliVersion: "0.8.45"
    metadata_service_authentication:
      enabled: false
      systemClientId: "__datahub_system"
      systemClientSecret:
        secretRef: "datahub-auth-secrets"
        secretKey: "token_service_signing_key"
      tokenService:
        signingKey:
          secretRef: "datahub-auth-secrets"
          secretKey: "token_service_signing_key"
        salt:
          secretRef: "datahub-auth-secrets"
          secretKey: "token_service_salt"
      # Set to false if you'd like to provide your own auth secrets
      provisionSecrets: true
  # hostAliases:
  #   - ip: "192.168.0.104"
  #     hostnames:
  #       - "broker"
  #       - "mysql"
  #       - "postgresql"
  #       - "elasticsearch"
  #       - "neo4j"
  ## Add below to enable SSL for kafka
  # credentialsAndCertsSecrets:
  #   name: datahub-certs
  #   path: /mnt/datahub/certs
  #   secureEnv:
  #     ssl.key.password: datahub.linkedin.com.KeyPass
  #     ssl.keystore.password: datahub.linkedin.com.KeyStorePass
  #     ssl.truststore.password: datahub.linkedin.com.TrustStorePass
  #     kafkastore.ssl.truststore.password: datahub.linkedin.com.TrustStorePass
  #
  # springKafkaConfigurationOverrides:
  #   ssl.keystore.location: /mnt/datahub/certs/datahub.linkedin.com.keystore.jks
  #   ssl.truststore.location: /mnt/datahub/certs/datahub.linkedin.com.truststore.jks
  #   kafkastore.ssl.truststore.location: /mnt/datahub/certs/datahub.linkedin.com.truststore.jks
  #   security.protocol: SSL
  #   kafkastore.security.protocol: SSL
  #   ssl.keystore.type: JKS
  #   ssl.truststore.type: JKS
  #   ssl.protocol: TLS
  #   ssl.endpoint.identification.algorithm:
bulky-electrician-72362
10/10/2022, 12:36 PM
datahub-mce-consumer.enabled: true
datahub-mae-consumer.enabled: true
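Since the values above set global.datahub_standalone_consumers_enabled: true, the two consumer sub-charts also need their own enabled flags. A sketch of how those two sections of the values file would look with the flags added (repositories and tags copied from the values above):

```yaml
# Sketch: enable the standalone consumers to match
# global.datahub_standalone_consumers_enabled: true
datahub-mae-consumer:
  enabled: true
  image:
    repository: linkedin/datahub-mae-consumer
    tag: "v0.8.45"
datahub-mce-consumer:
  enabled: true
  image:
    repository: linkedin/datahub-mce-consumer
    tag: "v0.8.45"
```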
early-afternoon-71938
10/10/2022, 12:38 PM
bulky-electrician-72362
10/10/2022, 2:53 PM
early-afternoon-71938
10/10/2022, 2:53 PM
bulky-electrician-72362
10/10/2022, 4:30 PM
early-afternoon-71938
10/14/2022, 6:00 AM
bulky-electrician-72362
10/14/2022, 6:34 AM
early-afternoon-71938
10/14/2022, 6:58 AM
bulky-electrician-72362
10/14/2022, 7:00 AM
AWS ACM public certificate ?
early-afternoon-71938
10/14/2022, 8:04 AM
bulky-electrician-72362
12/12/2022, 10:14 AM
You wrote "have to replace the <<certificate-arn>> with AWS ACM public certificate and redeploy datahub", so here you said you have the certificate. Do you have it or not? Can you access datahub, and only the http`s` is the problem? See more here: https://aws.amazon.com/premiumsupport/knowledge-center/terminate-https-traffic-eks-acm/