Hi, I am facing issues of pods being in pending...
# all-things-deployment
e
Hi, I am facing an issue where pods are stuck in Pending and not running in my EKS cluster after following the K8s deployment guide. Can you please help?
~# kubectl get pods
NAME                                                READY   STATUS             RESTARTS       AGE
elasticsearch-master-0                              0/1     Pending            0              64m
elasticsearch-master-1                              0/1     Pending            0              64m
elasticsearch-master-2                              0/1     Pending            0              64m
prerequisites-cp-schema-registry-6f4b5b894f-8lzvj   1/2     CrashLoopBackOff   15 (38s ago)   64m
prerequisites-kafka-0                               0/1     Pending            0              64m
prerequisites-mysql-0                               0/1     Pending            0              64m
prerequisites-neo4j-community-0                     0/1     Pending            0              64m
prerequisites-zookeeper-0                           0/1     Pending            0              64m
~# kubectl describe pods prerequisites-cp-schema-registry-6f4b5b894f-8lzvj
Name:             prerequisites-cp-schema-registry-6f4b5b894f-8lzvj
Namespace:        default
Priority:         0
Service Account:  default
Node:             ip-10-0-1-247.ec2.internal/10.0.1.247
Start Time:       Thu, 06 Oct 2022 17:06:53 +0530
Labels:           app=cp-schema-registry
                  pod-template-hash=6f4b5b894f
                  release=prerequisites
Annotations:      <http://kubernetes.io/psp|kubernetes.io/psp>: eks.privileged
                  <http://prometheus.io/port|prometheus.io/port>: 5556
                  <http://prometheus.io/scrape|prometheus.io/scrape>: true
Status:           Running
IP:               10.0.1.33
IPs:
  IP:           10.0.1.33
Controlled By:  ReplicaSet/prerequisites-cp-schema-registry-6f4b5b894f
Containers:
  prometheus-jmx-exporter:
    Container ID:  docker://d106dfe9388bd4e0009227c3d68bb83bc81bcdb530f0d2f3ad4a94dee19df751
    Image:         solsson/kafka-prometheus-jmx-exporter@sha256:6f82e2b0464f50da8104acd7363fb9b995001ddff77d248379f8788e78946143
    Image ID:      docker-pullable://solsson/kafka-prometheus-jmx-exporter@sha256:6f82e2b0464f50da8104acd7363fb9b995001ddff77d248379f8788e78946143
    Port:          5556/TCP
    Host Port:     0/TCP
    Command:
      java
      -XX:+UnlockExperimentalVMOptions
      -XX:+UseCGroupMemoryLimitForHeap
      -XX:MaxRAMFraction=1
      -XshowSettings:vm
      -jar
      jmx_prometheus_httpserver.jar
      5556
      /etc/jmx-schema-registry/jmx-schema-registry-prometheus.yml
    State:          Running
      Started:      Thu, 06 Oct 2022 17:06:54 +0530
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /etc/jmx-schema-registry from jmx-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xbgmf (ro)
  cp-schema-registry-server:
    Container ID:   docker://9effa12c8c8cd8a6585b56155f72b0a1e51b79e3b3ce31473c5cc3dbf4863bb6
    Image:          confluentinc/cp-schema-registry:6.0.1
    Image ID:       docker-pullable://confluentinc/cp-schema-registry@sha256:b52e16cf232e3c9acd677ae8944de813e16fa541a367d9f805b300c5d2be1a1f
    Ports:          8081/TCP, 5555/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Running
      Started:      Thu, 06 Oct 2022 18:27:55 +0530
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 06 Oct 2022 18:22:00 +0530
      Finished:     Thu, 06 Oct 2022 18:22:45 +0530
    Ready:          True
    Restart Count:  18
    Environment:
      SCHEMA_REGISTRY_HOST_NAME:                      (v1:status.podIP)
      SCHEMA_REGISTRY_LISTENERS:                     http://0.0.0.0:8081
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS:  prerequisites-kafka:9092
      SCHEMA_REGISTRY_KAFKASTORE_GROUP_ID:           prerequisites
      SCHEMA_REGISTRY_MASTER_ELIGIBILITY:            true
      SCHEMA_REGISTRY_HEAP_OPTS:                     -Xms512M -Xmx512M
      JMX_PORT:                                      5555
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xbgmf (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  jmx-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      prerequisites-cp-schema-registry-jmx-configmap
    Optional:  false
  kube-api-access-xbgmf:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason   Age                   From     Message
  ----     ------   ----                  ----     -------
  Normal   Pulled   6m29s (x18 over 81m)  kubelet  Container image "confluentinc/cp-schema-registry:6.0.1" already present on machine
  Warning  BackOff  91s (x307 over 80m)   kubelet  Back-off restarting failed container
can someone help me with this?
@bulky-electrician-72362 can you please look at the above error
b
could you check the logs on it?
e
kubectl logs prerequisites-cp-schema-registry-6f4b5b894f-l8xrv
Defaulted container "prometheus-jmx-exporter" out of: prometheus-jmx-exporter, cp-schema-registry-server
VM settings:
    Max. Heap Size (Estimated): 6.91G
    Ergonomics Machine Class: server
    Using VM: OpenJDK 64-Bit Server VM
@bulky-electrician-72362
b
could you add
-c cp-schema-registry-server
to the command ?
e
org.apache.kafka.common.errors.TimeoutException: Call(callName=fetchMetadata, deadlineMs=1665065604536, tries=1, nextAllowedTryMs=1665065604637) timed out at 1665065604537 after 1 attempt(s)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting to send the call.
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (prerequisites-kafka/172.20.201.84:9092) could not be established. Broker may not be available.
    (previous WARN line repeated several times)
[main] ERROR io.confluent.admin.utils.ClusterStatus - Error while getting broker list.
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Call(callName=listNodes, deadlineMs=1665065614534, tries=1, nextAllowedTryMs=1665065614635) timed out at 1665065614535 after 1 attempt(s)
    at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
    at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
    at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
    at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
    at io.confluent.admin.utils.ClusterStatus.isKafkaReady(ClusterStatus.java:149)
    at io.confluent.admin.utils.cli.KafkaReadyCommand.main(KafkaReadyCommand.java:150)
Caused by: org.apache.kafka.common.errors.TimeoutException: Call(callName=listNodes, deadlineMs=1665065614534, tries=1, nextAllowedTryMs=1665065614635) timed out at 1665065614535 after 1 attempt(s)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (prerequisites-kafka/172.20.201.84:9092) could not be established. Broker may not be available.
[main] INFO io.confluent.admin.utils.ClusterStatus - Expected 1 brokers but found only 0. Trying to query Kafka for metadata again ...
[main] ERROR io.confluent.admin.utils.ClusterStatus - Expected 1 brokers but found only 0. Brokers found [].
kubectl logs prerequisites-cp-schema-registry-6f4b5b894f-l8xrv -c cp-schema-registry-server
===> User uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
===> Configuring ...
===> Running preflight checks ...
===> Check if Kafka is healthy ...
[main] INFO org.apache.kafka.clients.admin.AdminClientConfig - AdminClientConfig values:
    bootstrap.servers = [prerequisites-kafka:9092]
    client.dns.lookup = use_all_dns_ips
    client.id =
    connections.max.idle.ms = 300000
    default.api.timeout.ms = 60000
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retries = 2147483647
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
    ssl.endpoint.identification.algorithm = https
    ssl.engine.factory.class = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLSv1.3
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 6.0.1-ccs
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 9c1fbb3db1e0d69d
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1665065574523
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (prerequisites-kafka/172.20.201.84:9092) could not be established. Broker may not be available.
    (previous WARN line repeated many times)
[kafka-admin-client-thread | adminclient-1] INFO org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=adminclient-1] Metadata update failed
org.apache.kafka.common.errors.TimeoutException: Call(callName=fetchMetadata, deadlineMs=1665065604536, tries=1, nextAllowedTryMs=1665065604637) timed out at 1665065604537 after 1 attempt(s)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting to send the call.
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (prerequisites-kafka/172.20.201.84:9092) could not be established. Broker may not be available.
    (previous WARN line repeated several times)
[main] ERROR io.confluent.admin.utils.ClusterStatus - Error while getting broker list.
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Call(callName=listNodes, deadlineMs=1665065614534, tries=1, nextAllowedTryMs=1665065614635) timed out at 1665065614535 after 1 attempt(s)
    at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
    at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
    at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
    at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
    at io.confluent.admin.utils.ClusterStatus.isKafkaReady(ClusterStatus.java:149)
    at io.confluent.admin.utils.cli.KafkaReadyCommand.main(KafkaReadyCommand.java:150)
Caused by: org.apache.kafka.common.errors.TimeoutException: Call(callName=listNodes, deadlineMs=1665065614534, tries=1, nextAllowedTryMs=1665065614635) timed out at 1665065614535 after 1 attempt(s)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (prerequisites-kafka/172.20.201.84:9092) could not be established. Broker may not be available.
[main] INFO io.confluent.admin.utils.ClusterStatus - Expected 1 brokers but found only 0. Trying to query Kafka for metadata again ...
[main] ERROR io.confluent.admin.utils.ClusterStatus - Expected 1 brokers but found only 0. Brokers found [].
@bulky-electrician-72362
b
check why kafka is pending
describe the pod
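For reference, a minimal sketch of that triage, assuming the default namespace and the pod names from the listing above:

```sh
# Show scheduling details for the pending Kafka pod; the Events section
# at the bottom usually names the reason (e.g. unbound PVCs, no nodes fit).
kubectl describe pod prerequisites-kafka-0

# Cluster-wide events, newest last, are another way to spot FailedScheduling.
kubectl get events --sort-by=.metadata.creationTimestamp
```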
e
@bulky-electrician-72362 error: didn't find available persistent volumes to bind.
b
check your PVC
maybe your k8s cluster couldn't provision a disk for you
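A quick way to confirm that, as a minimal sketch (the PVC name below is illustrative; copy the real one from `kubectl get pvc`):

```sh
# List claims; a Pending claim means no PersistentVolume could be bound or provisioned.
kubectl get pvc

# The Events section explains why provisioning failed (missing storage class,
# missing CSI driver, or IAM errors on EKS). PVC name here is hypothetical.
kubectl describe pvc data-prerequisites-kafka-0

# Check which storage classes exist and which one is the default.
kubectl get storageclass
```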
e
@bulky-electrician-72362 yes, that's correct, but I have followed the AWS guide to install the EBS CSI driver and it still fails to create a PV with the error: Not Authorized to perform sts:AssumeRoleWithWebIdentity
b
are you using EKS?
check that your nodes have the right role to access AWS resources
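The sts:AssumeRoleWithWebIdentity failure usually means the EBS CSI driver's service account is not correctly tied to an IAM role via IRSA. A hedged sketch of the usual checks, assuming eksctl and a placeholder <cluster-name>:

```sh
# IRSA requires the cluster to have an IAM OIDC provider.
eksctl utils associate-iam-oidc-provider --cluster <cluster-name> --approve

# Create (or fix) the service account / role pair the EBS CSI driver uses.
eksctl create iamserviceaccount \
  --name ebs-csi-controller-sa \
  --namespace kube-system \
  --cluster <cluster-name> \
  --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
  --approve

# Verify the eks.amazonaws.com/role-arn annotation linking the SA to the role.
kubectl describe sa ebs-csi-controller-sa -n kube-system
```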
e
I created a new role with all the recommended policies attached, still no use; it fails at auth
@bulky-electrician-72362 After deployment I could get these pods running, but mae-consumer and mce-consumer are missing:
NAME                                               READY   STATUS    RESTARTS   AGE
datahub-acryl-datahub-actions-6755f74bf4-2mslj     1/1     Running   0          26m
datahub-datahub-frontend-9f55dbb4-qhb8z            1/1     Running   0          26m
datahub-datahub-gms-88794b-knlbq                   1/1     Running   0          26m
elasticsearch-master-0                             1/1     Running   0          3h32m
elasticsearch-master-1                             0/1     Pending   0          19m
elasticsearch-master-2                             1/1     Running   0          19m
prerequisites-cp-schema-registry-cf79bfccf-c9jwp   2/2     Running   0          26m
prerequisites-kafka-0                              1/1     Running   0          26m
prerequisites-mysql-0                              1/1     Running   0          26m
prerequisites-neo4j-community-0                    1/1     Running   0          26m
prerequisites-zookeeper-0                          1/1     Running   0          26m
b
you need to set standalone consumers to true in the values.yaml
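In the DataHub chart's values.yaml that flag lives under `global` (as the full file further below also shows); a minimal sketch:

```yaml
global:
  # Run the mae/mce consumers as standalone pods instead of inside GMS.
  datahub_standalone_consumers_enabled: true
```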
e
datahub_standalone_consumers_enabled: true @bulky-electrician-72362
b
are the replicas set to non-0?
e
@bulky-electrician-72362 for which service?
# Values to start up datahub after starting up the datahub-prerequisites chart with "prerequisites" release name
# Copy this chart and change configuration as needed.
datahub-gms:
  enabled: true
  image:
    repository: linkedin/datahub-gms
    tag: "v0.8.45"

datahub-frontend:
  enabled: true
  image:
    repository: linkedin/datahub-frontend-react
    tag: "v0.8.45"
  # Set up ingress to expose react front-end
  ingress:
    enabled: false

acryl-datahub-actions:
  enabled: true
  image:
    repository: acryldata/datahub-actions
    tag: "v0.0.7"
  resources:
    limits:
      memory: 512Mi
    requests:
      cpu: 300m
      memory: 256Mi

datahub-mae-consumer:
  image:
    repository: linkedin/datahub-mae-consumer
    tag: "v0.8.45"

datahub-mce-consumer:
  image:
    repository: linkedin/datahub-mce-consumer
    tag: "v0.8.45"

datahub-ingestion-cron:
  enabled: false
  image:
    repository: acryldata/datahub-ingestion
    tag: "v0.8.45"

elasticsearchSetupJob:
  enabled: true
  image:
    repository: linkedin/datahub-elasticsearch-setup
    tag: "v0.8.44"
  podSecurityContext:
    fsGroup: 1000
  securityContext:
    runAsUser: 1000
  podAnnotations: {}

kafkaSetupJob:
  enabled: true
  image:
    repository: linkedin/datahub-kafka-setup
    tag: "v0.8.45"
  podSecurityContext:
    fsGroup: 1000
  securityContext:
    runAsUser: 1000
  podAnnotations: {}

mysqlSetupJob:
  enabled: true
  image:
    repository: acryldata/datahub-mysql-setup
    tag: "v0.8.45"
  podSecurityContext:
    fsGroup: 1000
  securityContext:
    runAsUser: 1000
  podAnnotations: {}

postgresqlSetupJob:
  enabled: false
  image:
    repository: acryldata/datahub-postgres-setup
    tag: "v0.8.45"
  podSecurityContext:
    fsGroup: 1000
  securityContext:
    runAsUser: 1000
  podAnnotations: {}

datahubUpgrade:
  enabled: true
  image:
    repository: acryldata/datahub-upgrade
    tag: "v0.8.45"
  batchSize: 1000
  batchDelayMs: 100
  noCodeDataMigration:
    sqlDbType: "MYSQL"
    # sqlDbType: "POSTGRES"
  podSecurityContext: {}
    # fsGroup: 1000
  securityContext: {}
    # runAsUser: 1000
  podAnnotations: {}
  restoreIndices:
    resources:
      limits:
        cpu: 500m
        memory: 512Mi
      requests:
        cpu: 300m
        memory: 256Mi

prometheus-kafka-exporter:
  enabled: false
  kafkaServer:
  - prerequisites-kafka:9092  # <<release-name>>-kafka:9092
  # Sarama logging
  sarama:
    logEnabled: true
  prometheus:
    serviceMonitor:
      enabled: true
      namespace: monitoring
      interval: "30s"
      # If serviceMonitor is enabled and you want prometheus to automatically register
      # target using serviceMonitor, add additionalLabels with prometheus release name
      # e.g. If you have deployed kube-prometheus-stack with release name kube-prometheus
      # then additionalLabels will be
      # additionalLabels:
      #   release: kube-prometheus
      additionalLabels: {}
      targetLabels: []

global:
  graph_service_impl: neo4j
  datahub_analytics_enabled: true
  datahub_standalone_consumers_enabled: true

  elasticsearch:
    host: "elasticsearch-master"
    port: "9200"
    skipcheck: "false"
    insecure: "false"

  kafka:
    bootstrap:
      server: "prerequisites-kafka:9092"
    zookeeper:
      server: "prerequisites-zookeeper:2181"
    ## For AWS MSK set this to a number larger than 1
    # partitions: 3
    # replicationFactor: 3
    schemaregistry:
      url: "<http://prerequisites-cp-schema-registry:8081>"
      # type: AWS_GLUE
      # glue:
      #   region: us-east-1
      #   registry: datahub

  neo4j:
    host: "prerequisites-neo4j-community:7474"
    uri: "<bolt://prerequisites-neo4j-community>"
    username: "neo4j"
    password:
      secretRef: neo4j-secrets
      secretKey: neo4j-password

  sql:
    datasource:
      host: "prerequisites-mysql:3306"
      hostForMysqlClient: "prerequisites-mysql"
      port: "3306"
      url: "jdbc:<mysql://prerequisites-mysql:3306/datahub?verifyServerCertificate=false&useSSL=true&useUnicode=yes&characterEncoding=UTF-8&enabledTLSProtocols=TLSv1.2>"
      driver: "com.mysql.cj.jdbc.Driver"
      username: "root"
      password:
        secretRef: mysql-secrets
        secretKey: mysql-root-password

      ## Use below for usage of PostgreSQL instead of MySQL
      # host: "prerequisites-postgresql:5432"
      # hostForpostgresqlClient: "prerequisites-postgresql"
      # port: "5432"
      # url: "jdbc:<postgresql://prerequisites-postgresql:5432/datahub>"
      # driver: "org.postgresql.Driver"
      # username: "postgres"
      # password:
      #   secretRef: postgresql-secrets
      #   secretKey: postgres-password

  datahub:
    gms:
      port: "8080"
      nodePort: "30001"

    monitoring:
      enablePrometheus: true

    mae_consumer:
      port: "9091"
      nodePort: "30002"

    appVersion: "1.0"

    encryptionKey:
      secretRef: "datahub-encryption-secrets"
      secretKey: "encryption_key_secret"
      # Set to false if you'd like to provide your own secret.
      provisionSecret: true

    managed_ingestion:
      enabled: true
      defaultCliVersion: "0.8.45"

    metadata_service_authentication:
      enabled: false
      systemClientId: "__datahub_system"
      systemClientSecret:
        secretRef: "datahub-auth-secrets"
        secretKey: "token_service_signing_key"
      tokenService:
        signingKey:
          secretRef: "datahub-auth-secrets"
          secretKey: "token_service_signing_key"
        salt:
          secretRef: "datahub-auth-secrets"
          secretKey: "token_service_salt"
      # Set to false if you'd like to provide your own auth secrets
      provisionSecrets: true

#  hostAliases:
#    - ip: "192.168.0.104"
#      hostnames:
#        - "broker"
#        - "mysql"
#        - "postgresql"
#        - "elasticsearch"
#        - "neo4j"

## Add below to enable SSL for kafka
#  credentialsAndCertsSecrets:
#    name: datahub-certs
#    path: /mnt/datahub/certs
#    secureEnv:
#      ssl.key.password: datahub.linkedin.com.KeyPass
#      ssl.keystore.password: datahub.linkedin.com.KeyStorePass
#      ssl.truststore.password: datahub.linkedin.com.TrustStorePass
#      kafkastore.ssl.truststore.password: datahub.linkedin.com.TrustStorePass
#
#  springKafkaConfigurationOverrides:
#    ssl.keystore.location: /mnt/datahub/certs/datahub.linkedin.com.keystore.jks
#    ssl.truststore.location: /mnt/datahub/certs/datahub.linkedin.com.truststore.jks
#    kafkastore.ssl.truststore.location: /mnt/datahub/certs/datahub.linkedin.com.truststore.jks
#    security.protocol: SSL
#    kafkastore.security.protocol: SSL
#    ssl.keystore.type: JKS
#    ssl.truststore.type: JKS
#    ssl.protocol: TLS
#    ssl.endpoint.identification.algorithm:
@bulky-electrician-72362 PFA the values.yaml
b
datahub-mce-consumer.enabled: true
datahub-mae-consumer.enabled: true
try adding these
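i.e., alongside the image settings already present in the values.yaml above, something like:

```yaml
datahub-mae-consumer:
  enabled: true

datahub-mce-consumer:
  enabled: true
```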
e
@bulky-electrician-72362 got it by adding datahub_standalone_consumers_enabled: true
NAME                                               READY   STATUS      RESTARTS   AGE
datahub-acryl-datahub-actions-6755f74bf4-lsjrp     1/1     Running     0          7m34s
datahub-datahub-frontend-9f55dbb4-mmrln            1/1     Running     0          7m35s
datahub-datahub-gms-77b65bdc55-gwqvf               1/1     Running     0          7m35s
datahub-datahub-mae-consumer-7769d7bc6d-5mbwt      1/1     Running     0          7m34s
datahub-datahub-mce-consumer-79f7877f5f-vn54j      1/1     Running     0          7m35s
datahub-datahub-upgrade-job-nvcd2                  0/1     Completed   0          4m48s
datahub-datahub-upgrade-job-tfgl4                  0/1     Error       0          7m30s
datahub-elasticsearch-setup-job-k6sxm              0/1     Completed   0          9m56s
datahub-kafka-setup-job-n4jlp                      0/1     Completed   0          9m48s
datahub-mysql-setup-job-nqstf                      0/1     Completed   0          7m43s
elasticsearch-master-0                             1/1     Running     0          4h25m
elasticsearch-master-1                             0/1     Pending     0          72m
elasticsearch-master-2                             1/1     Running     0          72m
prerequisites-cp-schema-registry-cf79bfccf-c9jwp   2/2     Running     0          79m
prerequisites-kafka-0                              1/1     Running     0          79m
prerequisites-mysql-0                              1/1     Running     0          79m
prerequisites-neo4j-community-0                    1/1     Running     0          79m
prerequisites-zookeeper-0                          1/1     Running     0          79m
b
👍
e
@bulky-electrician-72362 thank you for your guidance.
b
glad it worked out
e
@bulky-electrician-72362 is there any alternative way to set up ingress? Currently I am facing issues with getting an AWS ACM certificate.
b
You can use any ingress controller. Just point it to the frontend service.
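As a sketch of that idea using the NGINX ingress controller instead of ALB (the service name and port assume a default chart install, where the frontend service is datahub-datahub-frontend on port 9002, and the host is a placeholder; verify with `kubectl get svc`):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: datahub-frontend
spec:
  ingressClassName: nginx        # assumes an nginx ingress controller is installed
  rules:
    - host: datahub.example.com  # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: datahub-datahub-frontend  # frontend service from the chart
                port:
                  number: 9002                  # default frontend service port
```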
e
@bulky-electrician-72362
datahub-frontend:
  enabled: true
  image:
    repository: linkedin/datahub-frontend-react
    tag: "latest"
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: alb
      alb.ingress.kubernetes.io/scheme: internet-facing
      alb.ingress.kubernetes.io/target-type: instance
      alb.ingress.kubernetes.io/certificate-arn: <<certificate-arn>>
      alb.ingress.kubernetes.io/inbound-cidrs: 0.0.0.0/0
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
      alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": {"Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    hosts:
      - host: <<host-name>>
        redirectPaths:
          - path: /*
            name: ssl-redirect
            port: use-annotation
        paths:
          - /*
@bulky-electrician-72362 as per the deployment doc for AWS, I have to replace the <<certificate-arn>> with an AWS ACM public certificate and redeploy DataHub; I am stuck here
b
do you have an AWS ACM public certificate?
what are you experiencing as the problem?
e
@bulky-electrician-72362 no, I do not have a certificate
@bulky-electrician-72362 is there any other way to set up ingress to DataHub? The AWS public certificate is not getting issued and has failed many times. Please suggest.
b
hey @early-afternoon-71938, do you have the certificate on ACM:
have to replace the <<certificate-arn>> with an AWS ACM public certificate and redeploy DataHub
here you said you have the certificate. Do you have it or not? Can you access DataHub, and is only HTTPS the problem? See more here: https://aws.amazon.com/premiumsupport/knowledge-center/terminate-https-traffic-eks-acm/
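If the goal is just to get a public certificate issued, a hedged sketch of requesting one via the AWS CLI with DNS validation (the domain and region are placeholders; ACM will not issue the certificate until the validation CNAME record is added to your DNS zone):

```sh
# Request a public certificate; note the ARN printed in the output.
aws acm request-certificate \
  --domain-name datahub.example.com \
  --validation-method DNS \
  --region us-east-1

# Shows the CNAME record you must create for validation, plus the issuance status.
aws acm describe-certificate --certificate-arn <arn-from-previous-command>
```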