# all-things-deployment
b
Hi all, when trying to deploy the prerequisites with Helm charts I keep getting these errors. I already defined our persistentVolumeClaim inside the chart with our default storageClassName, so what am I missing?
values.yaml:
```yaml
# Default configuration for pre-requisites to get you started
# Copy this file and update to the configuration of choice
elasticsearch:
  enabled: true   # set this to false, if you want to provide your own ES instance.
  replicas: 3
  minimumMasterNodes: 1
  # Set replicas to 1 and uncomment this to allow the instance to be scheduled on
  # a master node when deploying on a single node Minikube / Kind / etc cluster.
  # antiAffinity: "soft"

  # # If you're running a single-replica cluster add the following helm value
  # clusterHealthCheckParams: "wait_for_status=yellow&timeout=1s"

  # # Shrink default JVM heap.
  # esJavaOpts: "-Xmx128m -Xms128m"

  # # Allocate smaller chunks of memory per pod.
  resources:
    requests:
      cpu: "100m"
      memory: "100M"
    limits:
      cpu: "1000m"
      memory: "512M"

  # # Request smaller persistent volumes.
  volumeClaimTemplate:
    accessModes: ["ReadWriteOnce"]
    storageClassName: "rook-ceph-block"
    resources:
      requests:
        storage: 100M

# Official neo4j chart uses the Neo4j Enterprise Edition which requires a license
neo4j:
  enabled: false  # set this to true, if you have a license for the enterprise edition
  acceptLicenseAgreement: "yes"
  defaultDatabase: "graph.db"
  neo4jPassword: "datahub"
  # For better security, add password to neo4j-secrets k8s secret and uncomment below
  # existingPasswordSecret: neo4j-secrets
  core:
    standalone: true
  

# Deploys neo4j community version. Only supports single node
neo4j-community:
  enabled: false   # set this to false, if you have a license for the enterprise edition
  acceptLicenseAgreement: "yes"
  defaultDatabase: "graph.db"
  # For better security, add password to neo4j-secrets k8s secret and uncomment below
  existingPasswordSecret: neo4j-secrets

  # resources:
  #   requests:
  #     cpu: 0.5
  #     memory: "512M"
  #   limits:
  #     cpu: 1
  #     memory: "1000M"

  # # # Request smaller persistent volumes.
  # volumeClaimTemplate:
  #   accessModes: ["ReadWriteOnce"]
  #   storageClassName: "rook-ceph-block"
  #   resources:
  #     requests:
  #       storage: 200M

mysql:
  enabled: true
  auth:
    # For better security, add mysql-secrets k8s secret with mysql-root-password, mysql-replication-password and mysql-password
    existingSecret: mysql-secrets
  
  resources:
    requests:
      cpu: 0.5
      memory: "512M"
    limits:
      cpu: 1
      memory: "1000M"

  # # Request smaller persistent volumes.
  volumeClaimTemplate:
    accessModes: ["ReadWriteOnce"]
    storageClassName: "rook-ceph-block"
    resources:
      requests:
        storage: 200M

cp-helm-charts:
  # Schema registry is under the community license
  cp-schema-registry:
    enabled: true
    kafka:
      bootstrapServers: "prerequisites-kafka:9092"  # <<release-name>>-kafka:9092

  cp-kafka:
    enabled: false
  cp-zookeeper:
    enabled: false
  cp-kafka-rest:
    enabled: false
  cp-kafka-connect:
    enabled: false
  cp-ksql-server:
    enabled: false
  cp-control-center:
    enabled: false

# Bitnami version of Kafka that deploys open source Kafka <https://artifacthub.io/packages/helm/bitnami/kafka>
kafka:
  enabled: true
  resources:
    requests:
      cpu: 0.5
      memory: "512M"
    limits:
      cpu: 1
      memory: "1000M"

  # # Request smaller persistent volumes.
  volumeClaimTemplate:
    accessModes: ["ReadWriteOnce"]
    storageClassName: "rook-ceph-block"
    resources:
      requests:
        storage: 300M
```
b
Have you set persistence.enabled: true? You need to set these configurations in values.yaml to use a persistent volume with Kafka.
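A quick sanity check worth running first (a sketch, assuming kubectl is pointed at the right cluster): the "no storage class is set" event means the claim has neither an explicit storageClassName nor a cluster default StorageClass to fall back on.

```bash
# List StorageClasses; the default one is marked "(default)"
kubectl get storageclass

# Show pending claims and which class (if any) each one requested
kubectl get pvc -A
```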
b
Is it under the prerequisites values.yaml? Under which category?
Can I override it from the prerequisites, or do I need to install the cp-helm-charts?
b
Yes, you can add it here for Confluent Kafka, which is disabled by default: https://github.com/acryldata/datahub-helm/blob/master/charts/prerequisites/values.yaml#:~:text=name%3E%3E%2Dkafka%3A[…]Dkafka%3A,-enabled%3A%20false. For Bitnami Kafka you can add the config in values.yaml; the persistence configuration is documented here: https://artifacthub.io/packages/helm/bitnami/kafka?modal=values. Enable persistence and add the appropriate values under kafka.
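To see every persistence key the Bitnami subchart accepts, you can also dump its default values locally (a sketch, assuming the bitnami repo has not been added yet):

```bash
# Add the Bitnami repo once, then print the kafka chart's default values
helm repo add bitnami https://charts.bitnami.com/bitnami
helm show values bitnami/kafka | grep -n -A 8 "persistence:"
```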
b
So I don’t have to use Confluent Kafka if I choose Bitnami Kafka?
b
Yes, keep cp-kafka.enabled as false.
b
So why is it asking for a persistent volume for mysql-0 and kafka-0 / zookeeper?
Do I need to set storageClass?
b
The screenshot that you shared shows the error for Kafka only.
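You can also get the exact binding failure per claim straight from the API (a sketch; the claim name comes from your own release, and the default namespace is assumed):

```bash
# Events for a single claim, including why it cannot bind
kubectl describe pvc data-prerequisites-kafka-0
```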
b
I’m getting this error for mysql and zookeeper as well:
```
LAST SEEN   TYPE      REASON                            OBJECT                                                                                    MESSAGE
24m         Normal    FailedBinding                     persistentvolumeclaim/data-prerequisites-kafka-0                                          no persistent volumes available for this claim and no storage class is set
9m6s        Normal    FailedBinding                     persistentvolumeclaim/data-prerequisites-kafka-0                                          no persistent volumes available for this claim and no storage class is set
6s          Normal    FailedBinding                     persistentvolumeclaim/data-prerequisites-kafka-0                                          no persistent volumes available for this claim and no storage class is set
24m         Normal    FailedBinding                     persistentvolumeclaim/data-prerequisites-mysql-0                                          no persistent volumes available for this claim and no storage class is set
9m6s        Normal    FailedBinding                     persistentvolumeclaim/data-prerequisites-mysql-0                                          no persistent volumes available for this claim and no storage class is set
6s          Normal    FailedBinding                     persistentvolumeclaim/data-prerequisites-mysql-0                                          no persistent volumes available for this claim and no storage class is set
24m         Normal    FailedBinding                     persistentvolumeclaim/data-prerequisites-zookeeper-0                                      no persistent volumes available for this claim and no storage class is set
9m6s        Normal    FailedBinding                     persistentvolumeclaim/data-prerequisites-zookeeper-0                                      no persistent volumes available for this claim and no storage class is set
6s          Normal    FailedBinding                     persistentvolumeclaim/data-prerequisites-zookeeper-0                                      no persistent volumes available for this claim and no storage class is set
```
though I already set the storageClass inside the values.yaml.
b
You need to remove the block below that you added under elasticsearch, mysql, and kafka:
```yaml
volumeClaimTemplate:
    accessModes: ["ReadWriteOnce"]
    storageClassName: "rook-ceph-block"
    resources:
      requests:
        storage: 200M
```
and instead add an entry for persistence. For kafka it would look like:
```yaml
kafka:
  enabled: true 
  persistence:
    enabled: true
    storageClass: "rook-ceph-block"
    accessModes:
      - ReadWriteOnce
    size: 200M
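```

After updating values.yaml, re-running the deploy should recreate the pending claims. A minimal sketch, assuming the release is named prerequisites (matching the PVC names above) and the DataHub chart repo is already added:

```bash
helm upgrade --install prerequisites datahub/datahub-prerequisites --values values.yaml
```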
b
Ok, thanks.
The mysql and zookeeper pods are still Pending, failing to bind to storage.
This is my values.yaml:
```yaml
# Default configuration for pre-requisites to get you started
# Copy this file and update to the configuration of choice
elasticsearch:
  enabled: true   # set this to false, if you want to provide your own ES instance.
  replicas: 3
  minimumMasterNodes: 1
  # Set replicas to 1 and uncomment this to allow the instance to be scheduled on
  # a master node when deploying on a single node Minikube / Kind / etc cluster.
  # antiAffinity: "soft"

  # # If you're running a single-replica cluster add the following helm value
  # clusterHealthCheckParams: "wait_for_status=yellow&timeout=1s"

  # # Shrink default JVM heap.
  # esJavaOpts: "-Xmx128m -Xms128m"

  # # Allocate smaller chunks of memory per pod.
  resources:
    requests:
      cpu: "0.1"
      memory: "100M"
    limits:
      cpu: "1000m"
      memory: "512M"

  # # Request smaller persistent volumes.
  # volumeClaimTemplate:
  #   accessModes: ["ReadWriteOnce"]
  #   storageClassName: "rook-ceph-block"
  #   resources:
  #     requests:
  #       storage: 100M

# Official neo4j chart uses the Neo4j Enterprise Edition which requires a license
neo4j:
  enabled: false  # set this to true, if you have a license for the enterprise edition
  acceptLicenseAgreement: "yes"
  defaultDatabase: "graph.db"
  neo4jPassword: "datahub"
  # For better security, add password to neo4j-secrets k8s secret and uncomment below
  # existingPasswordSecret: neo4j-secrets
  core:
    standalone: true
  

# Deploys neo4j community version. Only supports single node
neo4j-community:
  enabled: false   # set this to false, if you have a license for the enterprise edition
  acceptLicenseAgreement: "yes"
  defaultDatabase: "graph.db"
  # For better security, add password to neo4j-secrets k8s secret and uncomment below
  existingPasswordSecret: neo4j-secrets

  # resources:
  #   requests:
  #     cpu: 0.5
  #     memory: "512M"
  #   limits:
  #     cpu: 1
  #     memory: "1000M"

mysql:
  enabled: true
  auth:
    # For better security, add mysql-secrets k8s secret with mysql-root-password, mysql-replication-password and mysql-password
    existingSecret: mysql-secrets
  
  resources:
    requests:
      cpu: 0.5
      memory: "512M"
    limits:
      cpu: 1
      memory: "1000M"

cp-helm-charts:
  # Schema registry is under the community license

  cp-schema-registry:
    enabled: true
    kafka:
      bootstrapServers: "prerequisites-kafka:9092"  # <<release-name>>-kafka:9092

  cp-kafka:
    enabled: false
  cp-zookeeper:
    enabled: false
  cp-kafka-rest:
    enabled: false
  cp-kafka-connect:
    enabled: false
  cp-ksql-server:
    enabled: false
  cp-control-center:
    enabled: false

# Bitnami version of Kafka that deploys open source Kafka <https://artifacthub.io/packages/helm/bitnami/kafka>
kafka:
  enabled: true
  resources:
    requests:
      cpu: 0.5
      memory: "512M"
    limits:
      cpu: 1
      memory: "1000M"

  persistence:
    enabled: true
    storageClass: "rook-ceph-block"
    accessModes:
      - ReadWriteOnce
    size: 200M

  # # Request smaller persistent volumes.
  # volumeClaimTemplate:
  #   accessModes: ["ReadWriteOnce"]
  #   storageClassName: "rook-ceph-block"
  #   resources:
  #     requests:
  #       storage: 300M
```
When I added back the storage claim, the Elasticsearch pods are running again.
If it's set to false it uses an emptyDir volume on the local host, and you may lose data once the container dies.
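For reference, this is roughly what such a chart renders into the pod spec when persistence is disabled (illustrative, not taken from the chart source):

```yaml
# emptyDir lives on the node's local disk and is deleted with the pod
volumes:
  - name: data
    emptyDir: {}
```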
b
But still no class was bound.
Where do I set the PVC for zookeeper?
Ok, I managed to set the zookeeper PV under the Bitnami kafka section:
```yaml
kafka:
  enabled: true
  resources:
    requests:
      cpu: 0.5
      memory: "512M"
    limits:
      cpu: 1
      memory: "1000M"

  persistence:
    enabled: true
    storageClass: "rook-ceph-block"
    accessModes:
      - ReadWriteOnce
    size: 200M

  zookeeper:
    enabled: true
    persistence:
      enabled: true
      storageClass: "rook-ceph-block"
      accessModes: ["ReadWriteOnce"]
      size: 200M
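```

The mysql claim from the events above needs the same treatment. In recent versions of the Bitnami MySQL chart, persistence is configured under the primary architecture block rather than at the top level; a sketch, assuming a chart version with the primary/secondary layout (older versions use mysql.persistence directly):

```yaml
mysql:
  enabled: true
  primary:
    persistence:
      enabled: true
      storageClass: "rook-ceph-block"
      accessModes: ["ReadWriteOnce"]
      size: 200M
```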