# troubleshoot
s
Hello team, back with another question: https://github.com/acryldata/datahub-helm/issues/25 Do we have any workaround for this issue? We are currently stuck on our SSO implementation because of it. cc: @crooked-market-47728
e
Hey! We recently identified and fixed the issue! Can you try with the latest version?
s
Thank you @early-lamp-41924. We will try it and let you know.
c
Hey @early-lamp-41924! Now I don’t get any error on Kafka, but when I perform an upgrade, or even a fresh installation, it times out.
This is our values.yaml. It is like the default, but with Google Auth and an AWS Load Balancer, everything from the documentation:
Copy code
# Values to start up datahub after starting up the datahub-prerequisites chart with "prerequisites" release name
# Copy this chart and change configuration as needed.
datahub-gms:
  enabled: true
  image:
    repository: linkedin/datahub-gms
    tag: "v0.8.11"

datahub-frontend:
  enabled: true
  image:
    repository: linkedin/datahub-frontend-react
    tag: "v0.8.11"
  extraEnvs:
    - name: AUTH_OIDC_ENABLED
      value: true
    - name: AUTH_OIDC_CLIENT_ID
      value: XXXXXXXXXXX
    - name: AUTH_OIDC_CLIENT_SECRET
      value: XXXXXXXXXXX
    - name: AUTH_OIDC_DISCOVERY_URI
      value: https://accounts.google.com/.well-known/openid-configuration
    - name: AUTH_OIDC_BASE_URL
      value: http://datahub.de.dev.kavak.services:9002/
    - name: AUTH_OIDC_SCOPE
      value: "openid profile email"
    - name: AUTH_OIDC_USER_NAME_CLAIM
      value: "email"
    - name: AUTH_OIDC_USER_NAME_CLAIM_REGEX
      value: ([^@]+)

  # Set up ingress to expose react front-end
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: alb
      alb.ingress.kubernetes.io/scheme: internet-facing
      alb.ingress.kubernetes.io/target-type: instance
      alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-2:010307922527:certificate/XXXXXXXXXX
      alb.ingress.kubernetes.io/inbound-cidrs: 0.0.0.0/0
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
      alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    hosts:
      - host: datahub.de.dev.kavak.services
        redirectPaths:
          - path: /*
            name: ssl-redirect
            port: use-annotation
        paths:
          - /*

datahub-mae-consumer:
  image:
    repository: linkedin/datahub-mae-consumer
    tag: "v0.8.11"

datahub-mce-consumer:
  image:
    repository: linkedin/datahub-mce-consumer
    tag: "v0.8.11"

datahub-ingestion-cron:
  enabled: false
  image:
    repository: linkedin/datahub-ingestion
    tag: "v0.8.11"

elasticsearchSetupJob:
  enabled: true
  image:
    repository: linkedin/datahub-elasticsearch-setup
    tag: "v0.8.11"

kafkaSetupJob:
  enabled: true
  image:
    repository: linkedin/datahub-kafka-setup
    tag: "v0.8.11"

mysqlSetupJob:
  enabled: true
  image:
    repository: acryldata/datahub-mysql-setup
    tag: "v0.8.11"

datahubUpgrade:
  enabled: true
  image:
    repository: acryldata/datahub-upgrade
    tag: "v0.8.11"
  noCodeDataMigration:
    sqlDbType: "MYSQL"

global:
  graph_service_impl: neo4j
  datahub_analytics_enabled: true
  datahub_standalone_consumers_enabled: false

  elasticsearch:
    host: "elasticsearch-master"
    port: "9200"

  kafka:
    bootstrap:
      server: "prerequisites-kafka:9092"
    zookeeper:
      server: "prerequisites-zookeeper:2181"
    schemaregistry:
      url: "http://prerequisites-cp-schema-registry:8081"

  neo4j:
    host: "prerequisites-neo4j-community:7474"
    uri: "bolt://prerequisites-neo4j-community"
    username: "neo4j"
    password:
      secretRef: neo4j-secrets
      secretKey: neo4j-password

  sql:
    datasource:
      host: "prerequisites-mysql:3306"
      hostForMysqlClient: "prerequisites-mysql"
      port: "3306"
      url: "jdbc:mysql://prerequisites-mysql:3306/datahub?verifyServerCertificate=false&useSSL=true&useUnicode=yes&characterEncoding=UTF-8&enabledTLSProtocols=TLSv1.2"
      driver: "com.mysql.jdbc.Driver"
      username: "root"
      password:
        secretRef: mysql-secrets
        secretKey: mysql-root-password

  datahub:
    gms:
      port: "8080"
    mae_consumer:
      port: "9091"
    appVersion: "1.0"
e
@crooked-market-47728 Did the same setup work before, or is this the first time running it through k8s?
c
It had been working the whole time, before the issue with Kafka.
e
Can you try setting global.kafka.partitions to 1 in the values.yaml file?
Testing locally as well
c
tried:
Copy code
global:
  graph_service_impl: neo4j
  datahub_analytics_enabled: true
  datahub_standalone_consumers_enabled: false
  kafka.partitions: 1
And
Copy code
kafka:
    bootstrap:
      server: "prerequisites-kafka:9092"
    zookeeper:
      server: "prerequisites-zookeeper:2181"
    partitions: 3
    # replicationFactor: 3
    schemaregistry:
      url: "http://prerequisites-cp-schema-registry:8081"
And same timeout issue (on a clean installation).
e
Confirmed the issue. For now, can you set partitions to 1 and replicationFactor to 1?
I’ll push out the update
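(For reference, the workaround suggested above would look like this in values.yaml — a sketch based on the global.kafka block earlier in the thread, assuming the same "prerequisites" release names:)

```yaml
global:
  kafka:
    bootstrap:
      server: "prerequisites-kafka:9092"
    zookeeper:
      server: "prerequisites-zookeeper:2181"
    # Workaround: with the single-broker prerequisites Kafka, the chart's
    # current topic defaults time out, so pin both settings to 1.
    partitions: 1
    replicationFactor: 1
    schemaregistry:
      url: "http://prerequisites-cp-schema-registry:8081"
```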
c
Now it works!
Thanks @early-lamp-41924!
e
Sorry about that. We changed some things to fix the AWS MSK setup, but that caused the default settings to fail. Working on a proper fix now.
s
thanks a lot @early-lamp-41924
c
Hey @early-lamp-41924, last one (I promise 🙏). I configured Google OIDC with these steps for Kubernetes, but it fails with this error:
Copy code
Error: UPGRADE FAILED: failed to create resource: Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Env: []v1.EnvVar: v1.EnvVar.Value: ReadString: expects " or n, but found t, error found in #10 byte of ...|,"value":true},{"nam|..., bigger context ...|geEvent_v1"},{"name":"AUTH_OIDC_ENABLED","value":true},{"name":"AUTH_OIDC_CLIENT_ID","value":"679178|...
I tried with secrets and without secrets in k8s; my values.yaml is configured like this:
Copy code
datahub-frontend:
  enabled: true
  image:
    repository: linkedin/datahub-frontend-react
    tag: "v0.8.11"
  extraEnvs:
    - name: AUTH_OIDC_ENABLED
      value: true
    - name: AUTH_OIDC_CLIENT_ID
      value: XXXXXXXXXXXXXX
    - name: AUTH_OIDC_CLIENT_SECRET
      value: XXXXXXXXXXXXXX
    - name: AUTH_OIDC_DISCOVERY_URI
      value: https://accounts.google.com/.well-known/openid-configuration
    - name: AUTH_OIDC_BASE_URL
      value: http://PUBLIC-URL:9002/
    - name: AUTH_OIDC_SCOPE
      value: "openid profile email"
    - name: AUTH_OIDC_USER_NAME_CLAIM
      value: "email"
    - name: AUTH_OIDC_USER_NAME_CLAIM_REGEX
      value: ([^@]+)
To create the Google Auth OIDC I used these steps.
Could you give me some idea of where to look, or which team could help me? Thanks!
e
^ cc @big-carpet-38439
b
@crooked-market-47728 What error are you seeing?
Can you try removing the slash at the end of `http://PUBLIC-URL:9002/`?
c
Hey @big-carpet-38439! I removed the final `/` but same error!
Copy code
Error: Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Env: []v1.EnvVar: v1.EnvVar.Value: ReadString: expects " or n, but found t, error found in #10 byte of ...|,"value":true},{"nam|..., bigger context ...|geEvent_v1"},{"name":"AUTH_OIDC_ENABLED","value":true},{"name":"AUTH_OIDC_CLIENT_ID","value":"679178|...
I'm attaching my values.yaml, but it is like the default, with OIDC and ELB, everything from the documentation.
Hey @big-carpet-38439, did you see my message? I have been looking around, and another person is having the same issue, with more info: https://github.com/acryldata/datahub-helm/issues/24
b
Hey Gabe!
Yes, I do not see any issues in the configuration itself.
@early-lamp-41924 These OIDC configs look good to me. I think this seems to be a Helm issue?
e
Can you try putting true in quotes?
value: "true"
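(The quoting matters because a Kubernetes `v1.EnvVar` value must be a string: a bare `true` is parsed by YAML as a boolean, and the API server's JSON decoder then fails with the `ReadString: expects " or n, but found t` error shown above. A minimal sketch of the corrected entry:)

```yaml
datahub-frontend:
  extraEnvs:
    # Quote booleans: v1.EnvVar.value is typed as a string, so an
    # unquoted YAML `true` cannot be deserialized when the manifest
    # is applied.
    - name: AUTH_OIDC_ENABLED
      value: "true"
```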
c
Confirmed!!! That was the issue! https://datahubproject.io/docs/how/auth/sso/configure-oidc-react/#2-configure-datahub-frontend-server It would be nice if you could update the Kubernetes documentation!
Thanks!!!
s
Thank you, DataHub team, for helping us out with this issue. Thank you!
b
Oh man!! Awesome!
s
Gentle reminder to please post large blocks of code/stack trace in Slack message threads - it’s a HUGE help for the Core Team to keep track of which questions are still unaddressed across our various support channels!