Powered by Linen
all-things-deployment
creamy-van-28626 (03/13/2023, 2:44 PM)
Hi team, how have you set up multi-tenancy in DataHub?

flat-painter-78331 (03/14/2023, 6:07 AM)
Hi team, good day! Are there any replacement metrics I can use for this set of metrics?
metrics_com_linkedin_metadata_kafka_MetadataChangeLogProcessor_maeProcess_Mean
metrics_com_linkedin_metadata_kafka_MetadataAuditEventsProcessor_maeProcess_75thPercentile
metrics_com_linkedin_metadata_kafka_MetadataAuditEventsProcessor_maeProcess_95thPercentile
Thanks!

creamy-van-28626 (03/14/2023, 2:35 PM)
Hi team, I have written an Actions pipeline where I want to capture only MCP events. While the pipeline is running it produces no output, but after stopping it, it shows a processed-events count.

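For reference, the filtering in a question like this usually lives in the Actions pipeline config. A hypothetical sketch of a pipeline that only consumes metadata change log events from Kafka (the event_type value and the hello_world action type are assumptions; verify both against the acryl-datahub-actions docs for your version):

```yaml
# hypothetical Actions pipeline config -- the event_type name and action type
# are assumptions, check the Actions framework docs for your version
name: "mcl_only_pipeline"
source:
  type: "kafka"
  config:
    connection:
      bootstrap: ${KAFKA_BOOTSTRAP_SERVER:-localhost:9092}
      schema_registry_url: ${SCHEMA_REGISTRY_URL:-http://localhost:8081}
filter:
  event_type: "MetadataChangeLogEvent_v1"
action:
  type: "hello_world"
```

Note that the Actions framework buffers and commits offsets in batches, which may explain why counts only appear after the pipeline is stopped.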
best-umbrella-88325 (03/14/2023, 4:02 PM)
Hello community! We're trying to set up DataHub on AWS, using AWS Elasticsearch and AWS RDS as the persistence layer. When elasticsearch-setup is executed, we get this error:
    nachiket@LAPTOP-XXXXX:~$ kubectl logs -f datahub-elasticsearch-setup-job--1-dmwn4 -n datahub
    2023/03/14 15:26:27 Waiting for: <https://XXXXXX.us-west-1.es.amazonaws.com:443>
    2023/03/14 15:26:32 Received 200 from <https://XXXXXXX.us-west-1.es.amazonaws.com:443>
    going to use protocol: https
    going to use elastic headers based on username and password
    not using any prefix
    
     datahub_analytics_enabled: true
    
    >>> GET _ilm/policy/datahub_usage_event_policy response code is 000
    >>> failed to GET _ilm/policy/datahub_usage_event_policy ! -> exiting
Can someone help us with this, please? We are using v0.10.0 and chart version 0.2.154. Thanks in advance!

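As context for this log: a response code of 000 usually means the HTTP call could not complete at all, and _ilm is an Elastic-only API; against AWS-managed Elasticsearch/OpenSearch the setup job is normally switched to the AWS code path instead. A hedged sketch of the chart values, assuming the USE_AWS_ELASTICSEARCH flag that appears elsewhere in this thread applies to this chart version:

```yaml
# sketch: tell elasticsearch-setup it is talking to AWS-managed ES/OpenSearch,
# so it skips the Elastic-only _ilm API (verify the flag for your chart version)
elasticsearchSetupJob:
  extraEnvs:
    - name: USE_AWS_ELASTICSEARCH
      value: "true"
```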
limited-forest-73733 (03/14/2023, 5:11 PM)
Hey! I am integrating great-expectations with DataHub, but in the action_list I want to add a Kafka bootstrap and schema_registry URL instead of the DataHub GMS endpoint. I couldn't find any docs for this. Can anyone please help me out?

happy-camera-26449 (03/14/2023, 5:15 PM)
The application initially gets redirected to https (the discovery URL); it only works when I manually change https to http. Do we need to add a certificate to one of the containers? If so, how, and to which container? Thanks in advance!

victorious-spoon-76468 (03/14/2023, 6:38 PM)
Hey all! I'm currently following this guide to deploy DataHub with AWS OpenSearch. Our DataHub is deployed on EKS, we have attached the serviceAccount to the pods, and the AWS IAM role is whitelisted in the OpenSearch domain resource policy with the proper permissions, but elasticsearch-setup-job keeps returning the following error:
2023/03/14 18:16:12 Received 403 from https://<my_opensearch_endpoint>.us-east-1.es.amazonaws.com:443. Sleeping 1s
2023/03/14 18:16:13 Received 403 from https://<my_opensearch_endpoint>.us-east-1.es.amazonaws.com:443. Sleeping 1s
These are the chart values for ES:
elasticsearchSetupJob:
  enabled: true
  image:
    repository: linkedin/datahub-elasticsearch-setup
    tag: "v0.9.6"
  podSecurityContext:
    fsGroup: 1000
  securityContext:
    runAsUser: 1000
  podAnnotations: {}
  extraEnvs:
    - name: USE_AWS_ELASTICSEARCH
      value: "true"
    - name: OPENSEARCH_USE_AWS_IAM_AUTH
      value: "true"
  serviceAccount: "datahub"

global:
  elasticsearch:
    host: "<my_opensearch_endpoint>.us-east-1.es.amazonaws.com"
    port: "443"
    useSSL: true
    region: "us-east-1"
    index:
      enableMappingsReindex: true
      enableSettingsReindex: true
      upgrade:
        cloneIndices: true
        allowDocCountMismatch: false
    Can someone help us with this, please?
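One thing worth double-checking in a setup like the one described above is that the serviceAccount the job uses actually carries the IRSA annotation binding it to the whitelisted role. A minimal sketch, where the role ARN is a placeholder and only the annotation key is the standard EKS one:

```yaml
# sketch: IRSA binding for the setup job's serviceAccount
# (role ARN is a placeholder -- substitute the role whitelisted
# in the OpenSearch domain resource policy)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: datahub
  namespace: datahub
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<account_id>:role/<opensearch_access_role>
```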
fierce-finland-15121 (03/15/2023, 1:39 AM)
[solved] Hey all! I am attempting to deploy DataHub using the Helm chart and getting an interesting error. The pre-install seems to succeed; then I move on to helm install datahub datahub/datahub and get the following error after about 10 minutes:
Error: INSTALLATION FAILED: failed pre-install: timed out waiting for the condition
Looking around, I saw some threads implying that the likely reason is a failure to start some pods during the prerequisite step. I ran kubectl get pods and found one that was in fact in an error state:
    NAME                                      READY   STATUS                       RESTARTS        AGE
    datahub-datahub-system-update-job-wntk7   0/1     CreateContainerConfigError   0               5m14s
When I describe the pod, I see a warning event with the following description:
    Error: secret "datahub-auth-secrets" not found
According to the default values, this secret should automatically be created in the namespace. Here is the relevant portion:
# Set to false if you'd like to provide your own auth secrets
provisionSecrets:
  enabled: true
  autoGenerate: true
# Only specify if autoGenerate set to false
#  secretValues:
#    secret: <secret value>
#    signingKey: <signing key value>
#    salt: <salt value>
I am guessing I can fix this by generating these secrets myself, but I don't really care what they look like, so I would prefer the Helm chart to just take care of it. Is there something I am missing, or a reason this secret isn't being created? As a follow-up, if I do need to create these secrets myself, what format/values are they expecting? The salt, I can infer, is just a ~20-char string, but "secret" and "signing key" seem vague to me. Is the signing key supposed to be an RSA key, or just any random string?

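If it comes to creating the secret by hand, the values are generally just random strings rather than RSA keys. A hedged sketch that generates plausible values and prints (rather than runs) the kubectl command; the secret key names here are assumptions taken from the datahub-auth-secrets naming convention and should be verified against the chart's secret template:

```shell
# generate random values and print the create-secret command;
# the --from-literal key names are assumptions -- check the chart's
# templates for datahub-auth-secrets before using
SECRET=$(openssl rand -base64 32)
SIGNING_KEY=$(openssl rand -base64 32)
SALT=$(openssl rand -base64 16)
cat <<EOF
kubectl create secret generic datahub-auth-secrets -n datahub \\
  --from-literal=system_client_secret=$SECRET \\
  --from-literal=token_service_signing_key=$SIGNING_KEY \\
  --from-literal=salt=$SALT
EOF
```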
bumpy-activity-74405 (03/15/2023, 9:46 AM)
Hi, I have a few questions regarding upgrading to v0.10.0 (I am not using the provided Helm charts):
1. Is it OK to upgrade to this version straight from v0.8.44?
2. Is it OK to start datahub-upgrade while the old version is running, or should I stop the old instance first?
3. I see a tag v0.10.6 on Docker Hub for the datahub-upgrade image; should I use that one or stick with v0.10.0?

agreeable-belgium-70840 (03/15/2023, 1:04 PM)
Hello, I am trying to deploy DataHub via Docker Compose, running datahub docker quickstart, but ZooKeeper is failing with the error below. What can I do?
    [2023-03-15 10:35:38,353] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
    [2023-03-15 10:35:38,364] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
    [2023-03-15 10:35:38,364] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
    [2023-03-15 10:35:38,364] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
    [2023-03-15 10:35:38,365] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
    [2023-03-15 10:35:38,367] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
    [2023-03-15 10:35:38,368] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
    [2023-03-15 10:35:38,368] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
    [2023-03-15 10:35:38,368] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
    [2023-03-15 10:35:38,370] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil)
    [2023-03-15 10:35:38,370] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
    [2023-03-15 10:35:38,371] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
    [2023-03-15 10:35:38,371] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
    [2023-03-15 10:35:38,371] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
    [2023-03-15 10:35:38,371] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
    [2023-03-15 10:35:38,371] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
    [2023-03-15 10:35:38,389] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@46fa7c39 (org.apache.zookeeper.server.ServerMetrics)
    [2023-03-15 10:35:38,394] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
    [2023-03-15 10:35:38,395] ERROR Unable to access datadir, exiting abnormally (org.apache.zookeeper.server.ZooKeeperServerMain)
    org.apache.zookeeper.server.persistence.FileTxnSnapLog$DatadirException: Cannot write to data directory /var/lib/zookeeper/log/version-2
    	at org.apache.zookeeper.server.persistence.FileTxnSnapLog.<init>(FileTxnSnapLog.java:140)
    	at org.apache.zookeeper.server.ZooKeeperServerMain.runFromConfig(ZooKeeperServerMain.java:137)
    	at org.apache.zookeeper.server.ZooKeeperServerMain.initializeAndRun(ZooKeeperServerMain.java:112)
    	at org.apache.zookeeper.server.ZooKeeperServerMain.main(ZooKeeperServerMain.java:67)
    	at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:140)
    	at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:90)
square-solstice-69079 (03/15/2023, 1:46 PM)
I'm trying to upgrade from an older version from late 2022 to the newest version. See details here: https://datahubspace.slack.com/archives/C029A3M079U/p1678883016797609 The status after the upgrade is that I'm not able to log in, even with AUTH_OIDC_ENABLED=true or false. With SSO enabled I get the error:
Failed to perform post authentication steps. Error message: Failed to provision user with urn urn:li:corpuser:myname@domain.com.

rich-salesmen-77587 (03/15/2023, 2:57 PM)
Hi everyone, when is the Data Product feature coming on the DataHub roadmap?

cuddly-arm-8412 (03/16/2023, 1:53 AM)
Hi team, when I run datahub docker quickstart, Docker starts successfully, but GMS reports this error: Caused by: org.elasticsearch.ElasticsearchStatusException: Elasticsearch exception [type=index_not_found_exception, reason=no such index [datahubpolicyindex_v2]]

great-monkey-52307 (03/16/2023, 10:03 PM)
Hi team, can anyone provide suggestions, best practices, or any modifications required for the DataHub infrastructure below, which we plan to run in production?
1. DataHub deployed in an AKS cluster: a 3-worker-node cluster of Standard_D4s_v3 machines -> vCPUs (4), memory (16 GiB), OS disk size 128 GB, temporary (resource) disk size 32 GiB, combined IOPS 8000, uncached disk IOPS 6400. Please let me know if the application works and scales well on these machines.
2. The SQL pod is connected to a Postgres SQL server as the backend: compute size 4 vCores, 16 GiB memory, 6400 max IOPS, 128 GB storage.
3. Application Gateway as ingress for the DataHub app, with SSO integration with Azure Active Directory.
Please let me know if this configuration/setup is as expected for DataHub, so we can proceed.

great-toddler-2251 (03/17/2023, 12:01 AM)
Hi folks. We have OIDC set up for the frontend. It's working: we can log in via Okta. We're having a problem with the JIT provisioning of groups, though; the group the user belongs to is not getting created in DataHub. We have the following set in the environment:
AUTH_OIDC_SCOPE: openid profile email groups
AUTH_OIDC_EXTRACT_GROUPS_ENABLED: true
Any debugging suggestions?

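Two things worth checking for a setup like this are that the identity provider is actually including a groups claim in the token, and that the frontend is reading the claim name the provider emits. A hedged sketch of the relevant frontend environment (AUTH_OIDC_GROUPS_CLAIM is the variable documented for group extraction; verify the exact name for your version):

```yaml
# sketch: frontend env for OIDC group extraction; the claim name "groups"
# must match what Okta actually puts in the ID token
AUTH_OIDC_SCOPE: "openid profile email groups"
AUTH_OIDC_EXTRACT_GROUPS_ENABLED: "true"
AUTH_OIDC_GROUPS_CLAIM: "groups"
```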
microscopic-leather-94537 (03/17/2023, 10:01 AM)
Hi folks! I am using DataHub and want to restore my DataHub information. I followed the commands and steps to create a backup.sql file. When I installed DataHub on a new system and used the command to restore that SQL backup file, I expected to get the same information and restored data back, but I didn't. Has anyone done this, or can anyone help me out?

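For quickstart deployments, the CLI itself has backup/restore flags. A sketch that prints the command pair rather than executing it (the flag names come from the DataHub CLI quickstart docs and should be verified with datahub docker quickstart --help; the backup path is a placeholder):

```shell
# print (not run) the backup/restore command pair for a quickstart instance;
# verify flag names with `datahub docker quickstart --help` for your CLI version
BACKUP_FILE=./backup.sql
echo "datahub docker quickstart --backup --backup-file $BACKUP_FILE"
echo "datahub docker quickstart --restore --restore-file $BACKUP_FILE"
```

Note that a raw SQL restore also usually needs the search/graph indices rebuilt afterwards (the datahub-upgrade RestoreIndices path) before the UI shows the restored data.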
breezy-honey-91751 (03/17/2023, 8:50 PM)
Hi team, when I deploy DataHub on a MacBook (M1 chip), the Gradle build fails on the yarn step. Can anyone help with this?

creamy-van-28626 (03/20/2023, 7:50 AM)
Hi team, regarding the vulnerabilities, one of my team members raised these GitHub advisories earlier:
https://github.com/datahub-project/datahub/security/advisories/GHSA-wxq2-3f82-2xjj
https://github.com/datahub-project/datahub/security/advisories/GHSA-pq63-59c2-mxvq
https://github.com/datahub-project/datahub/security/advisories/GHSA-pxmp-fgpq-7pxp
https://github.com/datahub-project/datahub/security/advisories/GHSA-3p2c-f3j7-cxjm
https://github.com/datahub-project/datahub/security/advisories/GHSA-524j-pgvx-4wf9
https://github.com/datahub-project/datahub/security/advisories/GHSA-2q7w-7r2r-572w
https://github.com/datahub-project/datahub/security/advisories/GHSA-92v9-rh86-wgrv
Can you please provide an update on when we can expect these to be resolved?

gifted-diamond-19544 (03/20/2023, 10:11 AM)
Hello all! What is the impact of the env variable DATAHUB_SERVER_TYPE on the GMS container? What are the possible values, and what impact do the various options have? In the quickstart docker-compose this comes set as quickstart; what is the impact of that, and should we change it when we deploy? Thank you!

brief-oyster-50637 (03/20/2023, 2:21 PM)
💡❓ What's the best way of managing a customized version of DataHub while keeping up to date with the official releases? We started this discussion in another thread and thought it would be worth a dedicated thread. Maybe this is more of a broad open-source/Git/CI-CD question than a DataHub one, but it will help many folks who are customizing DataHub. It would be awesome to get some guidance from the DataHub team, or from someone who already deploys and maintains a customized version. This is the approach we have thought of so far; we'd like to validate whether it makes sense and whether there are better ways of doing it:
• Fork DataHub to our organization's GitHub
• Create a branch for our customizations
• Set up a CI/CD pipeline to build and deploy from our customized branch
• Integrating official DataHub releases: fetch and merge into our forked master, then compare and merge into our customized branch, resolving possible conflicts; then build from this branch (the CI/CD of the last step)
Is this the correct/best way of doing it? Are there important gaps in it? Any input is very welcome!

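The branch mechanics described above can be sketched offline like this (the clone of the real fork is replaced by a local init so the sketch runs without network access; the actual fetch/merge against upstream is shown as comments):

```shell
# offline sketch of the fork layout: one pristine master tracking upstream,
# one "custom" branch that CI/CD builds and deploys from
git init -q datahub-fork && cd datahub-fork
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "seed"
git branch -M master
git remote add upstream https://github.com/datahub-project/datahub.git
git checkout -q -b custom        # customization branch, deployed by CI/CD
# on each official release (run against the real fork, needs network):
#   git fetch upstream --tags
#   git checkout master && git merge upstream/master   # keep master pristine
#   git checkout custom && git merge master            # resolve conflicts, rebuild
git branch --show-current
```

Keeping master pristine (never committing customizations to it) is what makes the upstream merge trivial; all conflict resolution then happens in one place, on the custom branch.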
cuddly-plumber-64837 (03/20/2023, 3:24 PM)
Hello all, I was wondering if it is possible to bring in Okta groups without using the ingestion piece? My team was trying to do it directly from the app, but that has not been successful so far.

agreeable-park-13466 (03/20/2023, 9:11 PM)
Hi team, we are trying to deploy DataHub version v0.9.6. I wanted to build DataHub using a Dockerfile, so I added the lines below to it:
    COPY ./datahub-frontend ./datahub-src/datahub-frontend
    COPY ./entity-registry ./datahub-src/entity-registry
    COPY ./buildSrc ./datahub-src/buildSrc
    COPY ./datahub-web-react ./datahub-src/datahub-web-react
    COPY ./li-utils ./datahub-src/li-utils
    COPY ./metadata-models ./datahub-src/metadata-models
    COPY ./metadata-auth ./datahub-src/metadata-auth
    COPY ./metadata-dao-impl ./datahub-src/metadata-dao-impl
    COPY ./metadata-events ./datahub-src/metadata-events
    COPY ./metadata-ingestion ./datahub-src/metadata-ingestion
    COPY ./metadata-ingestion-modules ./datahub-src/metadata-ingestion-modules
    COPY ./metadata-integration ./datahub-src/metadata-integration
    COPY ./metadata-io ./datahub-src/metadata-io
    COPY ./metadata-jobs ./datahub-src/metadata-jobs
    COPY ./metadata-models ./datahub-src/metadata-models
    COPY ./metadata-models-custom ./datahub-src/metadata-models-custom
    COPY ./metadata-models-validator ./datahub-src/metadata-models-validator
    COPY ./metadata-service ./datahub-src/metadata-service
    COPY ./metadata-utils ./datahub-src/metadata-utils
    COPY ./datahub-graphql-core ./datahub-src/datahub-graphql-core
    COPY ./gradle ./datahub-src/gradle
    
    COPY repositories.gradle gradle.properties gradlew settings.gradle build.gradle ./datahub-src/
    
    RUN chmod -R 755 ./datahub-src
    
    #RUN ./datahub-src/gradlew build
    
    RUN cd datahub-src \
        && ./gradlew :datahub-web-react:build -x test -x yarnTest -x yarnLint \
        && ./gradlew :datahub-frontend:dist -PuseSystemNode=${USE_SYSTEM_NODE} -x test -x yarnTest -x yarnLint \
        && ls -l datahub-frontend/build/distributions \
        && cp datahub-frontend/build/distributions/datahub-frontend.zip ../datahub-frontend.zip \
        && cd .. && rm -rf datahub-src && unzip datahub-frontend.zip
The command ./gradlew :datahub-frontend:dist -PuseSystemNode=${USE_SYSTEM_NODE} -x test -x yarnTest -x yarnLint completes successfully, but cp datahub-frontend/build/distributions/datahub-frontend.zip ../datahub-frontend.zip fails with: cp: can't stat 'datahub-frontend/build/distributions/datahub-frontend.zip': No such file or directory. Below is the list of files present in datahub-frontend/build/distributions:
    -rw-r--r-- 1 root root 186757120 Mar 20 20:30 datahub-frontend-0.0.0-unknown-SNAPSHOT.tar
    -rw-r--r-- 1 root root 170932974 Mar 20 20:30 datahub-frontend-0.0.0-unknown-SNAPSHOT.zip
    -rw-r--r-- 1 root root 186767360 Mar 20 20:30 main-0.0.0-unknown-SNAPSHOT.tar
    -rw-r--r-- 1 root root 170930013 Mar 20 20:30 main-0.0.0-unknown-SNAPSHOT.zip
    -rw-r--r-- 1 root root 186777600 Mar 20 20:30 playBinary-0.0.0-unknown-SNAPSHOT.tar
    -rw-r--r-- 1 root root 170938889 Mar 20 20:31 playBinary-0.0.0-unknown-SNAPSHOT.zip
Can anyone help with this? Dockerfile attached for reference.

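The directory listing above shows what went wrong: without the git version info in the copied source tree, Gradle names the artifact datahub-frontend-0.0.0-unknown-SNAPSHOT.zip rather than datahub-frontend.zip. One workaround, demonstrated here against a stand-in directory layout, is to copy via a glob instead of the hard-coded name:

```shell
# recreate the layout from the listing above with an empty stand-in file,
# then copy via a glob so the version suffix no longer matters
mkdir -p datahub-frontend/build/distributions
touch datahub-frontend/build/distributions/datahub-frontend-0.0.0-unknown-SNAPSHOT.zip
cp datahub-frontend/build/distributions/datahub-frontend-*.zip ./datahub-frontend.zip
ls datahub-frontend.zip   # -> datahub-frontend.zip
```

Alternatively, copying the .git directory (or passing the version as a Gradle property) into the build context should make Gradle produce the properly versioned name in the first place.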
rich-salesmen-77587 (03/20/2023, 11:33 PM)
I want to capture all the schema changes and lineage changes into a Kafka topic in Confluent Cloud. I was able to create an Actions framework app, but the messages were not populated in the Kafka topics. Please help!

creamy-van-28626 (03/21/2023, 5:28 PM)
Hi team, I am creating a custom Snowflake action for my pipeline, and in the action configuration file I set the action to snowflake. But while running the pipeline it gives me the error: failed to instantiate action pipeline.

creamy-van-28626 (03/21/2023, 5:29 PM)
IMG_6708.jpg (attachment)

happy-baker-8735 (03/22/2023, 9:17 AM)
Hi everyone, is it possible to delete the datahub user and create another root user via the UI instead of the user.props file? We would like to change the password of the root user without this file.

limited-forest-73733 (03/22/2023, 12:17 PM)
Hey team! I am integrating Airflow with DataHub using conn-type datahub_kafka. I have one doubt about the schema registry URL: do we need to add it in --conn-extra, or do we provide only the conn-host, which will be broker:9092? Open to all suggestions. Thanks :)

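As a guess based on how the Kafka sink is configured elsewhere (the exact extra-JSON layout should be verified against the datahub-airflow plugin's hook source), the host stays the broker address and the schema registry URL goes into the connection extras. A sketch that prints a hypothetical invocation rather than running it:

```shell
# print (not run) a hypothetical `airflow connections add` invocation;
# the extra JSON layout is an assumption -- check the plugin's Kafka hook
CONN_EXTRA='{"schema_registry_url": "http://schema-registry:8081"}'
echo "airflow connections add datahub_kafka_default \
  --conn-type datahub_kafka \
  --conn-host broker:9092 \
  --conn-extra '$CONN_EXTRA'"
```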
creamy-van-28626 (03/22/2023, 12:31 PM)
Hi team, there is an issue with the Kafka event source. When I run my action configuration file with Kafka as the source, it throws: AttributeError: type object 'MetadataChangeLogEvent' has no attribute 'construct'. Every pipeline gives the same error.

white-guitar-82227 (03/22/2023, 12:46 PM)
Hello everyone, we just deployed DataHub to AWS EKS for evaluation, and we wonder if there are any good practices with respect to volume sizing. Of course it depends on the use case, but the first thing that strikes me as a potentially way-too-big storage allocation is ZooKeeper claiming an 8Gi PV. Do I understand correctly that ZooKeeper is only used to let Kafka nodes find each other? In that case I would rather give it a few Mi instead. Is there something like a dimensioning guide in general? Thanks!

brash-caravan-14114 (03/22/2023, 2:37 PM)
Hey team! I am trying to set up DataHub on AWS using managed services (EKS, OpenSearch, RDS MySQL, MSK, Glue). I would like to configure authentication to MSK and Glue using IAM. I have configured a serviceAccount to assume a role using OIDC (docs). This works well with the kafka-setup-job, and I see the topics are created as expected. However, when running the system-upgrade-job there seems to be a problem: the Java SDK is using the instance profile (the role attached to the EKS node) instead of the configured serviceAccount. This causes the pod to fail, since the instance profile role does not have permissions to Glue. I have followed the steps described in this guide and verified that the serviceAccount works with other pods. Is there anything I can configure at the Java SDK level to use the correct role? The other option is to ditch Glue and move to cp-schema-registry, but then I need to authenticate to MSK from cp-schema-registry, which again is not possible without modifying the image… Thank you very much!
