# all-things-deployment
  • b

    bumpy-journalist-41369

    08/30/2022, 11:31 AM
    I have a problem deploying DataHub to Amazon MSK. Following the documentation (https://datahubproject.io/docs/deploy/aws) I have created an MSK cluster and changed the kafka section under global in values.yaml to the following:
    Copy code
    kafka:
      bootstrap:
        server: "<bootstrap_server_1>:9092,<bootstrap_server_2>:9092"
      zookeeper:
        server: "<zookeeper_server_1>:2182,<zookeeper_server_2>:2182,<zookeeper_server_3>:2182"
      ## For AWS MSK set this to a number larger than 1
      partitions: 2
      replicationFactor: 2
      schemaregistry:
        url: "http://prerequisites-cp-schema-registry:8081"
        # type: AWS_GLUE
        # glue:
        #   region: us-east-1
        #   registry: datahub
    For the bootstrap servers I copied the values from View client information → Bootstrap servers → Plaintext, and for the ZooKeeper servers from Apache ZooKeeper connection → Plaintext. However, when I run helm upgrade --install datahub datahub/datahub --values values.yaml, the upgrade fails and the datahub-kafka-setup-job pod ends up in status Error. Looking at its logs I see the following problem:
    Copy code
    [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 6.1.4-ccs
    [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: c9124241a6ff43bc
    [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1661858565207
    WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both.
    (the warning above is repeated several more times)
    Error while executing config command with args '--command-config /tmp/connection.properties --bootstrap-server b-2.datahubcbdevkafka.qdtcf8.c16.kafka.us-east-1.amazonaws.com:9092,b-1.datahubcbdevkafka.qdtcf8.c16.kafka.us-east-1.amazonaws.com:9092 --entity-type topics --entity-name _schemas --alter --add-config cleanup.policy=compact'
    java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.UnknownTopicOrPartitionException:
    	at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
    	at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
    	at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:104)
    	at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:272)
    	at kafka.admin.ConfigCommand$.getResourceConfig(ConfigCommand.scala:552)
    	at kafka.admin.ConfigCommand$.alterConfig(ConfigCommand.scala:322)
    	at kafka.admin.ConfigCommand$.processCommand(ConfigCommand.scala:302)
    	at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:97)
    	at kafka.admin.ConfigCommand.main(ConfigCommand.scala)
    Caused by: org.apache.kafka.common.errors.UnknownTopicOrPartitionException:
    Has anyone encountered this before and can help me fix it?
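    A possible direction, sketched under assumptions: on MSK, `auto.create.topics.enable` is off by default, so the `UnknownTopicOrPartitionException` for `_schemas` may simply mean the topic was never created before the setup job tried to alter it. Assuming a host with the Apache Kafka CLI tools and network access to the brokers (the bootstrap address below is a placeholder), one could check and create it by hand:

```shell
# Assumption: kafka-topics.sh (Apache Kafka CLI) is available and the MSK
# brokers are reachable; replace the bootstrap address with your own.
BOOTSTRAP="b-1.<cluster>.kafka.us-east-1.amazonaws.com:9092"

# 1. Check whether the _schemas topic exists
kafka-topics.sh --bootstrap-server "$BOOTSTRAP" --list | grep _schemas

# 2. Create it manually if missing; the replication factor must not exceed
#    the number of brokers (2 in this cluster).
kafka-topics.sh --bootstrap-server "$BOOTSTRAP" --create --topic _schemas \
  --partitions 1 --replication-factor 2 \
  --config cleanup.policy=compact
```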
  • g

    great-branch-515

    08/30/2022, 1:22 PM
    @here I am now getting another error in mysql setup job
    Copy code
    ERROR 3159 (HY000): Connections using insecure transport are prohibited while --require_secure_transport=ON.
    2022/08/30 12:51:12 Command exited with error: exit status 1
    Can someone help?
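    A hedged sketch of one way around this (not verified against this deployment): since the server enforces TLS (`require_secure_transport=ON`), the JDBC URL that the setup job and GMS use needs SSL enabled. In the datahub-helm values this would look roughly like the following; the host is a placeholder and the exact keys should be checked against your chart version.

```yaml
global:
  sql:
    datasource:
      host: "<mysql-host>:3306"
      url: "jdbc:mysql://<mysql-host>:3306/datahub?useSSL=true&verifyServerCertificate=false&useUnicode=yes&characterEncoding=UTF-8"
```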
  • g

    great-branch-515

    08/30/2022, 3:09 PM
    Getting errors in datahub-acryl-datahub-actions like
    Copy code
    %6|1661871831.917|FAIL|rdkafka#consumer-1| [thrd:<redacted>:9094/b]: redacted>:9094/bootstrap: Disconnected while requesting ApiVersion: might be caused by incorrect security.protocol configuration (connecting to a SSL listener?) or broker version is < 0.10 (see api.version.request) (after 0ms in state APIVERSION_QUERY, 4 identical error(s) suppressed)
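    One hedged guess, given the port: on MSK, 9094 is typically the TLS listener, so a client connecting with plaintext would see exactly this "Disconnected while requesting ApiVersion" symptom until `security.protocol=SSL` is set. In datahub-helm this can usually be passed via `springKafkaConfigurationOverrides`; a sketch (broker addresses are placeholders, verify key names against your chart version):

```yaml
global:
  kafka:
    bootstrap:
      server: "<broker-1>:9094,<broker-2>:9094"
  springKafkaConfigurationOverrides:
    security.protocol: SSL
```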
  • r

    rapid-house-76230

    08/30/2022, 6:16 PM
    Hi team, even though I changed my ingress annotations scheme to
    internal
    , I am still seeing it being
    internet-facing
    on the EC2 Load Balancer Page (on AWS). I’ve deleted the LBs and redeployed but no luck. Any pointers?
    Copy code
    datahub-frontend:
      ingress:
        enabled: true
        annotations:
          kubernetes.io/ingress.class: nginx
          service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
          service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
          alb.ingress.kubernetes.io/scheme: internal
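    One thing that may be worth checking (a sketch, not a confirmed fix): the annotations mix `kubernetes.io/ingress.class: nginx` with `alb.ingress.kubernetes.io/*` keys, but `alb.ingress.kubernetes.io/scheme` is only honored by the AWS Load Balancer Controller, i.e. when the ingress class is `alb`. With nginx as the class, the scheme annotation is ignored. An ALB-based variant might look like:

```yaml
datahub-frontend:
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: alb            # processed by the AWS LB Controller
      alb.ingress.kubernetes.io/scheme: internal  # internal, not internet-facing
      alb.ingress.kubernetes.io/target-type: instance
```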
  • b

    busy-computer-98970

    08/30/2022, 6:36 PM
    Hellooooou guys! Has someone deployed DataHub in an AWS Fargate environment?
  • g

    great-branch-515

    08/31/2022, 5:53 AM
    @here We are facing another issue. MceConsumerApplication and MaeConsumerApplication are failing to start with the following stacktrace; they are not able to find the class BaseHttpSolrClient$RemoteSolrException.
    Copy code
    Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
    05:36:38.294 [main] ERROR o.s.boot.SpringApplication - Application run failed
    org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'healthContributorRegistry' defined in class path resource [org/springframework/boot/actuate/autoconfigure/health/HealthEndpointConfiguration.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.boot.actuate.health.HealthContributorRegistry]: Factory method 'healthContributorRegistry' threw exception; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'solrHealthContributor' defined in class path resource [org/springframework/boot/actuate/autoconfigure/solr/SolrHealthContributorAutoConfiguration.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.boot.actuate.health.HealthContributor]: Factory method 'solrHealthContributor' threw exception; nested exception is java.lang.NoClassDefFoundError: org/apache/solr/client/solrj/impl/BaseHttpSolrClient$RemoteSolrException
    	at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:658)
    	at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:638)
    	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1352)
    	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1195)
    	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:582)
    	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:542)
    	at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:335)
    	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234)
    	at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:333)
    	at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:208)
    	at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:953)
    	at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:918)
    	at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:583)
    	at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:145)
    	at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:775)
    	at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:448)
    	at org.springframework.boot.SpringApplication.run(SpringApplication.java:339)
    	at org.springframework.boot.SpringApplication.run(SpringApplication.java:1365)
    	at org.springframework.boot.SpringApplication.run(SpringApplication.java:1354)
    	at com.linkedin.metadata.kafka.MceConsumerApplication.main(MceConsumerApplication.java:19)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:49)
    	at org.springframework.boot.loader.Launcher.launch(Launcher.java:108)
    	at org.springframework.boot.loader.Launcher.launch(Launcher.java:58)
    	at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:88)
    Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.boot.actuate.health.HealthContributorRegistry]: Factory method 'healthContributorRegistry' threw exception; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'solrHealthContributor' defined in class path resource [org/springframework/boot/actuate/autoconfigure/solr/SolrHealthContributorAutoConfiguration.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.boot.actuate.health.HealthContributor]: Factory method 'solrHealthContributor' threw exception; nested exception is java.lang.NoClassDefFoundError: org/apache/solr/client/solrj/impl/BaseHttpSolrClient$RemoteSolrException
    	at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:185)
    	at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:653)
    	... 27 common frames omitted
    Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'solrHealthContributor' defined in class path resource [org/springframework/boot/actuate/autoconfigure/solr/SolrHealthContributorAutoConfiguration.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.boot.actuate.health.HealthContributor]: Factory method 'solrHealthContributor' threw exception; nested exception is java.lang.NoClassDefFoundError: org/apache/solr/client/solrj/impl/BaseHttpSolrClient$RemoteSolrException
    	at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:658)
    	at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:638)
    	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1352)
    	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1195)
    	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:582)
    	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:542)
    	at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:335)
    	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234)
    	at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:333)
    	at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:208)
    	at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBeansOfType(DefaultListableBeanFactory.java:671)
    	at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBeansOfType(DefaultListableBeanFactory.java:659)
    	at org.springframework.context.support.AbstractApplicationContext.getBeansOfType(AbstractApplicationContext.java:1300)
    	at org.springframework.boot.actuate.autoconfigure.health.HealthEndpointConfiguration.healthContributorRegistry(HealthEndpointConfiguration.java:82)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:154)
    	... 28 common frames omitted
    Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.boot.actuate.health.HealthContributor]: Factory method 'solrHealthContributor' threw exception; nested exception is java.lang.NoClassDefFoundError: org/apache/solr/client/solrj/impl/BaseHttpSolrClient$RemoteSolrException
    	at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:185)
    	at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:653)
    	... 46 common frames omitted
    Caused by: java.lang.NoClassDefFoundError: org/apache/solr/client/solrj/impl/BaseHttpSolrClient$RemoteSolrException
    	at java.lang.Class.getDeclaredConstructors0(Native Method)
    	at java.lang.Class.privateGetDeclaredConstructors(Class.java:2671)
    	at java.lang.Class.getConstructor0(Class.java:3075)
    	at java.lang.Class.getDeclaredConstructor(Class.java:2178)
    	at org.springframework.boot.actuate.autoconfigure.health.AbstractCompositeHealthContributorConfiguration.createIndicator(AbstractCompositeHealthContributorConfiguration.java:64)
    	at org.springframework.boot.actuate.autoconfigure.health.AbstractCompositeHealthContributorConfiguration.createContributor(AbstractCompositeHealthContributorConfiguration.java:54)
    	at org.springframework.boot.actuate.autoconfigure.solr.SolrHealthContributorAutoConfiguration.solrHealthContributor(SolrHealthContributorAutoConfiguration.java:54)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:154)
    	... 47 common frames omitted
    Caused by: java.lang.ClassNotFoundException: org.apache.solr.client.solrj.impl.BaseHttpSolrClient$RemoteSolrException
    	at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
    	at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
    	at org.springframework.boot.loader.LaunchedURLClassLoader.loadClass(LaunchedURLClassLoader.java:151)
    	at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
    	... 59 common frames omitted
    ANTLR Tool version 4.5 used for code generation does not match the current runtime version 4.8
    ANTLR Runtime version 4.5 used for parser compilation does not match the current runtime version 4.8
    ANTLR Tool version 4.5 used for code generation does not match the current runtime version 4.8
    ANTLR Runtime version 4.5 used for parser compilation does not match the current runtime version 4.8
    2022/08/31 05:39:27 Received signal: terminated
    2022/08/31 05:39:28 Command exited with error: exit status 143
    Has anyone faced this issue? It is coming up in a new setup we are trying. Please help!!
  • f

    full-chef-85630

    08/31/2022, 9:08 AM
    Hi all, when performing tasks using Airflow, how do I manage the "properties" and "View in Airflow" fields?
  • f

    full-chef-85630

    08/31/2022, 9:14 AM
    For a DataJob, how do I manage the task logs?
  • w

    wonderful-author-3020

    08/31/2022, 3:48 PM
    Hello all, Is it required to run the
    datahub-upgrade
    container to upgrade from 0.8.18 to the newest version, or can I just bump the container versions?
  • g

    great-branch-515

    09/01/2022, 5:10 AM
    @here Today we are seeing these errors in the GMS service logs when the frontend tries to connect to the GMS service. It was all working yesterday, but today we are seeing these errors. We are using an Aurora backend, and the GMS service is running on spot instances. Can anyone help?
    Copy code
    04:52:07.924 [Thread-92] ERROR c.l.d.g.e.DataHubDataFetcherExceptionHandler:21 - Failed to execute DataFetcher
    java.util.concurrent.CompletionException: javax.persistence.PersistenceException: java.sql.SQLNonTransientConnectionException: Could not connect to address=(host=<redacted>)(port=3306)(type=master) : Could not connect to <redacted>:3306 : PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
    	at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
    	at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
    	at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1606)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: javax.persistence.PersistenceException: java.sql.SQLNonTransientConnectionException: Could not connect to address=(host=<redacted>)(port=3306)(type=master) : Could not connect to <redacted>:3306 : PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
    	at io.ebeaninternal.server.transaction.TransactionFactoryBasic.createQueryTransaction(TransactionFactoryBasic.java:35)
    	at io.ebeaninternal.server.transaction.TransactionManager.createQueryTransaction(TransactionManager.java:360)
    	at io.ebeaninternal.server.core.DefaultServer.createQueryTransaction(DefaultServer.java:2306)
    	at io.ebeaninternal.server.core.OrmQueryRequest.initTransIfRequired(OrmQueryRequest.java:282)
    	at io.ebeaninternal.server.core.DefaultServer.findList(DefaultServer.java:1595)
    	at io.ebeaninternal.server.core.DefaultServer.findList(DefaultServer.java:1574)
    	at io.ebeaninternal.server.querydefn.DefaultOrmQuery.findList(DefaultOrmQuery.java:1481)
    	at com.linkedin.metadata.entity.ebean.EbeanAspectDao.batchGetUnion(EbeanAspectDao.java:359)
    	at com.linkedin.metadata.entity.ebean.EbeanAspectDao.batchGet(EbeanAspectDao.java:279)
    	at com.linkedin.metadata.entity.ebean.EbeanAspectDao.batchGet(EbeanAspectDao.java:260)
    	at com.linkedin.metadata.entity.EntityService.getEnvelopedAspects(EntityService.java:1504)
    	at com.linkedin.metadata.entity.EntityService.getCorrespondingAspects(EntityService.java:353)
    	at com.linkedin.metadata.entity.EntityService.getLatestEnvelopedAspects(EntityService.java:307)
    	at com.linkedin.metadata.entity.EntityService.getEntitiesV2(EntityService.java:263)
    	at com.linkedin.entity.client.JavaEntityClient.batchGetV2(JavaEntityClient.java:103)
    	at com.linkedin.datahub.graphql.resolvers.MeResolver.lambda$get$0(MeResolver.java:52)
    	at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
    	... 1 common frames omitted
  • b

    better-fireman-33387

    09/01/2022, 6:08 AM
    Hi, DataHub was up and running using Helm. After trying to ingest from a MySQL source I needed to reinstall all charts, and the installation is failing now. Nothing has changed in my values.yaml files and the configuration stayed the same. I suspect the MySQL pod is not ready; the describe pod output is in the thread. Any help?
  • t

    thankful-vr-12699

    09/01/2022, 9:21 AM
    Hi everyone, we would like to use our own local DB. Do you know if MariaDB v10.5.8 is fully compatible with DataHub?
  • t

    thousands-solstice-2498

    09/01/2022, 9:43 AM
    Please advise:
    Copy code
    Normal   Pulled   98s                  kubelet  Successfully pulled image "acryldata/datahub-postgres-setup:v0.8.41" in 1.009966673s
    Warning  Failed   82s (x8 over 2m57s)  kubelet  Error: secret "mysql-secrets" not found
    Normal   Pulled   82s                  kubelet  Successfully pulled image "acryldata/datahub-postgres-setup:v0.8.41" in 1.00352865s
    Normal   Pulling  69s (x9 over 3m)     kubelet  Pulling image "acryldata/datahub-postgres-setup:v0.8.41"
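    The `secret "mysql-secrets" not found` event suggests the secret referenced by the setup job was never created in this namespace. A sketch of creating it by hand, following the pattern in the DataHub Kubernetes guide (password value and namespace are placeholders):

```shell
kubectl create secret generic mysql-secrets \
  --from-literal=mysql-root-password='<password>' \
  -n <namespace>
```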
  • r

    rapid-book-98432

    09/01/2022, 10:29 AM
    Hey folks! I still have some issues trying to deploy the helm chart on minikube... I'm getting a permission denied error for the bitnami mysql service. If anyone here has succeeded in installing it, some help would be appreciated :')!
  • r

    rapid-book-98432

    09/01/2022, 10:32 AM
    Anyway, my goal is to have 2 "custom" instances of DataHub deployed on a private server: one for CD, one as a demo. But I still can't get one running with that error... Do you think that using microk8s could be helpful?
  • f

    faint-translator-23365

    09/01/2022, 12:14 PM
    Hi. Where can I find the documentation for older versions of DataHub? We are currently using v0.8.41.
  • t

    thousands-solstice-2498

    09/01/2022, 2:05 PM
    Hi Team, please advise on the error below.
    Copy code
    -- create default records for datahub user if not exists
    CREATE TEMP TABLE temp_metadata_aspect_v2 AS TABLE metadata_aspect_v2;
    INSERT INTO temp_metadata_aspect_v2 (urn, aspect, version, metadata, createdon, createdby) VALUES(
      'urn:li:corpuser:datahub', 'corpUserInfo', 0, '{"displayName":"Data Hub","active":true,"fullName":"Data Hub","email":"datahub@linkedin.com"}', now(), 'urn:li:corpuser:__datahub_system'
    ), (
      'urn:li:corpuser:datahub', 'corpUserEditableInfo', 0, '{"skills":[],"teams":[],"pictureLink":"https://raw.githubusercontent.com/datahub-project/datahub/master/datahub-web-react/src/images/default_avatar.png"}', now(), 'urn:li:corpuser:__datahub_system'
    );
    -- only add default records if metadata_aspect is empty
    INSERT INTO metadata_aspect_v2 SELECT * FROM temp_metadata_aspect_v2 WHERE NOT EXISTS (SELECT * from metadata_aspect_v2);
    DROP TABLE temp_metadata_aspect_v2;

    psql: error: connection to server at "10.240.154.202", port 5432 failed: FATAL: database "datahub" does not exist
    2022/09/01 13:31:51 Command exited with error: exit status 2
  • g

    great-toddler-2251

    09/01/2022, 9:40 PM
    while DataHub supports OIDC via Okta has anyone tried using Auth0 instead? While Auth0 was bought by Okta, it’s a separate product, with separate configuration steps. Before I go and see if I can get it working, I was wondering if anyone had already tried it?
  • t

    thousands-solstice-2498

    09/02/2022, 4:23 AM
    Hi Team, is DataHub expecting
    db_name
    to be the same as the
    db_user
    name? Please confirm.
  • f

    full-chef-85630

    09/02/2022, 4:33 AM
    @dazzling-judge-80093 Hi, how can I delete things through the REST API (delete a platform, a dataset, etc.)?
    Copy code
    curl "<http://xxxxx/entities?action=delete>" -X POST --data '{"urn":"urn:li:dataPlatform:bigquery"}' -H "Authorization: Bearer xxxx"
  • s

    shy-dog-84302

    09/02/2022, 4:41 AM
    Hi, I’m facing issues in creating an umbrella chart with Helm3 to deploy Datahub and it’s prerequisites from a single chart in GKE. My chart looks like this:
    Copy code
    apiVersion: v2
    name: my-datahub
    description: An umbrella chart for Kubernetes deployment of Datahub and it's prerequisites
    type: application
    version: 0.1.0
    appVersion: 1.16.0
    dependencies:
      - name: datahub-prerequisites
        repository: https://helm.datahubproject.io
        version: 0.0.9
        condition: prerequisites.enabled
      - name: datahub
        repository: https://helm.datahubproject.io
        version: 0.2.92
        condition: datahub.enabled
    values.yaml looks like this:
    Copy code
    prerequisites:
      enabled: true
    datahub:
      enabled: true
    pre-deployment steps I ran: 1.
    helm dependency update -n <namespace>
    This created the
    charts
    folder and downloaded charts
    datahub-prerequisites-0.0.9.tgz
    and
    datahub-0.2.92.tgz
    2.
    helm upgrade --install --debug my-datahub . --values=values.yaml -n <namespace>
    Here is the output from chart install
    Copy code
    history.go:53: [debug] getting history for release my-datahub
    Release "my-datahub" does not exist. Installing it now.
    install.go:172: [debug] Original chart version: ""
    install.go:189: [debug] CHART PATH: <path to my chart folder>
    
    client.go:254: [debug] Starting delete for "my-datahub-elasticsearch-setup-job" Job
    client.go:108: [debug] creating 1 resource(s)
    client.go:463: [debug] Watching for changes to Job my-datahub-elasticsearch-setup-job with timeout of 5m0s
    client.go:491: [debug] Add/Modify event for my-datahub-elasticsearch-setup-job: ADDED
    client.go:530: [debug] my-datahub-elasticsearch-setup-job: Jobs active: 1, jobs failed: 0, jobs succeeded: 0
    client.go:491: [debug] Add/Modify event for my-datahub-elasticsearch-setup-job: MODIFIED
    client.go:530: [debug] my-datahub-elasticsearch-setup-job: Jobs active: 1, jobs failed: 1, jobs succeeded: 0
    client.go:491: [debug] Add/Modify event for my-datahub-elasticsearch-setup-job: MODIFIED
    client.go:530: [debug] my-datahub-elasticsearch-setup-job: Jobs active: 1, jobs failed: 2, jobs succeeded: 0
    Error: failed pre-install: timed out waiting for the condition
    helm.go:94: [debug] failed pre-install: timed out waiting for the condition
    Issues observed: 1. I see that the chart is trying to run
    elasticsearch-setup-job
    first which actually depends on the installation of
    elasticsearch-master
    service, which obviously times out after 5m. 2. What am I missing here that leads to the jobs not running in the proper order? Any help would be greatly appreciated 🙂
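    A hedged explanation of the ordering: the setup jobs in the datahub chart are Helm pre-install hooks, and hooks run before any regular resources of the release are created, including the prerequisite services pulled in by the umbrella, so elasticsearch-setup-job starts before elasticsearch-master exists. A common workaround is installing the two charts sequentially rather than through one umbrella:

```shell
# Sketch: install prerequisites first and wait for readiness, then datahub.
helm repo add datahub https://helm.datahubproject.io
helm repo update
helm install prerequisites datahub/datahub-prerequisites --version 0.0.9 -n <namespace> --wait
helm install datahub datahub/datahub --version 0.2.92 -n <namespace>
```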
  • t

    thousands-solstice-2498

    09/02/2022, 5:43 AM
    Still failing:
    Copy code
    s0g09ba@m-c02f5axamd6n sg-rcube-datahub % kubectl logs sg-rcube-datahub-postgresql-setup-job-9jlbx -n p1978837828
    2022/09/02 05:33:54 Waiting for: tcp://10.240.154.202:5432
    2022/09/02 05:33:54 Connected to tcp://10.240.154.202:5432
    psql: error: connection to server at "10.240.154.202", port 5432 failed: FATAL: database "dcflow_rw" does not exist
    psql: error: connection to server at "10.240.154.202", port 5432 failed: FATAL: database "dcflow_rw" does not exist
    -- create metadata aspect table
    CREATE TABLE IF NOT EXISTS metadata_aspect_v2 (
      urn varchar(500) not null,
      aspect varchar(200) not null,
      version bigint not null,
      metadata text not null,
      systemmetadata text,
      createdon timestamp not null,
      createdby varchar(255) not null,
      createdfor varchar(255),
      CONSTRAINT pk_metadata_aspect_v2 PRIMARY KEY (urn, aspect, version)
    );
    -- create default records for datahub user if not exists
    CREATE TEMP TABLE temp_metadata_aspect_v2 AS TABLE metadata_aspect_v2;
    INSERT INTO temp_metadata_aspect_v2 (urn, aspect, version, metadata, createdon, createdby) VALUES(
      'urn:li:corpuser:datahub', 'corpUserInfo', 0, '{"displayName":"Data Hub","active":true,"fullName":"Data Hub","email":"datahub@linkedin.com"}', now(), 'urn:li:corpuser:__datahub_system'
    ), (
      'urn:li:corpuser:datahub', 'corpUserEditableInfo', 0, '{"skills":[],"teams":[],"pictureLink":"https://raw.githubusercontent.com/datahub-project/datahub/master/datahub-web-react/src/images/default_avatar.png"}', now(), 'urn:li:corpuser:__datahub_system'
    );
    -- only add default records if metadata_aspect is empty
    INSERT INTO metadata_aspect_v2 SELECT * FROM temp_metadata_aspect_v2 WHERE NOT EXISTS (SELECT * from metadata_aspect_v2);
    DROP TABLE temp_metadata_aspect_v2;
    psql: error: connection to server at "10.240.154.202", port 5432 failed: FATAL: database "datahub" does not exist
  • l

    late-insurance-69310

    09/03/2022, 6:53 PM
    Can someone please provide a resource on making a production-ready DataHub deployment on an on-prem machine with Nginx?
  • l

    late-insurance-69310

    09/03/2022, 8:40 PM
    Has anyone tried deploying DataHub on RancherOS?
  • p

    proud-table-38689

    09/05/2022, 1:16 AM
    for the helm chart, what’s a good value for
    global.sql.datasource.driver
    if we wanted to use Postgres instead of MySQL? https://github.com/acryldata/datahub-helm/tree/master/charts/datahub
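    Not authoritative, but based on the chart's Postgres example values, the standard JDBC driver class would be `org.postgresql.Driver`; a sketch with a placeholder host:

```yaml
global:
  sql:
    datasource:
      driver: "org.postgresql.Driver"
      host: "<postgres-host>:5432"
      url: "jdbc:postgresql://<postgres-host>:5432/datahub"
```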
  • c

    creamy-controller-55842

    09/05/2022, 11:20 AM
    Hi community! Is there any way we can deploy DataHub without Docker?
  • s

    shy-dog-84302

    09/05/2022, 1:12 PM
    Hi! I am trying to deploy DataHub in Kubernetes with the datahub-helm repository. But while configuring/customising the Ingress for datahub-frontend and datahub-gms, I came across a problem: there is no support for adding custom labels to the Ingress. My organization needs some special labels to route traffic and set up firewalls. My current Ingress labels look like this:
    Copy code
    Name:             datahub-datahub-frontend
    Labels:           <http://app.kubernetes.io/instance=datahub|app.kubernetes.io/instance=datahub>
                      <http://app.kubernetes.io/managed-by=Helm|app.kubernetes.io/managed-by=Helm>
                      <http://app.kubernetes.io/name=datahub-frontend|app.kubernetes.io/name=datahub-frontend>
                      <http://app.kubernetes.io/version=0.3.3|app.kubernetes.io/version=0.3.3>
                      <http://helm.sh/chart=datahub-frontend-0.2.4|helm.sh/chart=datahub-frontend-0.2.4>
    I would like to add another label to the end of the list like here:
    Copy code
    Name:             datahub-datahub-frontend
    Labels:           <http://app.kubernetes.io/instance=datahub|app.kubernetes.io/instance=datahub>
                      <http://app.kubernetes.io/managed-by=Helm|app.kubernetes.io/managed-by=Helm>
                      <http://app.kubernetes.io/name=datahub-frontend|app.kubernetes.io/name=datahub-frontend>
                      <http://app.kubernetes.io/version=0.3.3|app.kubernetes.io/version=0.3.3>
                      <http://helm.sh/chart=datahub-frontend-0.2.4|helm.sh/chart=datahub-frontend-0.2.4>
                      traffic-type=public
    Is there any way to do this with the current helm repo config, or does it require a change?
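    If the chart values turn out not to expose ingress labels, one chart-agnostic workaround (a sketch only, with the caveat that kustomize `commonLabels` also touches selectors, so a patch targeted at just the Ingress may be safer) is a Helm post-renderer that pipes the rendered manifests through kustomize:

```yaml
# kustomization.yaml, used by a post-renderer script that writes Helm's
# rendered manifests to all.yaml and runs `kustomize build .`
resources:
  - all.yaml
commonLabels:
  traffic-type: public
```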
  • f

    full-chef-85630

    09/05/2022, 1:34 PM
    Hi, when ingesting metadata (BigQuery), multiple tables have the same structure but different names, such as user_1 and user_2. Why is there only a single table user in the final dataset? Is there some merging logic? If so, can it be disabled?
  • c

    cuddly-arm-8412

    09/06/2022, 2:31 AM
    Hi team, I find that when there is too much data, the filter selection on the left side of the search can only display up to 20 items. Can we lift this limit? How can we do it?
  • c

    colossal-needle-73093

    09/06/2022, 3:09 AM
    hello, how can I get the chart version list?
    Copy code
    root@ecs-16a0-ab86:~# helm search repo datahub
    NAME                           CHART VERSION  APP VERSION  DESCRIPTION
    datahub/datahub                0.2.93         0.8.44       A Helm chart for LinkedIn DataHub
    datahub/datahub-prerequisites  0.0.9                       A Helm chart for packages that Datahub depends on
    For some reasons I need to keep DataHub v0.8.41. Does anyone know how to get the previous release of the datahub chart and its dependency, the datahub-prerequisites chart? Thank you so much for any hint.