# all-things-deployment
  • prehistoric-salesclerk-23462 (04/21/2022, 9:27 AM)
    Hi everyone, I am trying to deploy DataHub on our custom k8s cluster (not EKS) on AWS. I have provisioned the managed prerequisite services (MSK, OpenSearch and RDS); schema-registry will still run as a pod that uses MSK. Has anyone already handled such a scenario? I am following this guide but facing a lot of issues in the deployment: https://datahubproject.io/docs/deploy/aws
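    For setups like the one above, the usual approach is to disable the bundled prerequisites and point the chart at the external endpoints in values.yaml. A hedged sketch (key names recalled from the datahub-helm chart and the AWS deploy guide; endpoints and credentials are placeholders, so verify against your chart version):

    ```
    global:
      sql:
        datasource:
          host: "<rds-endpoint>:3306"
          url: "jdbc:mysql://<rds-endpoint>:3306/datahub?verifyServerCertificate=false&useSSL=true"
          driver: "com.mysql.cj.jdbc.Driver"
          username: "admin"
          password:
            secretRef: mysql-secrets
            secretKey: mysql-root-password
      elasticsearch:
        host: "<opensearch-endpoint>"
        port: "443"
        useSSL: "true"
      kafka:
        bootstrap:
          server: "<msk-bootstrap-brokers>:9092"
        zookeeper:
          server: "<msk-zookeeper>:2181"
    ```

    With managed services in place, the corresponding subcharts in datahub-prerequisites can be disabled so only schema-registry remains.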
  • rapid-book-98432 (04/21/2022, 11:55 AM)
    Hi there 🙂 Have you ever deployed multiple DataHub versions on the same server? Are port conflicts the only issue that can happen?
  • rapid-book-98432 (04/21/2022, 2:03 PM)
    Hi again, I'm having an issue while deploying with the helm chart on minikube on an on-prem server:
    root@vmi741747:~# kubectl logs datahub-elasticsearch-setup-job-cc2lq
    2022/04/21 13:17:35 Waiting for: http://elasticsearch-master:9200
    2022/04/21 13:17:36 Problem with request: Get http://elasticsearch-master:9200: dial tcp 10.103.69.177:9200: connect: connection refused. Sleeping 1s
    2022/04/21 13:17:38 Problem with request: Get http://elasticsearch-master:9200: dial tcp 10.103.69.177:9200: connect: connection refused. Sleeping 1s
    2022/04/21 13:17:40 Problem with request: Get http://elasticsearch-master:9200: dial tcp 10.103.69.177:9200: connect: connection refused. Sleeping 1s
    2022/04/21 13:17:42 Problem with request: Get http://elasticsearch-master:9200: dial tcp 10.103.69.177:9200: connect: connection refused. Sleeping 1s
    2022/04/21 13:17:44 Problem with request: Get http://elasticsearch-master:9200: dial tcp 10.103.69.177:9200: connect: connection refused. Sleeping 1s
    2022/04/21 13:17:46 Problem with request: Get http://elasticsearch-master:9200: dial tcp 10.103.69.177:9200: connect: connection refused. Sleeping 1s
    2022/04/21 13:17:48 Problem with request: Get http://elasticsearch-master:9200: dial tcp 10.103.69.177:9200: connect: connection refused. Sleeping 1s
    2022/04/21 13:17:50 Problem with request: Get http://elasticsearch-master:9200: dial tcp 10.103.69.177:9200: connect: connection refused. Sleeping 1s
    And then it fails. If you have any ideas I'm open to hearing them. Thanks again!
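    The setup job above is simply retrying an HTTP GET until elasticsearch-master answers. A minimal standalone sketch of that loop (function name is hypothetical), which can be run from a debug pod to check whether the service resolves and accepts connections:

    ```shell
    # Minimal re-implementation of the setup job's wait loop (sketch).
    # wait_for URL [ATTEMPTS] - retry an HTTP GET once per second.
    wait_for() {
      url="$1"; attempts="${2:-30}"; i=0
      while [ "$i" -lt "$attempts" ]; do
        if curl -sf -o /dev/null "$url"; then
          echo "reachable: $url"
          return 0
        fi
        i=$((i + 1))
        sleep 1
      done
      echo "unreachable after $attempts attempts: $url"
      return 1
    }

    # e.g. from a debug pod: wait_for http://elasticsearch-master:9200 30
    ```

    If the URL never becomes reachable, check that the elasticsearch-master Service exists and has endpoints (kubectl get svc,endpoints elasticsearch-master); on minikube the elasticsearch pod often stays Pending or restarting for lack of memory, which produces exactly this connection-refused loop.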
  • wonderful-quill-11255 (04/22/2022, 6:35 AM)
    Good morning. Not sure if anyone has already posted this, but a new critical Java security bug has been announced: CVE-2022-21449. It's mentioned here that even Java 1.8 needs to be updated.
  • creamy-van-28626 (04/22/2022, 7:55 AM)
    Hi team, @square-activity-64562 and @big-carpet-38439, how can I resolve the issue that we are facing with front-end UI ingestion?
  • loud-kite-94877 (04/25/2022, 7:36 AM)
    $MYSQL_PORT is missing from the mysql-setup-job's shell script (/docker/mysql-setup/init.sh): mysql -u $MYSQL_USERNAME -p"$MYSQL_PASSWORD" -h $MYSQL_HOST < /tmp/init-final.sql
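    A hedged sketch of the fix (helper name hypothetical): pass the port explicitly with mysql's -P flag, falling back to MySQL's default 3306 when MYSQL_PORT is unset. The password flag from the real script is omitted here so the assembled command can be safely echoed:

    ```shell
    # Build the mysql invocation with an explicit -P port flag (sketch).
    # The real init.sh would also pass -p"$MYSQL_PASSWORD".
    build_mysql_cmd() {
      printf 'mysql -u %s -h %s -P %s < /tmp/init-final.sql' \
        "$MYSQL_USERNAME" "$MYSQL_HOST" "${MYSQL_PORT:-3306}"
    }

    MYSQL_USERNAME=datahub MYSQL_HOST=mysql MYSQL_PORT=3307
    build_mysql_cmd   # prints: mysql -u datahub -h mysql -P 3307 < /tmp/init-final.sql
    ```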
  • fresh-napkin-5247 (04/25/2022, 1:28 PM)
    Hello guys. I am trying to estimate the cost of running DataHub on Kubernetes on AWS, but I am not sure how to approach this (the fact that I am not that familiar with Kubernetes does not help). Would it be possible for someone to help me?
  • prehistoric-salesclerk-23462 (04/25/2022, 3:57 PM)
    Hi guys, did anyone face this error in datahub-datahub-upgrade during the deployment on AWS?
    Error creating bean with name 'upgradeCli': Unsatisfied dependency expressed through field 'noCodeUpgrade'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'ebeanServer' defined in class path resource [com/linkedin/gms/factory/entity/EbeanServerFactory.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [io.ebean.EbeanServer]: Factory method 'createServer' threw exception; nested exception is java.lang.NullPointerException
    datahub-datahub-gms is also failing:
    ERROR o.s.web.context.ContextLoader:313 - Context initialization failed
    org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'mostPopularCandidateSourceFactory': Unsatisfied dependency expressed through field 'entityService'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'ebeanAspectDao' defined in com.linkedin.gms.factory.entity.EbeanAspectDaoFactory: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [com.linkedin.metadata.entity.ebean.EbeanAspectDao]: Factory method 'createInstance' threw exception; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'ebeanServer' defined in com.linkedin.gms.factory.entity.EbeanServerFactory: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [io.ebean.EbeanServer]: Factory method 'createServer' threw exception; nested exception is java.lang.NullPointerException
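    In both traces the NullPointerException comes out of EbeanServerFactory.createServer, which typically means the SQL datasource was never configured for the container. A hedged sketch of the environment the GMS and upgrade containers expect when pointing at RDS (variable names as used by the DataHub docker images; values are placeholders, verify against your image version):

    ```
    EBEAN_DATASOURCE_HOST=<rds-endpoint>:3306
    EBEAN_DATASOURCE_URL=jdbc:mysql://<rds-endpoint>:3306/datahub?verifyServerCertificate=false&useSSL=true
    EBEAN_DATASOURCE_USERNAME=admin
    EBEAN_DATASOURCE_PASSWORD=<password>
    EBEAN_DATASOURCE_DRIVER=com.mysql.cj.jdbc.Driver
    ```

    When deploying via helm, these are normally derived from global.sql.datasource in values.yaml, so a missing or misnamed secret there produces the same NPE.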
  • handsome-football-66174 (04/25/2022, 7:58 PM)
    Hi everyone, quick question: we have configured OIDC for logging in. We notice that when we click logout, it redirects to authentication again rather than a logout screen. Is there anything we need to configure for logout?
  • better-football-97389 (04/26/2022, 3:55 AM)
    Hi everyone, I got an error after deploying DataHub without Docker. Here is the error message. Frontend:
    11:46:01 [application-akka.actor.default-dispatcher-3] ERROR application -
    
    ! @7ne888b20 - Internal server error, for (POST) [/logIn] ->
    
    play.api.UnexpectedException: Unexpected exception[RuntimeException: Failed to generate session token for user]
    	at play.api.http.HttpErrorHandlerExceptions$.throwableToUsefulException(HttpErrorHandler.scala:247)
    	at play.api.http.DefaultHttpErrorHandler.onServerError(HttpErrorHandler.scala:176)
    	at play.core.server.AkkaHttpServer$$anonfun$2.applyOrElse(AkkaHttpServer.scala:363)
    	at play.core.server.AkkaHttpServer$$anonfun$2.applyOrElse(AkkaHttpServer.scala:361)
    	at scala.concurrent.Future$$anonfun$recoverWith$1.apply(Future.scala:346)
    	at scala.concurrent.Future$$anonfun$recoverWith$1.apply(Future.scala:345)
    	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
    	at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
    	at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:92)
    	at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:92)
    	at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:92)
    	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
    	at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:91)
    	at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
    	at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:49)
    	at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    	at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    	at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    	at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
    Caused by: java.lang.RuntimeException: Failed to generate session token for user
    	at client.AuthServiceClient.generateSessionTokenForUser(AuthServiceClient.java:80)
    	at controllers.AuthenticationController.logIn(AuthenticationController.java:141)
    	at router.Routes$$anonfun$routes$1$$anonfun$applyOrElse$5$$anonfun$apply$5.apply(Routes.scala:456)
    	at router.Routes$$anonfun$routes$1$$anonfun$applyOrElse$5$$anonfun$apply$5.apply(Routes.scala:456)
    	at play.core.routing.HandlerInvokerFactory$$anon$3.resultCall(HandlerInvoker.scala:134)
    	at play.core.routing.HandlerInvokerFactory$$anon$3.resultCall(HandlerInvoker.scala:133)
    	at play.core.routing.HandlerInvokerFactory$JavaActionInvokerFactory$$anon$8$$anon$2$$anon$1.invocation(HandlerInvoker.scala:108)
    	at play.core.j.JavaAction$$anon$1.call(JavaAction.scala:88)
    	at play.http.DefaultActionCreator$1.call(DefaultActionCreator.java:31)
    	at play.core.j.JavaAction$$anonfun$9.apply(JavaAction.scala:138)
    	at play.core.j.JavaAction$$anonfun$9.apply(JavaAction.scala:138)
    	at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
    	at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
    	at play.core.j.HttpExecutionContext$$anon$2.run(HttpExecutionContext.scala:56)
    	at play.api.libs.streams.Execution$trampoline$.execute(Execution.scala:70)
    	at play.core.j.HttpExecutionContext.execute(HttpExecutionContext.scala:48)
    	at scala.concurrent.impl.Future$.apply(Future.scala:31)
    	at scala.concurrent.Future$.apply(Future.scala:494)
    	at play.core.j.JavaAction.apply(JavaAction.scala:138)
    	at play.api.mvc.Action$$anonfun$apply$2.apply(Action.scala:96)
    	at play.api.mvc.Action$$anonfun$apply$2.apply(Action.scala:89)
    	at scala.concurrent.Future$$anonfun$flatMap$1.apply(Future.scala:253)
    	at scala.concurrent.Future$$anonfun$flatMap$1.apply(Future.scala:251)
    	... 13 common frames omitted
    Caused by: java.lang.RuntimeException: Bad response from the Metadata Service: HTTP/1.1 503 Service Unavailable ResponseEntityProxy{[Content-Type: text/html;charset=iso-8859-1,Content-Length: 369,Chunked: false]}
    	at client.AuthServiceClient.generateSessionTokenForUser(AuthServiceClient.java:76)
    	... 35 common frames omitted
    GMS:
    2022-04-26 11:46:01.855:WARN:oejs.HttpChannel:qtp1637506559-13: /auth/generateSessionTokenForUser
    java.lang.NullPointerException
    	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1591)
    	at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:542)
    	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
    	at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:536)
    	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
    	at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)
    	at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1581)
    	at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
    	at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1307)
    	at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
    	at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:482)
    	at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1549)
    	at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
    	at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1204)
    	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    	at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:221)
    	at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)
    	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
    	at org.eclipse.jetty.server.Server.handle(Server.java:494)
    	at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:374)
    	at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:268)
    	at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
    	at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
    	at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)
    	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336)
    	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313)
    	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171)
    	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129)
    	at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:367)
    	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:782)
    	at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:918)
    	at java.lang.Thread.run(Thread.java:750)
  • steep-soccer-91284 (04/26/2022, 5:21 AM)
    Is this a problem of lack of CPU? I faced this error while deploying the DataHub helm chart into my EKS cluster.
  • creamy-van-28626 (04/26/2022, 9:28 AM)
    Hi team, I have a few questions:
    1. Why are we using a MySQL database instead of PostgreSQL?
    2. What's the difference between bitnami MySQL and datahub MySQL, and between bitnami Kafka and datahub Kafka?
    3. Why are we using bitnami Kafka in datahub-prerequisites and datahub-kafka in datahub?
  • steep-soccer-91284 (04/26/2022, 1:46 PM)
    [main] WARN org.apache.kafka.clients.ClientUtils - Couldn't resolve server prerequisites-kafka:9092 from bootstrap.servers as DNS resolution failed for prerequisites-kafka
    [main] ERROR io.confluent.admin.utils.cli.KafkaReadyCommand - Error while running kafka-ready.
    org.apache.kafka.common.KafkaException: Failed to create new KafkaAdminClient
    	at org.apache.kafka.clients.admin.KafkaAdminClient.createInternal(KafkaAdminClient.java:499)
    	at org.apache.kafka.clients.admin.Admin.create(Admin.java:73)
    	at org.apache.kafka.clients.admin.AdminClient.create(AdminClient.java:49)
    	at io.confluent.admin.utils.ClusterStatus.isKafkaReady(ClusterStatus.java:138)
    	at io.confluent.admin.utils.cli.KafkaReadyCommand.main(KafkaReadyCommand.java:150)
    Caused by: org.apache.kafka.common.config.ConfigException: No resolvable bootstrap urls given in bootstrap.servers
    	at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:89)
    	at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:48)
    	at org.apache.kafka.clients.admin.KafkaAdminClient.createInternal(KafkaAdminClient.java:455)
    	... 4 more
    I don't know why this happens.
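    Since the root cause here is name resolution ("No resolvable bootstrap urls"), a quick hedged check (function name hypothetical) that can be run from any pod in the same namespace is:

    ```shell
    # Check whether the host part of a bootstrap server resolves (sketch).
    check_bootstrap() {
      host="${1%%:*}"   # strip the :port suffix
      if getent hosts "$host" >/dev/null 2>&1; then
        echo "resolves: $host"
      else
        echo "no DNS record: $host"
      fi
    }

    check_bootstrap prerequisites-kafka:9092
    ```

    If prerequisites-kafka does not resolve, the prerequisites chart was likely installed under a different release name or namespace; the kafka Service name follows the release name, so the bootstrap server in the datahub chart's values must match it.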
  • better-football-97389 (04/27/2022, 7:00 AM)
    Hi everyone! I found an error when I click the Analytics button:
    14:42:59.322 [Thread-83] ERROR c.l.d.g.a.service.AnalyticsService:264 - Search query failed: Elasticsearch exception [type=index_not_found_exception, reason=no such index [datahub_usage_event]]
    14:42:59.322 [Thread-83] ERROR c.l.d.g.e.DataHubDataFetcherExceptionHandler:21 - Failed to execute DataFetcher
    java.lang.RuntimeException: Search query failed:
    	at com.linkedin.datahub.graphql.analytics.service.AnalyticsService.executeAndExtract(AnalyticsService.java:265)
    	at com.linkedin.datahub.graphql.analytics.service.AnalyticsService.getTimeseriesChart(AnalyticsService.java:99)
    	at com.linkedin.datahub.graphql.analytics.resolver.GetChartsResolver.getProductAnalyticsCharts(GetChartsResolver.java:77)
    	at com.linkedin.datahub.graphql.analytics.resolver.GetChartsResolver.get(GetChartsResolver.java:50)
    	at com.linkedin.datahub.graphql.analytics.resolver.GetChartsResolver.get(GetChartsResolver.java:37)
    	at graphql.execution.ExecutionStrategy.fetchField(ExecutionStrategy.java:270)
    	at graphql.execution.ExecutionStrategy.resolveFieldWithInfo(ExecutionStrategy.java:203)
    	at graphql.execution.AsyncExecutionStrategy.execute(AsyncExecutionStrategy.java:60)
    	at graphql.execution.Execution.executeOperation(Execution.java:165)
    	at graphql.execution.Execution.execute(Execution.java:104)
    	at graphql.GraphQL.execute(GraphQL.java:557)
    	at graphql.GraphQL.parseValidateAndExecute(GraphQL.java:482)
    	at graphql.GraphQL.executeAsync(GraphQL.java:446)
    	at graphql.GraphQL.execute(GraphQL.java:377)
    	at com.linkedin.datahub.graphql.GraphQLEngine.execute(GraphQLEngine.java:88)
    	at com.datahub.graphql.GraphQLController.lambda$postGraphQL$0(GraphQLController.java:89)
    	at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
    	at java.lang.Thread.run(Thread.java:750)
    Caused by: org.elasticsearch.ElasticsearchStatusException: Elasticsearch exception [type=index_not_found_exception, reason=no such index [datahub_usage_event]]
    	at org.elasticsearch.rest.BytesRestResponse.errorFromXContent(BytesRestResponse.java:187)
    	at org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:1892)
    	at org.elasticsearch.client.RestHighLevelClient.parseResponseException(RestHighLevelClient.java:1869)
    	at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1626)
    	at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1583)
    	at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1553)
    	at org.elasticsearch.client.RestHighLevelClient.search(RestHighLevelClient.java:1069)
    	at com.linkedin.datahub.graphql.analytics.service.AnalyticsService.executeAndExtract(AnalyticsService.java:260)
    	... 17 common frames omitted
    	Suppressed: org.elasticsearch.client.ResponseException: method [POST], host [<http://192.168.154.130:9200>], URI [/datahub_usage_event/_search?typed_keys=true&max_concurrent_shard_requests=5&ignore_unavailable=false&expand_wildcards=open&allow_no_indices=true&ignore_throttled=true&search_type=query_then_fetch&batched_reduce_size=512&ccs_minimize_roundtrips=true], status line [HTTP/1.1 404 Not Found]
    {"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no such index [datahub_usage_event]","resource.type":"index_or_alias","resource.id":"datahub_usage_event","index_uuid":"_na_","index":"datahub_usage_event"}],"type":"index_not_found_exception","reason":"no such index [datahub_usage_event]","resource.type":"index_or_alias","resource.id":"datahub_usage_event","index_uuid":"_na_","index":"datahub_usage_event"},"status":404}
    		at org.elasticsearch.client.RestClient.convertResponse(RestClient.java:302)
    		at org.elasticsearch.client.RestClient.performRequest(RestClient.java:272)
    		at org.elasticsearch.client.RestClient.performRequest(RestClient.java:246)
    		at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1613)
    		... 21 common frames omitted
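    The missing datahub_usage_event index is normally created by the elasticsearch setup job, and only when analytics is enabled, so a hedged guess is that setup ran with analytics disabled. The relevant flag (env var name from the quickstart setup; verify for your deployment) is:

    ```
    DATAHUB_ANALYTICS_ENABLED=true
    ```

    Re-running the elasticsearch setup job with this set, then restarting GMS and the frontend, should create the index and make the Analytics page load.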
  • adorable-receptionist-20059 (04/27/2022, 4:20 PM)
    Does anyone have examples of what instance types they used for a production DataHub on AWS? I'm creating a cost breakdown for an RFC. My current assumptions:
    • Kafka MSK: t3.small x 2 (I hope I will not need a large)
    • OpenSearch/ElasticSearch cluster: t3.medium
    • RDS MySQL: t3.medium (I think small may be enough, but not sure)
  • creamy-van-28626 (04/27/2022, 5:37 PM)
    Hi team, I am trying to ingest metadata using business glossary as the source, but I am getting this error. For the glossary file I used the sample one: https://github.com/datahub-project/datahub/blob/master/metadata-ingestion/examples/bootstrap_data/business_glossary.yml Please refer to the recipe file and error:
  • creamy-van-28626 (04/28/2022, 11:41 AM)
    Hi team, we are using Linux-based images for the database. Is there any specific reason why we did not build the images on Debian or Alpine?
  • modern-zoo-97059 (04/29/2022, 12:38 AM)
    hello everyone. I just wonder.. what is the main purpose of using mysql in datahub? Why do you use mysql even though there is an elasticsearch?
  • quaint-window-7517 (04/29/2022, 6:56 AM)
    Hello guys, could anyone help with an AWS Kubernetes deployment issue (I am a newbie to EKS)? Following the guide I launched DataHub in EKS and exposed it using an AWS load balancer, and it was working fine. However, yesterday I accidentally deleted the AWS load balancer (the GMS one) from the AWS Console. How can I re-create it? Now I can't access my GMS endpoint... 😢 I have tried to reinstall the alb-controller and reset the ingress, but it doesn't seem to work; the removed ALB didn't come back.
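    For reference, the AWS load balancer controller provisions an ALB from an Ingress resource, so forcing a fresh reconcile of the Ingress should re-create the deleted ALB. A hedged minimal sketch (resource names, scheme and port are placeholders/assumptions for a typical GMS exposure; verify against your original manifests):

    ```
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: datahub-datahub-gms
      annotations:
        kubernetes.io/ingress.class: alb
        alb.ingress.kubernetes.io/scheme: internet-facing
        alb.ingress.kubernetes.io/target-type: instance
    spec:
      rules:
        - http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: datahub-datahub-gms
                    port:
                      number: 8080
    ```

    Deleting the Ingress and re-applying it (kubectl delete ingress ... then kubectl apply -f ...) usually forces the controller to provision a new ALB; the controller logs will show why provisioning fails if it does not.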
  • bland-orange-13353 (04/29/2022, 9:54 AM)
    This message was deleted.
  • creamy-smartphone-10810 (04/29/2022, 9:58 AM)
    Hello folks! I'm facing an issue deploying datahub-helm on k8s with my own elasticsearch (using the ECK operator) instead of the elastic that the prerequisites helm chart installs by default. It looks like datahub-gms is having trouble connecting to the elasticsearch I've provided (though it's strange that the elasticsearchSetupJob works perfectly fine). Does anyone know how to solve this? It seems to be a security/auth issue, so I don't know which channel is best to ask in. The issue has been reported on the datahub-helm repo (https://github.com/acryldata/datahub-helm/issues/17)
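    If it is indeed auth, GMS takes its elasticsearch connection settings from environment variables that the helm chart can inject. A hedged sketch (variable names recalled from the DataHub docker docs; values are placeholders for a typical ECK setup, so verify against your version):

    ```
    ELASTICSEARCH_HOST=<eck-cluster-name>-es-http
    ELASTICSEARCH_PORT=9200
    ELASTICSEARCH_USE_SSL=true
    ELASTICSEARCH_USERNAME=elastic
    ELASTICSEARCH_PASSWORD=<from the ECK elastic-user secret>
    ```

    Note the setup job and GMS are configured separately in the chart, which would explain the setup job succeeding while GMS fails: the credentials may only have been wired into one of them.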
  • creamy-van-28626 (05/04/2022, 1:31 PM)
    Hi team, I am trying to ingest Db2 lineage into DataHub. What should the target platform be?
  • modern-belgium-81337 (05/04/2022, 10:21 PM)
    Hi team, I've just finished deploying DataHub to a k8s cluster. I'm trying to start ingesting, but was wondering how I point datahub at the newly deployed instance? I tried datahub init, but I'm not sure where to get the host from. I tried GMS's endpoint on :8080 but that didn't work. Any pointers?
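    The CLI reads its target from ~/.datahubenv, which datahub init writes after prompting for a host. A hedged sketch of that file (layout recalled from the CLI; verify with your version), assuming GMS has been made reachable locally with kubectl port-forward svc/datahub-datahub-gms 8080:8080 (service name is an assumption based on the default release name):

    ```
    gms:
      server: http://localhost:8080
      token: ""
    ```

    From inside the cluster, recipes can instead point their datahub-rest sink directly at the GMS Service DNS name on port 8080.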
  • bumpy-autumn-58063 (05/05/2022, 9:02 AM)
    I use a Docker-deployed DataHub. My component versions are shown below:
    Docker version 1.13.1
    acryl-datahub, version 0.8.33.3
    hadoop 3.1.4
    hive 3.1.0
    My recipe is shown below:
    source:
      type: hive
      config:
        host_port: "192.168.127.137:2181"
        username: 'root'
        password: '123456'
    sink:
      type: "datahub-rest"
      config:
        server: "http://192.168.88.129:8080"
    My way of executing it:
    python -m datahub ingest -c hive_to_datahub_rest.yml
    But I get an error. Can anyone help diagnose the problem? Thank you for your attention!
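    One likely culprit in a recipe like the one above: host_port points at 2181, which is ZooKeeper's port, while the hive source connects to HiveServer2, whose default port is 10000. A hedged corrected recipe (IPs kept from the original; the port is an assumption about the HiveServer2 setup):

    ```
    source:
      type: hive
      config:
        host_port: "192.168.127.137:10000"   # HiveServer2, not ZooKeeper (2181)
        username: "root"
        password: "123456"
    sink:
      type: "datahub-rest"
      config:
        server: "http://192.168.88.129:8080"
    ```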
  • prehistoric-salesclerk-23462 (05/05/2022, 11:54 AM)
    Hi team, how can I grant admin rights to the users added in user.props? I see that user.props only has user:pass key pairs. I have added users in user.props, but no user can add tags etc., and I get this error:
    Failed to create & add tag: Unauthorized to perform this action. Please contact your DataHub administrator.
    How can I add (admin) permissions for the users?
  • boundless-advantage-58874 (05/05/2022, 12:59 PM)
    #all-things-deployment I am planning to use OpenSearch instead of the out-of-the-box Elasticsearch. Currently the Elasticsearch APIs are supported by OpenSearch, but that may not be the case in the future. I see DataHub is tightly coupled to the Elasticsearch API (especially RestHighLevelClient). Do you have any plan to provide a pluggable OpenSearch service implementation? Or has anyone tried to plug in OpenSearch alongside Elasticsearch so that we can switch over easily?
  • handsome-football-66174 (05/05/2022, 6:52 PM)
    Hi everyone, are we able to share the schema registry created for DataHub with other applications? Or would that affect DataHub?
  • fast-ability-23281 (05/06/2022, 1:00 AM)
    Hi! You might have answered this question previously already... Is there a way for me to update the DataHub applications to the latest release without trashing the data, using K8S/Helm?
  • better-orange-49102 (05/06/2022, 5:41 AM)
    For data retention, I see that I can define entities and aspects and apply custom retention policies. For other entities not specified in the retention.yml, is it assumed to be infinite retention? (I'm only looking to prune userCorpStatus aspects.)
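    For reference, a hedged sketch of a retention.yml entry that prunes only one aspect (schema recalled from the retention docs; the aspect name here is an assumption, check your entity registry for the exact spelling). Anything not matched by an entry is left at the default, indefinite retention:

    ```
    - entity: "*"
      aspect: "corpUserStatus"   # aspect name is an assumption; verify against your registry
      config:
        retention:
          version:
            maxVersions: 1
    ```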
  • creamy-van-28626 (05/06/2022, 12:59 PM)
    Hi team, I have upgraded all the DataHub images to the latest version 0.8.34, but somehow the mae and mce consumers are giving errors: readiness probe failed and liveness probe failed.