# all-things-deployment

    white-horse-97256

    03/03/2023, 9:16 PM
    Hi Team, I am facing a CreateConfigError for the postgres-setup pod.

    cuddly-arm-8412

    03/06/2023, 8:32 AM
    Hi team. When I run python3 -m datahub docker quickstart and visit http://localhost:9002 in the browser, it prompts

    agreeable-belgium-70840

    03/06/2023, 9:51 AM
    Hello guys, I am trying to upgrade to the latest version (v0.10.0) from 0.9.5. I had some issues, but it turned out that the reindexing wasn't taking place. I am now getting this error in the datahub-system-upgrade job. Any ideas?
    2023-03-05 04:13:09.842  WARN 1 --- [           main] c.l.m.s.e.indexbuilder.ESIndexBuilder    : Task: VLmONZTSQhuHKEIcvnripQ:26692411 - Document counts do not match 620459 != 620269. Complete: 99.969376%
    2023-03-05 04:14:09.842  INFO 1 --- [           main] c.l.m.s.e.indexbuilder.ESIndexBuilder    : Task: VLmONZTSQhuHKEIcvnripQ:26692411 - Reindexing from dataprocessinstanceindex_v2 to dataprocessinstanceindex_v2_1677961107472 in progress...
    2023-03-05 04:15:09.876  WARN 1 --- [           main] c.l.m.s.e.indexbuilder.ESIndexBuilder    : Task: VLmONZTSQhuHKEIcvnripQ:26692411 - Document counts do not match 620463 != 620269. Complete: 99.968735%
    2023-03-05 04:16:09.876  INFO 1 --- [           main] c.l.m.s.e.indexbuilder.ESIndexBuilder    : Task: VLmONZTSQhuHKEIcvnripQ:26692411 - Reindexing from dataprocessinstanceindex_v2 to dataprocessinstanceindex_v2_1677961107472 in progress...
    2023-03-05 04:17:09.910  WARN 1 --- [           main] c.l.m.s.e.indexbuilder.ESIndexBuilder    : Task: VLmONZTSQhuHKEIcvnripQ:26692411 - Document counts do not match 620469 != 620269. Complete: 99.967766%
    2023-03-05 04:18:09.910  INFO 1 --- [           main] c.l.m.s.e.indexbuilder.ESIndexBuilder    : Task: VLmONZTSQhuHKEIcvnripQ:26692411 - Reindexing from dataprocessinstanceindex_v2 to dataprocessinstanceindex_v2_1677961107472 in progress...
    2023-03-05 04:19:09.946  WARN 1 --- [           main] c.l.m.s.e.indexbuilder.ESIndexBuilder    : Task: VLmONZTSQhuHKEIcvnripQ:26692411 - Document counts do not match 620496 != 620269. Complete: 99.96342%
    2023-03-05 04:20:09.946 ERROR 1 --- [           main] c.l.m.s.e.indexbuilder.ESIndexBuilder    : Index: dataprocessinstanceindex_v2 - Post-reindex document count is different, source_doc_count: 620496 reindex_doc_count: 620269
    2023-03-05 04:20:09.946 ERROR 1 --- [           main] c.l.m.s.e.indexbuilder.ESIndexBuilder    : Failed to reindex dataprocessinstanceindex_v2 to dataprocessinstanceindex_v2_1677961107472: Exception java.lang.RuntimeException: Reindex from dataprocessinstanceindex_v2 to dataprocessinstanceindex_v2_1677961107472 failed. Document count 620496 != 620269
    2023-03-05 04:20:10.355 ERROR 1 --- [           main] c.l.d.u.s.e.steps.BuildIndicesStep       : BuildIndicesStep failed.
    
    java.lang.RuntimeException: java.lang.RuntimeException: Reindex from dataprocessinstanceindex_v2 to dataprocessinstanceindex_v2_1677961107472 failed. Document count 620496 != 620269
    	at com.linkedin.metadata.search.elasticsearch.indexbuilder.ESIndexBuilder.buildIndex(ESIndexBuilder.java:213) ~[metadata-io.jar!/:na]
    	at com.linkedin.metadata.search.elasticsearch.indexbuilder.EntityIndexBuilders.reindexAll(EntityIndexBuilders.java:26) ~[metadata-io.jar!/:na]
    	at com.linkedin.metadata.search.elasticsearch.ElasticSearchService.configure(ElasticSearchService.java:41) ~[metadata-io.jar!/:na]
    	at com.linkedin.metadata.search.elasticsearch.ElasticSearchService.reindexAll(ElasticSearchService.java:51) ~[metadata-io.jar!/:na]
    	at com.linkedin.datahub.upgrade.system.elasticsearch.steps.BuildIndicesStep.lambda$executable$0(BuildIndicesStep.java:36) ~[classes!/:na]
    	at com.linkedin.datahub.upgrade.impl.DefaultUpgradeManager.executeStepInternal(DefaultUpgradeManager.java:106) ~[classes!/:na]
    	at com.linkedin.datahub.upgrade.impl.DefaultUpgradeManager.executeInternal(DefaultUpgradeManager.java:65) ~[classes!/:na]
    	at com.linkedin.datahub.upgrade.impl.DefaultUpgradeManager.executeInternal(DefaultUpgradeManager.java:39) ~[classes!/:na]
    	at com.linkedin.datahub.upgrade.impl.DefaultUpgradeManager.execute(DefaultUpgradeManager.java:30) ~[classes!/:na]
    	at com.linkedin.datahub.upgrade.UpgradeCli.run(UpgradeCli.java:80) ~[classes!/:na]
    	at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:812) ~[spring-boot-2.5.12.jar!/:2.5.12]
    	at org.springframework.boot.SpringApplication.callRunners(SpringApplication.java:796) ~[spring-boot-2.5.12.jar!/:2.5.12]
    	at org.springframework.boot.SpringApplication.run(SpringApplication.java:346) ~[spring-boot-2.5.12.jar!/:2.5.12]
    	at org.springframework.boot.builder.SpringApplicationBuilder.run(SpringApplicationBuilder.java:143) ~[spring-boot-2.5.12.jar!/:2.5.12]
    	at com.linkedin.datahub.upgrade.UpgradeCliApplication.main(UpgradeCliApplication.java:23) ~[classes!/:na]
    	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na]
    	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:na]
    	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na]
    	at java.base/java.lang.reflect.Method.invoke(Method.java:566) ~[na:na]
    	at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:49) ~[datahub-upgrade.jar:na]
    	at org.springframework.boot.loader.Launcher.launch(Launcher.java:108) ~[datahub-upgrade.jar:na]
    	at org.springframework.boot.loader.Launcher.launch(Launcher.java:58) ~[datahub-upgrade.jar:na]
    	at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:88) ~[datahub-upgrade.jar:na]
    Caused by: java.lang.RuntimeException: Reindex from dataprocessinstanceindex_v2 to dataprocessinstanceindex_v2_1677961107472 failed. Document count 620496 != 620269
    	at com.linkedin.metadata.search.elasticsearch.indexbuilder.ESIndexBuilder.reindex(ESIndexBuilder.java:295) ~[metadata-io.jar!/:na]
    	at com.linkedin.metadata.search.elasticsearch.indexbuilder.ESIndexBuilder.buildIndex(ESIndexBuilder.java:211) ~[metadata-io.jar!/:na]
    	... 22 common frames omitted
    
    Failed Step 2/5: BuildIndicesStep. Failed after 0 retries.
    Exiting upgrade SystemUpdate with failure.
    Upgrade SystemUpdate completed with result FAILED. Exiting...
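    [Editor's note] The source count keeps climbing while the target count stays fixed, which usually means writes were still landing in the source index during the reindex. A quick way to pull the drift out of a saved upgrade log (the sample lines are embedded below purely for illustration):

```shell
# Embed a few of the WARN lines from above as a sample log.
cat > upgrade.log <<'EOF'
WARN ESIndexBuilder : Document counts do not match 620459 != 620269. Complete: 99.969376%
WARN ESIndexBuilder : Document counts do not match 620463 != 620269. Complete: 99.968735%
WARN ESIndexBuilder : Document counts do not match 620496 != 620269. Complete: 99.96342%
EOF
# Extract source vs. target counts; a growing "source=" with a fixed
# "target=" shows new documents arriving mid-reindex.
grep -o 'match [0-9]* != [0-9]*' upgrade.log \
  | awk '{print "source=" $2, "target=" $4}'
```

    If the counts keep diverging like this, quiescing writes (e.g. stopping GMS and the consumers for the duration of the system update) is the usual remedy; the exact procedure depends on your deployment, so treat this as a sketch rather than the documented fix.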

    gifted-diamond-19544

    03/06/2023, 12:06 PM
    Hello all. We are trying to update our DataHub version from 0.9.5 to 0.10. We have our infrastructure deployed on AWS via CDK. The GMS, Actions, Frontend and Schema Registry run as containers on ECS, while MySQL, Kafka and Elasticsearch run on the dedicated managed services. When I run the command:
    docker run --env-file docker_env.env  acryldata/datahub-upgrade:v0.10.0 -u SystemUpdate
    We get the error
    factory.entity.EbeanServerFactory  : Failed to connect to the server. Is it up?
    (Full log in the comment). This seems to indicate that it cannot connect to our db service. However, I can connect to our mysql database using the mysql client on the CLI. Also, the mysql-setup container runs with the same variable values that are failing on the datahub-upgrade container. I will post the variables we are using in the comments below. What can we do to debug this?
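    [Editor's note] For comparison, the datahub-upgrade container reads its database settings from EBEAN_DATASOURCE_* environment variables. A sketch of a docker_env.env with placeholder host and credentials (variable names are from the datahub-upgrade documentation; verify against your version). A frequent culprit with this symptom is the host resolving to localhost inside the container rather than the RDS endpoint:

```
# docker_env.env — sketch; host and credentials are placeholders
EBEAN_DATASOURCE_HOST=your-mysql-host:3306
EBEAN_DATASOURCE_URL=jdbc:mysql://your-mysql-host:3306/datahub
EBEAN_DATASOURCE_USERNAME=datahub
EBEAN_DATASOURCE_PASSWORD=datahub
EBEAN_DATASOURCE_DRIVER=com.mysql.cj.jdbc.Driver
```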

    calm-dinner-63735

    03/06/2023, 12:38 PM
    I am getting the below error while upgrading my DataHub CLI.

    creamy-van-28626

    03/06/2023, 1:23 PM
    Hi guys, when will you be releasing the next upgrade? We are expecting a few vulnerabilities to be resolved in the upcoming version.

    wonderful-spring-3326

    03/06/2023, 2:01 PM
    Is there a way to export all the things in DataHub to the file sink? (i.e. is there a datahub source?)
  • w

    wonderful-spring-3326

    03/06/2023, 2:03 PM
    Or alternatively, is there a way I can easily back up the data that's in DataHub (preferably human-readable in addition to machine-readable), in a way that DataHub can read it again, i.e. via the file source?
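    [Editor's note] DataHub ingestion recipes do support a file sink, and the file source can re-ingest what the sink wrote, which gives a crude but human-readable round trip. A sketch of a recipe (the source type and paths are illustrative placeholders):

```yaml
# export sketch — the source shown is illustrative; the file sink
# writes the emitted metadata events as JSON
source:
  type: mysql
  config:
    host_port: localhost:3306
sink:
  type: file
  config:
    filename: ./datahub_export.json
```

    The resulting JSON file can then be pointed at a `type: file` source to load it back. Whether this captures *everything* in an instance (as opposed to what one ingestion source emits) depends on your version, so check the sink/source docs before relying on it as a backup.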

    elegant-article-21703

    03/06/2023, 5:54 PM
    Hi everyone! I'm having issues with the password change of the frontend in v0.10.0. I have followed the instructions here to change the root DataHub password. However, I'm now not able to log in to the frontend. The login page loads properly but, once I submit my login, it shows a message of "An error occurred" and the console returns the following error:
    Error: Could not find logged in user.
        at oa (useGetAuthenticatedUser.tsx:23:15)
        at is (SearchBar.tsx:208:21)
        at oi (react-dom.production.min.js:157:137)
        at Vc (react-dom.production.min.js:267:460)
        at Cs (react-dom.production.min.js:250:347)
        at Ms (react-dom.production.min.js:250:278)
        at ks (react-dom.production.min.js:250:138)
        at vs (react-dom.production.min.js:243:163)
        at react-dom.production.min.js:123:115
        at t.unstable_runWithPriority (scheduler.production.min.js:18:343)
    The pod logs return the following:
    Copy code
    2023-03-06 17:23:02,992 [main] INFO  o.a.kafka.common.utils.AppInfoParser - Kafka version: 2.3.0
    2023-03-06 17:23:02,992 [main] INFO  o.a.kafka.common.utils.AppInfoParser - Kafka commitId: fc1aaa116b661c8a
    2023-03-06 17:23:02,992 [main] INFO  o.a.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1678123382990
    2023-03-06 17:23:03,000 [main] INFO  play.api.Play - Application started (Prod) (no global state)
    2023-03-06 17:23:03,293 [kafka-producer-network-thread | datahub-frontend] INFO  org.apache.kafka.clients.Metadata - [Producer clientId=datahub-frontend] Cluster ID: yrLVVjCzTV26WzHtU6FNQQ
    2023-03-06 17:23:03,326 [main] INFO  server.CustomAkkaHttpServer - Setting max header count to: 64
    2023-03-06 17:23:03,711 [main] INFO  play.core.server.AkkaHttpServer - Listening for HTTP on /0:0:0:0:0:0:0:0:9002
    2023-03-06 17:23:56,636 [proxyClient-akka.actor.default-dispatcher-5] INFO  akka.event.slf4j.Slf4jLogger - Slf4jLogger started
    2023-03-06 17:24:49,380 [application-akka.actor.default-dispatcher-11] INFO  org.eclipse.jetty.util.log - Logging initialized @112399ms to org.eclipse.jetty.util.log.Slf4jLog
    2023-03-06 17:24:49,413 [application-akka.actor.default-dispatcher-11] WARN  o.e.j.j.spi.PropertyFileLoginModule - Exception starting propertyUserStore /etc/datahub/plugins/frontend/auth/user.props
    The image shows the Network tab. Any help on this issue is highly appreciated! Thank you all in advance!
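    [Editor's note] The WARN about propertyUserStore in the pod log above suggests the frontend failed to load the mounted file at /etc/datahub/plugins/frontend/auth/user.props. For reference, user.props is plain `username:password` lines, one user per line; a sketch with a placeholder password:

```
datahub:MyNewPassword123
```

    A malformed line, a trailing BOM/whitespace issue, or wrong mount permissions on that file are common causes of exactly this "could not find logged in user" symptom, though that is a hedged guess from the log rather than a confirmed diagnosis.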

    handsome-flag-16272

    03/06/2023, 5:57 PM
    Hi team, my local build is really slow. I started a build with the command "./gradlew -x test clean build". After 26 minutes it is still ongoing. Here is the build log; it is stuck here.
    INFO: pip is looking at multiple versions of flask-appbuilder to determine which version is compatible with other requirements. This could take a while.
      Using cached apache_airflow-2.4.2-py3-none-any.whl (6.5 MB)
    <============-> 95% EXECUTING [27m 8s]
    > :metadata-ingestion-modules:airflow-plugin:installDev
    > IDLE
    > IDLE
    > IDLE
    > IDLE
    > IDLE
    > IDLE
    > IDLE
    > IDLE
    > IDLE
    > IDLE
    > IDLE
    > IDLE
    > IDLE
    > IDLE
    > IDLE

    white-horse-97256

    03/06/2023, 9:48 PM
    Hi Team, a question regarding kafka-broker: do we need to give a zookeeper server URL, or is this optional (https://github.com/acryldata/datahub-helm/blob/a7d4a5240c5d706023844412750a216becb12bf0/charts/datahub/values.yaml#L234) if we are using our own Kafka servers?
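    [Editor's note] For reference, in that values.yaml the broker and zookeeper endpoints sit side by side under the kafka key. A sketch for bringing your own Kafka (endpoints are placeholders; whether zookeeper can be omitted depends on your chart version and on whether any component, such as the Confluent schema registry, still requires it, so treat this as illustrative):

```yaml
kafka:
  bootstrap:
    server: "my-kafka-1:9092,my-kafka-2:9092"
  zookeeper:
    server: "my-zookeeper:2181"
```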

    cuddly-arm-8412

    03/07/2023, 1:02 AM
    Hi team. When I quickstart, I found that my Kafka prompts:
    [2023-03-06 10:42:43,603] ERROR Error while loading log dir /var/lib/kafka/data (kafka.server.LogDirFailureChannel)
    java.nio.file.AccessDeniedException: /var/lib/kafka/data/__consumer_offsets-31/00000000000000000000.log
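    [Editor's note] An AccessDeniedException under /var/lib/kafka/data is almost always a volume-permissions problem: the broker's unix user cannot write the mounted path. A sketch of the fix pattern, exercised on a scratch directory standing in for the volume (in a real setup you would fix ownership and permissions on whatever backs /var/lib/kafka/data; the broker uid varies by image, so that detail is an assumption):

```shell
mkdir -p ./kafka-data
chmod 000 ./kafka-data   # simulate the broken state: nobody can enter or write
chmod 755 ./kafka-data   # the fix: directory traversable and owner-writable again
# On a real host volume you would additionally chown it to the broker's uid, e.g.
#   sudo chown -R 1000:1000 /path/backing/var-lib-kafka-data   # uid is an assumption
ls -ld ./kafka-data | cut -c1-10
```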

    cuddly-arm-8412

    03/07/2023, 8:34 AM
    Hi team. My downstream lineage entities exceed 1000; when I expand to level 5, the error reported is a RejectedExecutionException:
    @Nonnull
    private Result runQuery(@Nonnull Statement statement) {
      log.debug(String.format("Running Neo4j query %s", statement.toString()));
      try (Timer.Context ignored = MetricUtils.timer(this.getClass(), "runQuery").time()) {
        return _driver.session(_sessionConfig).run(statement.getCommandText(), statement.getParams());
      }
    }
    14:36:11.754 [ForkJoinPool.commonPool-worker-164] ERROR c.l.d.g.e.DataHubDataFetcherExceptionHandler:21 - Failed to execute DataFetcher
    java.util.concurrent.CompletionException: java.util.concurrent.RejectedExecutionException: Thread limit exceeded replacing blocked worker
    	at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
    	at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
    	at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1606)
    	at java.util.concurrent.CompletableFuture$AsyncSupply.exec(CompletableFuture.java:1596)
    	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
    	at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
    	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
    	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
    Caused by: java.util.concurrent.RejectedExecutionException: Thread limit exceeded replacing blocked worker
    	at java.util.concurrent.ForkJoinPool.tryCompensate(ForkJoinPool.java:2011)
    	at java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3310)
    	at java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
    	at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
    	at org.neo4j.driver.internal.util.Futures.blockingGet(Futures.java:128)
    	at org.neo4j.driver.internal.InternalSession.run(InternalSession.java:69)
    	at org.neo4j.driver.internal.InternalSession.run(InternalSession.java:51)
    	at org.neo4j.driver.internal.AbstractQueryRunner.run(AbstractQueryRunner.java:37)
    	at org.neo4j.driver.internal.AbstractQueryRunner.run(AbstractQueryRunner.java:43)
    	at com.linkedin.metadata.graph.neo4j.Neo4jGraphService.runQuery(Neo4jGraphService.java:329)
    	at com.linkedin.metadata.graph.neo4j.Neo4jGraphService.findRelatedEntities(Neo4jGraphService.java:159)
    	at com.linkedin.metadata.graph.GraphService.getLineage(GraphService.java:163)
    	at com.linkedin.metadata.graph.GraphService.getLineage(GraphService.java:98)
    	at com.linkedin.metadata.graph.SiblingGraphService.getLineage(SiblingGraphService.java:54)
    	at com.linkedin.datahub.graphql.resolvers.load.EntityLineageResultResolver.lambda$get$0(EntityLineageResultResolver.java:54)
    	at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
    	... 5 common frames omitted

    handsome-flag-16272

    03/07/2023, 10:01 PM
    Hi team, when I run the build command "./gradlew quickstartDebug" on the master branch, it gets the error below:
    Summary of all failing tests
    FAIL src/app/shared/time/__tests__/timeUtils.test.tsx
      ● timeUtils › addInterval › add date interval works correctly
    
        expect(received).toEqual(expected) // deep equality
    
        Expected: 1679661504000
        Received: 1679657904000
    
           8 |             const afterAdd = addInterval(1, input, DateInterval.Month);
           9 |             const expected = new Date(1679661504000);
        > 10 |             expect(afterAdd.getTime()).toEqual(expected.getTime());
             |                                        ^
          11 |         });
          12 |     });
          13 | });
    
          at Object.<anonymous> (src/app/shared/time/__tests__/timeUtils.test.tsx:10:40)
    
    
    Test Suites: 1 failed, 53 passed, 54 total
    Tests:       1 failed, 243 passed, 244 total
    Snapshots:   0 total
    Time:        67.997 s
    Ran all test suites.
    error Command failed with exit code 1.
    
    > Task :datahub-web-react:yarnTest FAILED
    
    FAILURE: Build failed with an exception.
    
    * What went wrong:
    Execution failed for task ':datahub-web-react:yarnTest'.
    > Process 'command '/Users/anquanliu/Documents/project/datahub_origin/datahub-web-react/.gradle/yarn/yarn-v1.22.0/bin/yarn'' finished with non-zero exit value 1
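    [Editor's note] One hint in this failure: the expected and received timestamps differ by exactly one hour, which is the classic signature of a timezone/DST-sensitive date test rather than a build problem. A quick check of the skew (values copied from the test output above):

```shell
expected=1679661504000
received=1679657904000
echo "skew_ms=$(( expected - received ))"   # exactly 3600000 ms = one hour
```

    If that is the cause, pinning the test run to UTC (e.g. prefixing the Gradle or yarn invocation with TZ=UTC) is a plausible workaround; the command form is illustrative, not a documented flag.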

    mysterious-hamburger-10596

    03/07/2023, 11:36 PM
    @astonishing-answer-96712 added a workflow to this channel: Community Support Bot.

    fierce-forest-92066

    03/08/2023, 4:23 AM
    Hi! If I'm working locally on a Docker image but want to productionize DataHub in the future, am I able to transfer the data over? I don't see a doc on this particular issue.

    blue-microphone-24514

    03/08/2023, 11:28 AM
    Deployed DataHub on Kubernetes following the quickstart; it is running fine. How do I change the default admin's password? Reset password only works for other users, not for the root one.

    lemon-scooter-69730

    03/08/2023, 11:51 AM
    It looks like if you mount a custom user.props file, the users in it don't have superuser access... is there a setting I need to put in, in addition to the file?

    witty-toddler-69828

    03/08/2023, 4:37 PM
    Hello, we are deploying DataHub to AWS ECS, using the managed services for MySQL, Kafka and Elasticsearch. The GMS services do not seem to be starting properly; we are getting errors like this on the GMS containers:
    2023-03-08 16:25:37,830 [R2 Nio Event Loop-1-1] WARN c.l.r.t.h.c.c.ChannelPoolLifecycle:139 - Failed to create channel, remote=localhost/127.0.0.1:8080
    
    2023-03-08T16:25:37.830+00:00	io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: localhost/127.0.0.1:8080
    
    2023-03-08T16:25:37.830+00:00	Caused by: java.net.ConnectException: Connection refused
    From the React Front End a user cannot login and we get:
    Caused by: org.apache.http.conn.HttpHostConnectException: Connect to Our_Host_Name:8080 [Our_Host_Name/10.89.214.142] failed: Connection refused (Connection refused)
    Anyone got any thoughts?

    sparse-lighter-97287

    03/08/2023, 5:04 PM
    Hey DH Team, I am looking for a solution to get an initial bearer token without user interaction. I deploy DH 0.10.0 into my EKS stack and have a custom built app that needs a bearer token in order to make api calls. All deployments are automated and general users authenticate via keycloak (if that helps with setup). Any suggestions on a service account and automating this kind of capability? Appreciate any info, suggestions or feedback! (previously posted in #authentication-authorization, but makes more sense in #all-things-deployment )
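    [Editor's note] For automation like this, the GraphQL API exposes a createAccessToken mutation that an admin (or a privileged service identity) can call without UI interaction. The field names below are recalled from the 0.10.x schema and should be verified against the GraphQL API reference for your version; the actor URN and token name are placeholders:

```
mutation {
  createAccessToken(
    input: {
      type: PERSONAL
      actorUrn: "urn:li:corpuser:svc-my-app"
      duration: ONE_MONTH
      name: "svc-my-app token"
    }
  ) {
    accessToken
  }
}
```

    The chicken-and-egg part (authenticating the call that mints the first token) is usually solved with the metadata-service's system credentials configured at deploy time; how those are supplied is deployment-specific, so check the authentication docs rather than treating this sketch as the full procedure.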

    astonishing-article-48608

    03/08/2023, 5:22 PM
    Hello DH team, we are integrating our Databricks Unity Catalog with DataHub to collect metadata. We have thousands of Databricks workspaces, which will require thousands of recipe files, one per schema in a workspace. Is there a recommendation on how to centralize or move multiple schemas in a workspace using some aggregation scheme? Is there a roadmap for a feature to extract the entire catalog from a workspace in one go? Thanks.

    important-student-69487

    03/09/2023, 4:35 AM
    I am looking to deploy DataHub in our AWS environment. Can someone help me with the approximate monthly cost?

    important-student-69487

    03/09/2023, 4:36 AM
    We are on Snowflake using dbt and fivetran..

    able-city-76673

    03/09/2023, 6:05 AM
    Hello, we have deployed DataHub in Azure Kubernetes Service. We aren't able to configure ingress, as we are getting a 404. Is there any document on deploying DataHub on Azure, or help with ingress configuration for Azure Application Gateway?

    blue-engineer-74605

    03/09/2023, 2:06 PM
    Hey folks! We are currently running DataHub on K8s and we are about to launch the tool, so scaling up seems to be a good idea to avoid downtime or any slowness. Is it OK to scale DataHub horizontally, just by increasing the number of replicas for the frontend and GMS pods? Or am I missing something here?

    lemon-scooter-69730

    03/09/2023, 2:16 PM
    Hello, in the Helm charts, where is the storage configured for prerequisite-mysql?

    gifted-diamond-19544

    03/09/2023, 3:14 PM
    Hello all! Is there any way I can activate the query tab for Athena datasets? I would like to see the list of queries someone makes to a particular Athena table. If this is not possible via the ingestion recipes, is there any way I can use GraphQL to upload a list of queries to a given Athena dataset that then get shown on the query tab? Thank you 🙂 cc: @careful-garden-46928

    wonderful-spring-3326

    03/10/2023, 8:08 AM
    Has anyone ever tapped into the Metadata Change Log events and turned them back into something that can be read by the file source?

    gifted-diamond-19544

    03/10/2023, 10:23 AM
    Hello all! Is there any way I can create users and generate an access token for them via the GraphQL API? Basically I would like to use my administrator account to create new users and create an access token for each, all via GraphQL. As far as I can see, there is no createUser mutation in the API reference. Thank you 🙂 cc: @careful-garden-46928

    mammoth-needle-20408

    03/10/2023, 3:36 PM
    Ciao all DataHub! I am currently looking to deploy DataHub in a Kubernetes cluster that will interface with resources deployed in AWS. I am using the Helm chart and currently getting an error in the datahubSystemUpdate job. From the logs it looks like the connection is timing out, but from the pod itself I can reach the services in AWS and my schema-registry pod. Moreover, I am using a schema-registry that I had previously deployed in my cluster; this one has basic auth, and I should set kafka.schemaregistry.type to "confluent" (not KAFKA and not AWS_GLUE). I tested with a curl from the pod and it seems to be getting through correctly. I also checked in our Kafka and it seems that the job has correctly created topics like DataHubUpgradeHistory_v1. Would you have any suggestions on this? Thank you!
    logs.txt