• careful-engine-38533

    4 days ago
Hi, I use Helm to deploy DataHub. So far I have used the internal MySQL installed by the Helm prerequisites chart; now I want to switch to an external MySQL. How can I do this without data loss? Any help?
2 replies
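A switch like the one asked about above is usually done by dumping the internal MySQL and restoring it into the external instance before repointing the chart. The sketch below is untested; the pod/service names and the `global.sql.datasource.*` value keys are assumptions based on common datahub-helm layouts, so verify them against the `values.yaml` of your chart version, and pause ingestion/writes during the dump to avoid losing data.

```shell
# 1. Dump the DataHub schema from the in-cluster MySQL.
#    (Pod name and credentials come from the prerequisites chart and may differ.)
kubectl exec -n datahub prerequisites-mysql-0 -- \
  mysqldump -u root -p"$MYSQL_ROOT_PASSWORD" datahub > datahub-dump.sql

# 2. Restore the dump into the external MySQL instance.
mysql -h external-mysql.example.com -u datahub -p"$EXTERNAL_PASSWORD" datahub < datahub-dump.sql

# 3. Repoint the chart at the external instance.
#    The value keys below are assumptions; check your chart's values.yaml,
#    and remember to disable the internal MySQL in the prerequisites chart.
helm upgrade datahub datahub/datahub -n datahub \
  --set global.sql.datasource.host="external-mysql.example.com:3306" \
  --set global.sql.datasource.hostForMysqlClient="external-mysql.example.com" \
  --set global.sql.datasource.url="jdbc:mysql://external-mysql.example.com:3306/datahub" \
  --set global.sql.datasource.username="datahub"
```

After the upgrade, the GMS pod restarts against the external database; confirm entity counts in the UI before decommissioning the internal MySQL.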
• shy-lion-56425

    4 days ago
When deploying DataHub via Helm, is there an easy way to enable additional sources for datahub-actions, such as an S3 data lake?
3 replies
• victorious-xylophone-76105

    4 days ago
We have some issues drilling down into entity metadata from the UI main page: it shows "No entities" in most deployment cases except the default quickstart (quickstart with an explicit version also has the problem). I have opened a bug: https://github.com/datahub-project/datahub/issues/6014 . If anyone knows what the issue is and how to work around it, please let me know. We really need that feature.
• tall-butcher-30509

    3 days ago
Is there any way to change the default scope of the visual lineage?
• microscopic-mechanic-13766

    5 days ago
Good morning. I have been looking through the new RBAC features and noticed that, although you can assign users different roles, they can all see all the datasets ingested into DataHub. Is there a way to control who can see certain datasets? I am asking because I think it is a key feature and I don't know whether it is implemented or not. Thanks in advance!
7 replies
• big-carpet-38439

    3 days ago
    No - A task must be part of a parent Pipeline in DataHub's model
3 replies
• careful-engine-38533

    3 days ago
Hi, my MongoDB ingestion fails with the following message. Any help?
    '/usr/local/bin/run_ingest.sh: line 40:    79 Killed                  ( datahub ingest run -c "${recipe_file}" ${report_option} )\n',
               "2022-09-22 06:29:49.739560 [exec_id=29430983-bfd2-4551-b153-c869537f5fe5] INFO: Failed to execute 'datahub ingest'",
               '2022-09-22 06:29:49.739831 [exec_id=29430983-bfd2-4551-b153-c869537f5fe5] INFO: Caught exception EXECUTING '
               'task_id=29430983-bfd2-4551-b153-c869537f5fe5, name=RUN_INGEST, stacktrace=Traceback (most recent call last):\n'
               '  File "/usr/local/lib/python3.9/site-packages/acryl/executor/execution/default_executor.py", line 122, in execute_task\n'
               '    self.event_loop.run_until_complete(task_future)\n'
               '  File "/usr/local/lib/python3.9/site-packages/nest_asyncio.py", line 89, in
2 replies
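The bare `Killed` line in the log above is what an out-of-memory kill of the ingestion subprocess typically looks like, so one common remedy is to give the executor pod more memory. A sketch, assuming Helm-based deployment; the `acryl-datahub-actions.resources` values path is an assumption and should be confirmed against your chart's `values.yaml`:

```shell
# Raise memory requests/limits on the actions pod that runs `datahub ingest`.
# Values path is an assumption; verify it in the chart's values.yaml.
helm upgrade datahub datahub/datahub -n datahub \
  --set acryl-datahub-actions.resources.requests.memory=1Gi \
  --set acryl-datahub-actions.resources.limits.memory=2Gi
```

If the pod was OOM-killed, `kubectl describe pod` on the actions pod will usually show `OOMKilled` as the last termination reason.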
• rich-policeman-92383

    5 days ago
Hello. How can we disable the HTTP TRACE/TRACK methods for the DataHub MAE and MCE consumers? This is reported by our infosec team as a vulnerability. DataHub version: v0.8.41
5 replies
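One way to approach the TRACE/TRACK finding above is to verify it at the HTTP layer and, if the application containers expose no toggle, block the methods at the ingress in front of the consumers. A sketch assuming an ingress-nginx controller; the hostnames, ports, and ingress name are placeholders:

```shell
# Confirm whether TRACE is currently answered
# (a 405 or 501 response means it is already blocked).
curl -i -X TRACE http://datahub-mae-consumer:9091/

# With ingress-nginx, methods can be filtered via a server-snippet annotation.
# Snippet annotations must be enabled on the controller; this is a sketch only.
kubectl annotate ingress datahub-mae -n datahub \
  nginx.ingress.kubernetes.io/server-snippet='if ($request_method ~ ^(TRACE|TRACK)$) { return 405; }'
```

Re-run the curl check afterwards to confirm the scanner finding is resolved.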
• rapid-book-98432

    2 days ago
Hi hi. I'm facing a new problem deploying DataHub with the Helm chart. It seems to be linked to the Elasticsearch setup job:
    helm install datahub datahub/datahub -n demo --version 0.2.83 --debug
    install.go:178: [debug] Original chart version: "0.2.83"
    install.go:195: [debug] CHART PATH: /home/cmo/.cache/helm/repository/datahub-0.2.83.tgz
    client.go:299: [debug] Starting delete for "datahub-elasticsearch-setup-job" Job
    client.go:128: [debug] creating 1 resource(s)
    client.go:529: [debug] Watching for changes to Job datahub-elasticsearch-setup-job with timeout of 5m0s
    client.go:557: [debug] Add/Modify event for datahub-elasticsearch-setup-job: ADDED
    client.go:596: [debug] datahub-elasticsearch-setup-job: Jobs active: 1, jobs failed: 0, jobs succeeded: 0
    client.go:557: [debug] Add/Modify event for datahub-elasticsearch-setup-job: MODIFIED
    client.go:596: [debug] datahub-elasticsearch-setup-job: Jobs active: 1, jobs failed: 1, jobs succeeded: 0
    client.go:557: [debug] Add/Modify event for datahub-elasticsearch-setup-job: MODIFIED
    client.go:596: [debug] datahub-elasticsearch-setup-job: Jobs active: 1, jobs failed: 2, jobs succeeded: 0
    Error: INSTALLATION FAILED: failed pre-install: timed out waiting for the condition
    helm.go:84: [debug] failed pre-install: timed out waiting for the condition
    INSTALLATION FAILED
    main.newInstallCmd.func2
    helm.sh/helm/v3/cmd/helm/install.go:127
    github.com/spf13/cobra.(*Command).execute
    github.com/spf13/cobra@v1.3.0/command.go:856
    github.com/spf13/cobra.(*Command).ExecuteC
    github.com/spf13/cobra@v1.3.0/command.go:974
    github.com/spf13/cobra.(*Command).Execute
    github.com/spf13/cobra@v1.3.0/command.go:902
    main.main
    helm.sh/helm/v3/cmd/helm/helm.go:83
    runtime.main
    runtime/proc.go:255
    runtime.goexit
    runtime/asm_amd64.s:1581
If you have any idea, thanks! N.B.: the ES setup job container has this log:
    2022/09/23 12:34:01 Problem with request: Get http://elasticsearch-master:9200: dial tcp 10.0.125.20:9200: connect: connection refused. Sleeping 1s
    2022/09/23 12:34:03 Problem with request: Get http://elasticsearch-master:9200: dial tcp 10.0.125.20:9200: connect: connection refused. Sleeping 1s
    2022/09/23 12:34:03 Timeout after 2m0s waiting on dependencies to become available: [http://elasticsearch-master:9200]
    Well known error here -_-
24 replies
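The `connection refused` loop in the log above usually means Elasticsearch itself is not up, or the `elasticsearch-master` service has no ready endpoints, rather than a DataHub-side fault. A few quick checks, with the namespace and label selector as placeholders:

```shell
# Is the Elasticsearch pod actually running and ready?
kubectl get pods -n demo -l app=elasticsearch-master

# Does the service have endpoints? An empty ENDPOINTS column
# explains the "connection refused" in the setup job's log.
kubectl get endpoints elasticsearch-master -n demo

# Can ES be reached from inside the cluster?
kubectl run es-check -n demo --rm -it --image=curlimages/curl --restart=Never -- \
  curl -s http://elasticsearch-master:9200
```

If the pod is pending or crash-looping, `kubectl describe pod` on it (resource pressure and persistent-volume binding are frequent causes) is the next step; once ES answers on 9200, the setup job should succeed on retry.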
• lemon-cat-72045

    2 days ago
    Hi, team. We have deployed Datahub with ES as its graph database backend. I'm wondering if there is guidance to migrate from ES to Neo4j without losing any data. Thanks in advance.
3 replies