# troubleshoot

    happy-easter-36246

    03/30/2023, 3:22 AM
Hi Team. I have noticed that the DataHub GraphQL search API only considers the fieldTags field when pulling schema-field-level tag details, even though there is another field, editedFieldTags, in the datasetindex_v2 index that stores tags added to schema fields. As a result, we are getting partial results from the GraphQL API. Can you please take a look and suggest any alternative way to get all schema-level tag details for now? I have used this sample query for pulling out all schema-level tags:
{
  search(input: {type: DATASET, query: "editedFieldTags:not null", start: 0, count: 50}) {
    start
    count
    total
    searchResults {
      entity {
        urn
        ... on Dataset {
          urn
          type
          subTypes { typeNames }
          name
          platform { name }
          properties { name }
          tags {
            tags {
              tag {
                urn
                properties { name description }
              }
            }
          }
          schemaMetadata {
            fields {
              fieldPath
              jsonPath
              label
              tags {
                tags {
                  tag {
                    urn
                    properties { name description }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}

    numerous-account-62719

    03/30/2023, 7:22 AM
Hi Team, I applied lineage to a table in DataHub. Now, a day later, when I open that table again, I am no longer able to see the lineage that was applied. How can I resolve this issue?

    red-plumber-64268

    03/30/2023, 7:59 AM
    resharing here as well in case anyone has any input 🙂

    gifted-diamond-19544

    03/30/2023, 8:46 AM
Hello all! I am having some trouble with DataHub while creating Views in the UI. We are on version v0.10.0. Basically, I want to create a view to filter my search to a specific platform (Tableau). The problem is that when I try to select a platform, no platform shows up (we do have Tableau assets ingested). How can I solve this? Thanks!

    fresh-cricket-75926

    03/30/2023, 11:17 AM
Hi Community, I have a problem with stateful ingestion. We have had this feature enabled for Redshift for a while, and tables were getting synced until we decided to delete a few of them. After deleting the tables and re-running the recipe, DataHub still displays the tables that are gone. It says "Last synchronized 4 months ago" next to them, so we know that's when they last existed, but it still doesn't soft-delete them. We are on DataHub v0.10.0 and CLI version 0.9.6.3.
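For context, stale-entity removal only kicks in when the stateful_ingestion block has removal enabled and the recipe keeps a stable pipeline_name across runs. A minimal recipe sketch, assuming the Redshift source, with connection details elided and placeholder names:
pipeline_name: redshift_prod             # hypothetical; must stay identical across runs
source:
  type: redshift
  config:
    # ... connection details elided ...
    stateful_ingestion:
      enabled: true
      # soft-delete entities missing since the last successful run
      remove_stale_metadata: true
sink:
  type: datahub-rest
  config:
    server: http://localhost:8080        # placeholder GMS address
Removal is computed against the checkpoint stored for the same pipeline_name, so renaming the pipeline effectively resets the state.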

    fancy-shoe-14428

    03/30/2023, 12:43 PM
Hello everyone! Can somebody tell me why the datahub-actions container uses so much space? I am using the quickstart image, and it rapidly grew to 15 GB when I tried to ingest the tables I have on Redshift… and I am not even using actions 😆 Any help would be appreciated 🫶

    blue-honey-61652

    03/30/2023, 1:08 PM
Hi everyone! (Version: 0.8.45; deployed on Azure K8s.) I am experimenting with the Actions framework right now, but I am having trouble finding any config-file example for the event type "MetadataChangeLog_v1", and I can't get it to work from the documentation alone. Does anyone have a working config-file example, please?
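For reference, a minimal Actions config sketch that subscribes to the MetadataChangeLog topic and runs the built-in hello_world action. The layout follows the Actions framework docs, but the exact event-type string and defaults are assumptions worth verifying against the 0.8.45-era framework:
# mcl_action.yaml (hypothetical file name); run with: datahub actions -c mcl_action.yaml
name: "mcl_listener"
source:
  type: "kafka"
  config:
    connection:
      bootstrap: ${KAFKA_BOOTSTRAP_SERVER:-localhost:9092}
      schema_registry_url: ${SCHEMA_REGISTRY_URL:-http://localhost:8081}
filter:
  event_type: "MetadataChangeLogEvent_v1"   # note the "Event" infix in the type name
action:
  type: "hello_world"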

    most-nightfall-36645

    03/30/2023, 2:12 PM
Hi, I am having some trouble emitting metadata to our DataHub instance. When I try to restore indices to our OpenSearch cluster, the emitter job's logs contain the following error:
    2023-03-30 13:40:08.175 ERROR 1 --- [ad | producer-1] c.l.m.dao.producer.KafkaHealthChecker    : Failed to emit MCL for entity urn:li:dataset:(urn:li:dataPlatform:XXXXXXX,PROD)
    
    org.apache.kafka.common.errors.RecordTooLargeException: The request included a message larger than the max message size the server will accept.
I set the broker's max.message.bytes and replica.fetch.max.bytes to 1 GB (much larger than the intended final config) and set the emitter job's SPRING_KAFKA_PRODUCER_PROPERTIES_MAX_REQUEST_SIZE environment variable to 1 GB. However, I still can't emit data. I checked the size of the metadata and it is around 1.4 MB. Am I missing something?
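For what it's worth, here is a sketch of where those settings usually live, expressed as container environment variables (broker-side names vary by Kafka distribution, so treat these as assumptions). One common gotcha: a topic created before the broker limit was raised keeps its own max.message.bytes, which caps messages regardless of the broker default.
# Kafka broker (Confluent-style env names; adjust for your distribution)
- name: KAFKA_MESSAGE_MAX_BYTES
  value: "20971520"            # ~20 MB, illustrative
- name: KAFKA_REPLICA_FETCH_MAX_BYTES
  value: "20971520"
# GMS / emitter jobs: Spring relaxed binding maps this to the producer's max.request.size
- name: SPRING_KAFKA_PRODUCER_PROPERTIES_MAX_REQUEST_SIZE
  value: "20971520"
# existing topics may need a per-topic override, e.g.:
# kafka-configs.sh --bootstrap-server <broker:9092> --alter --entity-type topics \
#   --entity-name MetadataChangeLog_Versioned_v1 --add-config max.message.bytes=20971520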

    kind-lifeguard-14131

    03/30/2023, 2:18 PM
Hey everyone – I'm trying to ingest datasets from Azure Synapse into DataHub so that I can see the lineage, including the reports, pages, and source tables. At the moment the lineage appears and shows the Power BI dataset, but not the source table. If anyone has an approach, please let me know 🙂

    loud-hospital-37195

    03/30/2023, 2:30 PM
Hi, we are trying to deploy DataHub in Kubernetes (EKS); however, the pod prerequisites-neo4j-community-0 does not deploy and the other pods do not work. Do you know what is wrong?

    some-mouse-2910

    03/30/2023, 5:41 PM
Hi there, did a manual deletion break our DataHub ingestion table? Yesterday I rolled back my business-glossary ingestion in the UI, intending to ingest an updated version later. However, all subsequent ingestions remained in the state 'pending'. Since there was no further progress, I took the URN of the rolled-back ingestion and manually deleted it via the OpenAPI delete endpoint. Now all of our ingestions fail, and our /ingestion page renders a white background, throwing the following error in the console: Uncaught (in promise) TypeError: Cannot read properties of undefined (reading 'urn'). Ingesting datasets has stopped working. Do we have to nuke our database and re-ingest all of our sources? Do you see any way to recover from this? We want to roll out DataHub to our 20k employees, and I am currently ingesting APIs etc., but there seem to be some difficulties. Thank you for a response. The full stack trace:
    react-dom.production.min.js:216 TypeError: Cannot read properties of undefined (reading 'urn')
     at IngestionSourceTable.tsx:116:113
        at Array.map (<anonymous>)
        at YU (IngestionSourceTable.tsx:107:31)

    wide-optician-47025

    03/30/2023, 5:51 PM
Hello, I am trying to add glossary terms to columns. Even though I am able to search for certain columns, when I add glossary terms they end up at the dataset level, not at the column level.

    miniature-journalist-76345

    03/31/2023, 7:28 AM
Hi, team. DataHub version: v0.10.1. Deployment method: Docker. There is no "Definition" tab (or whatever it is called) in the dataset details, and the "See View Definition" button is not working. I've just updated my acryl-datahub and fully recreated the DataHub instance (I did docker system prune -a, and after that datahub docker quickstart). I can see no errors in the Docker logs. How can I find the cause of this issue?

    limited-library-89060

    03/31/2023, 10:15 AM
Hi, I'm trying to ingest custom test results (not assertions, but tests) into a dataset using the script below. The script executes successfully, and the test aspects are ingested into the GMS backend. But the problem is that the UI does not show any test details if all the tests pass; it only shows the test details if at least one test is failing. Any idea how to show all the test results when all the tests pass?

    white-grass-55842

    03/31/2023, 12:54 PM
Hi everyone, I'm new to DataHub. I need to disable the root user (datahub) via the Helm chart. I tried adding the following to my values file (with the extra-volume section commented out), but it had no effect:
datahub:
  useRootUser: false
  # extra volumes to create users
  # extraVolumes:
  #   - name: datahub-users
  #     secret:
  #       defaultMode: 0444
  #       secretName: datahub-users-secret
  # extraVolumeMounts:
  #   - name: datahub-users
  #     mountPath: /etc/datahub/plugins/frontend/auth/user.props
  #     subPath: user.props
Thanks

    abundant-airport-72599

    03/31/2023, 5:56 PM
Hey all, I recently enabled stateful ingestion on the Trino source. Today, when the job ran, our Trino cluster was down; the job was marked as "Failed" in the ingestion UI, but it went ahead and soft-deleted all Trino entities. My expectation would be that things would halt if the job fails, rather than continuing to soft-delete everything. Is this a known bug? We're on version 0.9.5 if that's relevant, but I haven't found anything about this in later release notes.
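As a guard against exactly this, newer CLI versions expose a threshold in the stateful-ingestion config that aborts stale-entity removal when too large a fraction of entities would be soft-deleted in one run. A sketch, assuming the Trino source; the fail_safe_threshold field name and semantics should be verified against your CLI version:
source:
  type: trino
  config:
    # ... connection details elided ...
    stateful_ingestion:
      enabled: true
      remove_stale_metadata: true
      # abort removal if more than 20% of previously-seen entities
      # would be soft-deleted in a single run (value illustrative)
      fail_safe_threshold: 20.0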

    shy-dog-84302

    04/01/2023, 3:15 AM
Hi! The DataHub Java client library (io.acryl:datahub-client:0.10.1) reports security vulnerabilities. Is there any plan or workaround to fix this?
    datahub-client-0.10.1.jar/META-INF/maven/org.apache.avro/avro/pom.xml (pkg:maven/org.apache.avro/avro@1.7.7, cpe:2.3:a:apache:avro:1.7.7:*:*:*:*:*:*:*) : CVE-2021-43045
    datahub-client-0.10.1.jar/META-INF/maven/org.apache.commons/commons-text/pom.xml (pkg:maven/org.apache.commons/commons-text@1.8, cpe:2.3:a:apache:commons_text:1.8:*:*:*:*:*:*:*) : CVE-2022-42889

    future-florist-65080

    04/02/2023, 10:48 PM
Hi all, we have SSO set up on our DataHub account. When new users sign in, they are assigned No Role. Is it possible to have all users default to Reader?

    few-sunset-43876

    04/03/2023, 4:29 AM
Hi folks, I recently ingested metadata using a connector. After creating the ingestion source, nothing showed up, even though the success notification appeared. I have run docker system prune and restarted the Elasticsearch and GMS containers to clean up. docker system df:
    TYPE            TOTAL     ACTIVE    SIZE      RECLAIMABLE
    Images          25        8         16.19GB   7.369GB (45%)
    Containers      8         8         929.7MB   0B (0%)
    Local Volumes   147       9         26.47GB   24.84GB (93%)
    Build Cache     25        0         0B        0B
    The logs from Elasticsearch:
    {"type": "server", "timestamp": "2023-04-03T04:25:30,850Z", "level": "WARN", "component": "o.e.c.r.a.DiskThresholdMonitor", "cluster.name": "docker-cluster", "node.name": "elasticsearch", "message": "flood stage disk watermark [95%] exceeded on [CA9sNvGOSr2sNBFRMst0JQ][elasticsearch][/usr/share/elasticsearch/data/nodes/0] free: 9.3gb[1.9%], all indices on this node will be marked read-only", "cluster.uuid": "M3xdtmw8TFCGL_RqIP650Q", "node.id": "CA9sNvGOSr2sNBFRMst0JQ"  }
    {"type": "server", "timestamp": "2023-04-03T04:26:00,855Z", "level": "WARN", "component": "o.e.c.r.a.DiskThresholdMonitor", "cluster.name": "docker-cluster", "node.name": "elasticsearch", "message": "flood stage disk watermark [95%] exceeded on [CA9sNvGOSr2sNBFRMst0JQ][elasticsearch][/usr/share/elasticsearch/data/nodes/0] free: 9.3gb[1.9%], all indices on this node will be marked read-only", "cluster.uuid": "M3xdtmw8TFCGL_RqIP650Q", "node.id": "CA9sNvGOSr2sNBFRMst0JQ"  }
    {"type": "server", "timestamp": "2023-04-03T04:26:30,857Z", "level": "WARN", "component": "o.e.c.r.a.DiskThresholdMonitor", "cluster.name": "docker-cluster", "node.name": "elasticsearch", "message": "flood stage disk watermark [95%] exceeded on [CA9sNvGOSr2sNBFRMst0JQ][elasticsearch][/usr/share/elasticsearch/data/nodes/0] free: 9.3gb[1.9%], all indices on this node will be marked read-only", "cluster.uuid": "M3xdtmw8TFCGL_RqIP650Q", "node.id": "CA9sNvGOSr2sNBFRMst0JQ"  }
    Logs from gms:
    04:26:25.397 [ThreadPoolTaskExecutor-1] INFO  c.l.m.k.h.i.IngestionSchedulerHook:56 - Received UPSERT to Ingestion Source. Rescheduling the source (if applicable). urn: urn:li:dataHubIngestionSource:3b33a8fc-b106-460f-b90b-3ca816c77910, key: {value=ByteString(length=45,bytes=7b226964...3130227d), contentType=application/json}.
    04:26:25.398 [ThreadPoolTaskExecutor-1] INFO  c.d.m.ingestion.IngestionScheduler:105 - Unscheduling ingestion source with urn urn:li:dataHubIngestionSource:3b33a8fc-b106-460f-b90b-3ca816c77910
    04:26:25.399 [ThreadPoolTaskExecutor-1] INFO  c.d.m.ingestion.IngestionScheduler:138 - Scheduling next execution of Ingestion Source with urn urn:li:dataHubIngestionSource:3b33a8fc-b106-460f-b90b-3ca816c77910. Schedule: 0 1 * * *
    04:26:25.400 [ThreadPoolTaskExecutor-1] INFO  c.d.m.ingestion.IngestionScheduler:167 - Scheduled next execution of Ingestion Source with urn urn:li:dataHubIngestionSource:3b33a8fc-b106-460f-b90b-3ca816c77910 in 48814600ms.
    04:26:27.448 [ThreadPoolTaskExecutor-1] INFO  c.l.m.k.t.DataHubUsageEventTransformer:74 - Invalid event type: CreateIngestionSourceEvent
    04:26:27.449 [ThreadPoolTaskExecutor-1] WARN  c.l.m.k.DataHubUsageEventsProcessor:56 - Failed to apply usage events transform to record: {"type":"CreateIngestionSourceEvent","sourceType":"bigquery","interval":"0 1 * * *","actorUrn":"urn:li:corpuser:datahub","timestamp":1680495987418,"date":"Mon Apr 03 2023 11:26:27 GMT+0700 (Indochina Time)","userAgent":"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36","browserId":"c7e9ab30-25a7-4874-a4e6-f2c694501ccc"}
    04:26:27.474 [qtp522764626-22] INFO  c.l.m.r.entity.AspectResource:143 - INGEST PROPOSAL proposal: {aspectName=dataHubExecutionRequestResult, entityKeyAspect={contentType=application/json, value=ByteString(length=46,bytes=7b226964...3463227d)}, entityType=dataHubExecutionRequest, aspect={contentType=application/json, value=ByteString(length=51,bytes=7b227374...3437307d)}, changeType=UPSERT}
    04:26:27.485 [pool-12-thread-1] INFO  c.l.m.filter.RestliLoggingFilter:55 - POST /aspects?action=ingestProposal - ingestProposal - 200 - 11ms
    04:26:27.495 [ThreadPoolTaskExecutor-1] INFO  c.l.m.k.t.DataHubUsageEventTransformer:74 - Invalid event type: ExecuteIngestionSourceEvent
    04:26:27.495 [ThreadPoolTaskExecutor-1] WARN  c.l.m.k.DataHubUsageEventsProcessor:56 - Failed to apply usage events transform to record: {"type":"ExecuteIngestionSourceEvent","actorUrn":"urn:li:corpuser:datahub","timestamp":1680495987487,"date":"Mon Apr 03 2023 11:26:27 GMT+0700 (Indochina Time)","userAgent":"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36","browserId":"c7e9ab30-25a7-4874-a4e6-f2c694501ccc"}
    04:26:28.968 [I/O dispatcher 1] ERROR c.l.m.s.e.update.BulkListener:25 - Failed to feed bulk request. Number of events: 4 Took time ms: -1 Message: failure in bulk execution:
    [0]: index [datahubingestionsourceindex_v2], type [_doc], id [urn%3Ali%3AdataHubIngestionSource%3A3b33a8fc-b106-460f-b90b-3ca816c77910], message [ElasticsearchException[Elasticsearch exception [type=cluster_block_exception, reason=index [datahubingestionsourceindex_v2] blocked by: [TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block];]]]
    [1]: index [system_metadata_service_v1], type [_doc], id [D9J9LxuD6yBN4lJYY0FaMg==], message [ElasticsearchException[Elasticsearch exception [type=cluster_block_exception, reason=index [system_metadata_service_v1] blocked by: [TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block];]]]
    [2]: index [datahubingestionsourceindex_v2], type [_doc], id [urn%3Ali%3AdataHubIngestionSource%3A3b33a8fc-b106-460f-b90b-3ca816c77910], message [ElasticsearchException[Elasticsearch exception [type=cluster_block_exception, reason=index [datahubingestionsourceindex_v2] blocked by: [TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block];]]]
    [3]: index [system_metadata_service_v1], type [_doc], id [F7B9+ecZWomYYe5LYipdcA==], message [ElasticsearchException[Elasticsearch exception [type=cluster_block_exception, reason=index [system_metadata_service_v1] blocked by: [TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block];]]]
    04:26:29.528 [qtp522764626-17] INFO  c.l.m.r.entity.AspectResource:143 - INGEST PROPOSAL proposal: {aspectName=dataHubExecutionRequestResult, entityKeyAspect={contentType=application/json, value=ByteString(length=46,bytes=7b226964...3463227d)}, entityType=dataHubExecutionRequest, aspect={contentType=application/json, value=ByteString(length=389,bytes=7b227374...3437307d)}, changeType=UPSERT}
    04:26:29.555 [pool-12-thread-1] INFO  c.l.m.filter.RestliLoggingFilter:55 - POST /aspects?action=ingestProposal - ingestProposal - 200 - 27ms
    04:26:29.749 [qtp522764626-17] INFO  c.l.m.r.entity.AspectResource:93 - GET ASPECT urn: urn:li:telemetry:clientId aspect: telemetryClientId version: 0
    04:26:29.752 [pool-12-thread-1] INFO  c.l.m.filter.RestliLoggingFilter:55 - GET /aspects/urn%3Ali%3Atelemetry%3AclientId?aspect=telemetryClientId&version=0 - get - 200 - 3ms
    04:26:31.566 [qtp522764626-23] INFO  c.l.m.r.entity.AspectResource:143 - INGEST PROPOSAL proposal: {aspectName=dataHubExecutionRequestResult, entityKeyAspect={contentType=application/json, value=ByteString(length=46,bytes=7b226964...3463227d)}, entityType=dataHubExecutionRequest, aspect={contentType=application/json, value=ByteString(length=654,bytes=7b227374...3437307d)}, changeType=UPSERT}
    04:26:31.986 [I/O dispatcher 1] ERROR c.l.m.s.e.update.BulkListener:25 - Failed to feed bulk request. Number of events: 8 Took time ms: -1 Message: failure in bulk execution:
    [0]: index [datahubexecutionrequestindex_v2], type [_doc], id [urn%3Ali%3AdataHubExecutionRequest%3A67e66178-9cf9-40ac-8496-ac26761ba24c], message [ElasticsearchException[Elasticsearch exception [type=cluster_block_exception, reason=index [datahubexecutionrequestindex_v2] blocked by: [TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block];]]]
    [1]: index [datahubexecutionrequestindex_v2], type [_doc], id [urn%3Ali%3AdataHubExecutionRequest%3A67e66178-9cf9-40ac-8496-ac26761ba24c], message [ElasticsearchException[Elasticsearch exception [type=cluster_block_exception, reason=index [datahubexecutionrequestindex_v2] blocked by: [TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block];]]]
    [2]: index [graph_service_v1], type [_doc], id [t/DPfLYDIXzDBvDNKYzsBA==], message [ElasticsearchException[Elasticsearch exception [type=cluster_block_exception, reason=index [graph_service_v1] blocked by: [TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block];]]]
    [3]: index [system_metadata_service_v1], type [_doc], id [mK6kjzTpk8iRRsx5Eg7tJA==], message [ElasticsearchException[Elasticsearch exception [type=cluster_block_exception, reason=index [system_metadata_service_v1] blocked by: [TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block];]]]
    [4]: index [system_metadata_service_v1], type [_doc], id [yss7L2uhB/qEk71o8Vkc8w==], message [ElasticsearchException[Elasticsearch exception [type=cluster_block_exception, reason=index [system_metadata_service_v1] blocked by: [TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block];]]]
    [5]: index [datahubexecutionrequestindex_v2], type [_doc], id [urn%3Ali%3AdataHubExecutionRequest%3A67e66178-9cf9-40ac-8496-ac26761ba24c], message [ElasticsearchException[Elasticsearch exception [type=cluster_block_exception, reason=index [datahubexecutionrequestindex_v2] blocked by: [TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block];]]]
    [6]: index [datahubexecutionrequestindex_v2], type [_doc], id [urn%3Ali%3AdataHubExecutionRequest%3A67e66178-9cf9-40ac-8496-ac26761ba24c], message [ElasticsearchException[Elasticsearch exception [type=cluster_block_exception, reason=index [datahubexecutionrequestindex_v2] blocked by: [TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block];]]]
    [7]: index [system_metadata_service_v1], type [_doc], id [QVHUM/x8ssrKDaAcXWOgng==], message [ElasticsearchException[Elasticsearch exception [type=cluster_block_exception, reason=index [system_metadata_service_v1] blocked by: [TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block];]]]
    04:26:31.987 [pool-12-thread-1] INFO  c.l.m.filter.RestliLoggingFilter:55 - POST /aspects?action=ingestProposal - ingestProposal - 200 - 421ms
I run DataHub locally. Is this a resource issue? How can I check and solve it? Thanks in advance!
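The "flood stage disk watermark [95%] exceeded" warnings above are the root cause: Elasticsearch has marked all indices read-only because the disk is nearly full, so GMS writes are rejected with TOO_MANY_REQUESTS. The usual remedy is to free disk space (the ~25 GB of reclaimable local volumes is a good start), optionally relax the watermarks, and then clear the read-only block. A sketch using standard Elasticsearch settings; verify them against your ES version:
# elasticsearch.yml (or equivalent environment overrides)
cluster.routing.allocation.disk.watermark.low: 90%
cluster.routing.allocation.disk.watermark.high: 94%
cluster.routing.allocation.disk.watermark.flood_stage: 97%
# after freeing space, the block must be cleared explicitly, e.g.:
# curl -X PUT 'localhost:9200/_all/_settings' -H 'Content-Type: application/json' \
#      -d '{"index.blocks.read_only_allow_delete": null}'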

    victorious-planet-2053

    04/03/2023, 10:12 AM
Hello! I'm trying to run datahub docker quickstart and receive this error:
Unable to run quickstart - the following issues were detected:
- datahub-frontend-react is running by not yet healthy
- datahub-gms is still starting
- broker is not running
- elasticsearch-setup is still running
- elasticsearch is running by not yet healthy
Can someone help me, please?

    helpful-quill-60747

    04/03/2023, 10:47 AM
Hi all. Please advise how to run DataHub in a production environment without quickstart. Unfortunately, if I run it via docker-compose up, datahub-gms can't start on port 8080 for some reason.

    stale-minister-18858

    04/03/2023, 12:39 PM
Hello, a question about SSO connections: is DataHub able to do an SSO connection with PingFederate?
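DataHub's frontend authenticates via generic OIDC, and PingFederate can act as an OIDC provider, so the connection is typically configured with the standard datahub-frontend OIDC variables. A sketch with placeholder values; the variable names follow the DataHub OIDC docs, while the PingFederate discovery URL is an assumption to confirm with your IdP:
# datahub-frontend environment (Helm values or docker-compose)
- name: AUTH_OIDC_ENABLED
  value: "true"
- name: AUTH_OIDC_CLIENT_ID
  value: "<client-id-from-pingfederate>"
- name: AUTH_OIDC_CLIENT_SECRET
  value: "<client-secret>"
- name: AUTH_OIDC_DISCOVERY_URI
  value: "https://sso.example.com/.well-known/openid-configuration"  # placeholder
- name: AUTH_OIDC_BASE_URL
  value: "https://datahub.example.com"  # placeholder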

    agreeable-belgium-70840

    04/03/2023, 3:02 PM
Hello, the new version of DataHub is timing out when trying to connect to MSK. I have the exact same configuration as in previous versions. The only difference I can spot is this:
client.dns.lookup = use_all_dns_ips
That used to be:
client.dns.lookup = default
Could that be the cause? How can I change it? I tried passing the env variable below, but there wasn't any change:
- name: KAFKA_CLIENT_DNS_LOOKUP
  value: "default"
Moreover, the old one used to be:
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
How can I add TLSv1 support to the new one? Regards, Yianni
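Since GMS builds its Kafka clients from Spring properties, arbitrary client properties can usually be injected with the SPRING_KAFKA_PROPERTIES_* convention rather than a bespoke KAFKA_* variable. A sketch; the relaxed-binding mapping is standard Spring Boot, but confirm the resulting client config in the GMS startup logs:
# datahub-gms (and mce/mae consumer) environment; values mirror the old defaults
- name: SPRING_KAFKA_PROPERTIES_CLIENT_DNS_LOOKUP
  value: "default"
- name: SPRING_KAFKA_PROPERTIES_SSL_ENABLED_PROTOCOLS
  value: "TLSv1.2,TLSv1.1,TLSv1"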

    calm-balloon-31412

    04/03/2023, 8:37 PM
    👋 Hello! I've been running into this error for a while and it's blocking my development 😕
ANTLR Tool version 4.5 used for code generation does not match the current runtime version 4.8
ANTLR Runtime version 4.5 used for parser compilation does not match the current runtime version 4.8
ANTLR Tool version 4.5 used for code generation does not match the current runtime version 4.8
ANTLR Runtime version 4.5 used for parser compilation does not match the current runtime version 4.8
2023/03/20 18:54:37 Command exited with error: exit status 1
when I try to replace the datahub-gms container with local code:
    (cd docker && COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker-compose -p datahub -f docker-compose-without-neo4j.yml -f docker-compose-without-neo4j.override.yml -f docker-compose.dev.yml up -d --no-deps --force-recreate datahub-gms)
Any recommendations? I've looked through other threads but haven't found anything useful.

    agreeable-table-54007

    04/04/2023, 9:14 AM
Hello all! Hope you are doing well. I'm trying to run datahub docker quickstart (Windows VM, DataHub version 0.10.1) and receive these errors:
Unable to run quickstart - the following issues were detected:
- datahub-upgrade is still running
- datahub-gms is still starting
- elasticsearch-setup is still running
- mysql-setup is still running
- elasticsearch is running by not yet healthy
Four months ago, when I first tried DataHub, it worked; now it doesn't, so I did a nuke/prune and went through the quickstart guide again, but got the errors above. Can someone help me, please?
tmphadr2jpd.log

    quick-pizza-8906

    04/04/2023, 2:26 PM
Hello, my team and I have started to experience a worrying problem with GMS. We have a very particular load pattern: around 1000 GraphQL requests fetching dataset details, with each HTTP request followed by an MCE emitted to the Kafka queue modifying the dataset we searched for. For some of the requests, the ingress in front of GMS (nginx) returned a 502 Bad Gateway, and we believe GMS simply did not accept connections. The problem does not seem to be related to the number of GMS instances we run. The overall load as reported by Prometheus at a 2m rate was 10 req/s, so not much. My question is: is there any way to control GMS behavior under load? Are there parameters we could try to tweak?

    wonderful-quill-11255

    04/04/2023, 5:45 PM
Hello, people. We are trying to upgrade to v0.10.1 but are running into errors when trying to ingest from AWS Glue. I was wondering if I could get any tips; the error message we see follows in the 🧵.

    clever-spring-20900

    04/04/2023, 11:05 PM
Hi, I am struggling to ingest a business glossary with a recipe. It works locally, but it won't work in prod with SSL; I am getting a certificate error: requests.exceptions.SSLError: HTTPSConnectionPool(host='xxxxx', port=443): Max retries exceeded with url: /config (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1129)'))) Can you please help me resolve this issue?
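The CLI's datahub-rest sink uses Python requests under the hood, so the common fix for a self-signed chain is to point requests at the internal CA bundle before running the recipe. A sketch; REQUESTS_CA_BUNDLE is standard requests behavior, while the file paths and host are placeholders:
# export REQUESTS_CA_BUNDLE=/etc/ssl/certs/internal-ca-bundle.pem   # set before `datahub ingest`
source:
  type: datahub-business-glossary
  config:
    file: ./business_glossary.yml        # placeholder path
sink:
  type: datahub-rest
  config:
    server: https://your-gms-host:443    # placeholder host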

    astonishing-dusk-99990

    04/05/2023, 2:11 AM
Hi, I have a problem regarding Kafka. Currently I'm deploying using the Helm chart on Kubernetes, and after I run helm upgrade on the chart, two of my pods error out: prerequisites-kafka and kafka-setup-job. I've attached screenshots and logs for the two pods in case anyone knows how to solve it. Thank you. Notes: • Image: DataHub v0.10.0
prerequisites-kafka-0.log
datahub-kafka-setup-job-6mtsk.log

    busy-analyst-35820

    04/05/2023, 7:02 AM
Hi Team, we recently upgraded DataHub from v0.9.2 to v0.10.0. Post-upgrade we see a few changes in the UI: a DataFlow entity now shows zero tasks under it even when it has multiple tasks, whereas each task shows its parent DataFlow entity as expected. Screenshots attached. We use all default settings. Can you please help and suggest what should be done? Is there any change to be made in the emitter; if so, could you please suggest what needs to be handled specifically for v0.10.0? cc: @melodic-match-38516 @tall-butcher-30509