wonderful-quill-11255
02/25/2021, 8:07 AM
incalculable-ocean-74010
02/26/2021, 5:19 PM
gentle-exabyte-43102
02/26/2021, 8:30 PM
$ ./docker/quickstart.sh
Pulling elasticsearch ... done
Pulling mysql ... done
Pulling elasticsearch-setup ... done
Pulling kibana ... done
Pulling neo4j ... done
Pulling zookeeper ... done
Pulling broker ... done
Pulling schema-registry ... done
Pulling schema-registry-ui ... done
Pulling kafka-setup ... done
Pulling datahub-mae-consumer ... done
Pulling kafka-rest-proxy ... done
Pulling kafka-topics-ui ... done
Pulling datahub-gms ... done
Pulling datahub-mce-consumer ... done
Pulling datahub-frontend ... done
Building elasticsearch-setup
Sending build context to Docker daemon 27.78MB
Step 1/10 : ARG APP_ENV=prod
Step 2/10 : FROM jwilder/dockerize:0.6.1 AS base
---> 849596ab86ff
Step 3/10 : RUN apk add --no-cache curl jq
---> Running in d6b9d0968be4
fetch http://dl-cdn.alpinelinux.org/alpine/v3.6/main/x86_64/APKINDEX.tar.gz
WARNING: Ignoring http://dl-cdn.alpinelinux.org/alpine/v3.6/main/x86_64/APKINDEX.tar.gz: temporary error (try again later)
fetch http://dl-cdn.alpinelinux.org/alpine/v3.6/community/x86_64/APKINDEX.tar.gz
WARNING: Ignoring http://dl-cdn.alpinelinux.org/alpine/v3.6/community/x86_64/APKINDEX.tar.gz: temporary error (try again later)
ERROR: unsatisfiable constraints:
curl (missing):
required by: world[curl]
jq (missing):
required by: world[jq]
The command '/bin/sh -c apk add --no-cache curl jq' returned a non-zero code: 2
curved-magazine-23582
03/01/2021, 2:56 AM
big-carpet-38439
03/01/2021, 7:03 PM
mammoth-bear-12532
acceptable-architect-70237
03/02/2021, 4:39 PM
data replay strategy. For example, in our case we need to calculate a dataset's data quality. The data quality is calculated based on the aspects of a dataset. Since all datasets are already in the datastores (MySQL, Neo4j, and Elasticsearch), we need a way to pull the data and do the calculation. Right now we are pulling data from MySQL using a Python script. Do you guys have some suggestions?
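A minimal sketch of that kind of pull, assuming the quickstart defaults (MySQL on localhost:3306, database/user/password all datahub) and the metadata_aspect table layout where version = 0 holds the latest value of each aspect; the table/column names and the scoring at the end are placeholder assumptions, adjust them to your deployment:
```
import json
import pymysql  # assumed driver; any MySQL client works

# Quickstart defaults; adjust to your deployment.
conn = pymysql.connect(host="localhost", port=3306,
                       user="datahub", password="datahub", database="datahub")

# metadata_aspect is the assumed GMS table; version = 0 holds the latest value
# of each aspect, with the serialized aspect JSON in the metadata column.
query = """
    SELECT urn, aspect, metadata
    FROM metadata_aspect
    WHERE urn LIKE 'urn:li:dataset:%'
      AND version = 0
"""

aspects_by_dataset = {}
with conn.cursor() as cur:
    cur.execute(query)
    for urn, aspect, metadata in cur.fetchall():
        aspects_by_dataset.setdefault(urn, {})[aspect] = json.loads(metadata)
conn.close()

# Placeholder scoring: fraction of the aspects we care about that are present.
wanted = ["com.linkedin.dataset.DatasetProperties", "com.linkedin.schema.SchemaMetadata"]
for urn, aspects in aspects_by_dataset.items():
    score = sum(a in aspects for a in wanted) / len(wanted)
    print(f"{urn}\t{score:.2f}")
```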
incalculable-ocean-74010
03/02/2021, 5:29 PM
nutritious-bird-77396
03/03/2021, 12:04 AM
MLModels Client where the Snapshot aspects array is empty, here - https://github.com/linkedin/datahub/blob/master/gms/impl/src/main/java/com/linkedin/metadata/resources/ml/MLModels.java#L121
Any clues on where the issue might be?
gentle-exabyte-43102
03/04/2021, 12:11 AM
datahub, browsing to /browse/datasets
I see "An error occurred. Please try again shortly." and in the console a request to api/v2/browse?type=dataset&count=100&start=0 is a 400 with "Bad Request. type parameter can not be null"
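One quick way to see whether the type parameter actually reaches the backend is to replay the same request outside the UI. A minimal sketch, assuming the quickstart frontend on localhost:9001 and a Play session cookie copied from the browser (both assumptions, adjust to your setup):
```
import requests  # assumed; any HTTP client works

BASE = "http://localhost:9001"  # assumed quickstart frontend address
# Copy the session cookie from your browser after logging in (cookie name assumed).
cookies = {"PLAY_SESSION": "<value from browser>"}

resp = requests.get(f"{BASE}/api/v2/browse",
                    params={"type": "dataset", "count": 100, "start": 0},
                    cookies=cookies)
print(resp.status_code)   # a 400 here reproduces the issue outside the UI
print(resp.text[:500])
```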
curved-crayon-1929
03/04/2021, 7:28 AM
./docker/quickstart.sh
it got stuck as below and keeps repeating the same thing, can someone help me?
nutritious-bird-77396
03/04/2021, 10:15 PM
high-hospital-85984
03/05/2021, 11:22 AM
billions-scientist-31934
03/06/2021, 1:26 PM
mammoth-bear-12532
incalculable-ocean-74010
03/10/2021, 9:17 PM
some-crayon-90964
03/11/2021, 5:48 PM
mammoth-bear-12532
master! 🎉
• Please take it for a spin and let @big-carpet-38439 know if you run into any issues.
• We've tested it with Google SSO and Okta.
• Docs here: https://datahubproject.io/docs/how/configure-oidc-react
gentle-exabyte-43102
03/12/2021, 7:49 PM
urn:li:dataset:(urn:li:dataPlatform:{platform},{dataset_name},PROD)
where platform seems to be an enum, something like hive, hdfs, kafka, mysql, etc.
is it possible to specify other values for platform?
can i supply whatever value i want? it seems like i can't, i'm getting pegasus errors
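For reference, a minimal sketch of how that URN string is assembled; the platform segment is itself a dataPlatform URN, and GMS appears to validate it against the platforms it knows about, which would explain the Pegasus errors for arbitrary values. The platform and dataset names below are only examples:
```
def make_dataset_urn(platform: str, name: str, env: str = "PROD") -> str:
    return f"urn:li:dataset:(urn:li:dataPlatform:{platform},{name},{env})"

print(make_dataset_urn("hive", "logging_events"))
# urn:li:dataset:(urn:li:dataPlatform:hive,logging_events,PROD)

# A platform string that GMS doesn't know about is expected to fail
# validation, which matches the Pegasus errors described above.
print(make_dataset_urn("my-custom-db", "example.table"))
```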
incalculable-ocean-74010
03/15/2021, 4:40 PM
astonishing-yak-92682
03/15/2021, 4:46 PM
curved-magazine-23582
03/17/2021, 4:17 AM
worried-flower-88750
03/17/2021, 10:24 PM
mammoth-bear-12532
acoustic-printer-83045
03/21/2021, 10:29 PM
./docker/quickstart.sh
When I try to fire up elasticsearch I get this (snipped) log:
elasticsearch | {"type": "server", "timestamp": "2021-03-21T22:25:21,301Z", "level": "ERROR", "component": "o.e.b.ElasticsearchUncaughtExceptionHandler", "cluster.name": "docker-cluster", "node.name": "elasticsearch", "message": "uncaught exception in thread [main]",
elasticsearch | "stacktrace": ["org.elasticsearch.bootstrap.StartupException: java.lang.IllegalStateException: failed to obtain node locks, tried [[/usr/share/elasticsearch/data]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?",
elasticsearch | "at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:174) ~[elasticsearch-7.9.3.jar:7.9.3]",
elasticsearch | "at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:161) ~[elasticsearch-7.9.3.jar:7.9.3]",
elasticsearch | "at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-7.9.3.jar:7.9.3]",
elasticsearch | "at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:127) ~[elasticsearch-cli-7.9.3.jar:7.9.3]",
elasticsearch | "at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-7.9.3.jar:7.9.3]",
elasticsearch | "at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:126) ~[elasticsearch-7.9.3.jar:7.9.3]",
elasticsearch | "at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-7.9.3.jar:7.9.3]",
elasticsearch | "Caused by: java.lang.IllegalStateException: failed to obtain node locks, tried [[/usr/share/elasticsearch/data]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?",
I don't think this is caused by resource contention but I could be wrong.
Thanks!
high-hospital-85984
03/22/2021, 10:58 AM
incalculable-ocean-74010
03/23/2021, 2:29 PM
some-crayon-90964
03/23/2021, 2:39 PM
some-crayon-90964
03/23/2021, 2:39 PM
mammoth-bear-12532
dbt source last night. Thanks to great work by @acoustic-printer-83045! Please give it a spin in your dbt environment and let us know how it works for you! (https://datahubproject.io/docs/metadata-ingestion#dbt-dbt)