busy-furniture-10879
12/14/2022, 2:42 PM
gentle-camera-33498
07/20/2022, 7:11 PM
datahub delete --env PROD --entity_type assertion --hard
This will permanently delete data from DataHub. Do you want to continue? [y/N]: y
[2022-07-20 16:01:44,919] INFO {datahub.cli.delete_cli:234} - datahub configured with https://------
[2022-07-20 16:01:46,660] INFO {datahub.cli.delete_cli:247} - Filter matched 0 entities. Sample: []
This will delete 0 entities. Are you sure? [y/N]: N
brainy-piano-85560
12/14/2022, 2:56 PM
colossal-hairdresser-6799
12/14/2022, 3:29 PM
colossal-hairdresser-6799
12/14/2022, 6:56 PM
salmon-area-51650
12/15/2022, 7:05 AM
v0.9.3, using lookml ingestion. This is the error:
[2022-12-15 06:56:05,712] ERROR {datahub.entrypoints:187} - Failed to configure source (lookml): 1 validation error for LookMLSourceConfig
base_folder
base_folder is not provided. Neither has a github deploy_key or deploy_key_file been provided (type=value_error)
However, deploy_key is provided. This is my configuration:
source:
  type: lookml
  config:
    parse_table_names_from_sql: false
    github_info:
      deploy_key: '${DEPLOY_KEY}'
      repo: 'XXXXXXXX'
    api:
      base_url: 'https://XXXXX.eu.looker.com'
      client_secret: '${LOOKER_CLIENT_SECRET}'
      client_id: XXXXXXXXXXX
    project_name: my_project
pipeline_name: 'lookerml_production'
sink:
  type: "datahub-rest"
  config:
    server: "http://datahub-datahub-gms:8080"
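(Editor's note: the validation error fires when neither base_folder nor a resolvable deploy key is seen, so one guess is that the ${DEPLOY_KEY} environment variable is empty at ingestion time. As an alternative, the config also accepts a local checkout via base_folder; a minimal sketch of that variant, with placeholder paths — the base_folder option name comes from the LookMLSourceConfig error above, everything else should be verified against your DataHub version:)

```yaml
source:
  type: lookml
  config:
    # Placeholder path: point at a local clone of the LookML repo
    # instead of providing a GitHub deploy key.
    base_folder: /path/to/local/lookml/checkout
    api:
      base_url: 'https://XXXXX.eu.looker.com'
      client_id: XXXXXXXXXXX
      client_secret: '${LOOKER_CLIENT_SECRET}'
    project_name: my_project
```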
Any clue? Thanks in advance!
magnificent-lock-58916
12/15/2022, 8:46 AM
thankful-diamond-10319
12/15/2022, 2:23 PM
Failed, along with this extremely long error log. I am unable to identify the errors due to the length of the log, and was wondering whether any of them are critical and how to resolve them.
flat-agency-53385
12/15/2022, 9:58 PM
null. Has anyone run into this before?
Here is my query:
query table_with_terms($urn: String!) {
  dataset(urn: $urn) {
    urn
    type
    name
    schemaMetadata(version: 0) {
      fields {
        fieldPath
        label
        description
        glossaryTerms {
          terms {
            associatedUrn
            term {
              hierarchicalName
              properties {
                name
              }
            }
          }
        }
      }
    }
  }
}
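(Editor's note: for anyone wanting to reproduce a query like the one above outside the UI, it can be POSTed to DataHub's GraphQL endpoint. A standard-library-only sketch; the host, token, and example urn are placeholders, and the /api/graphql path and Bearer-token header follow the usual DataHub pattern but should be verified against your deployment:)

```python
import json
import urllib.request

# Trimmed version of the query above.
QUERY = """
query table_with_terms($urn: String!) {
  dataset(urn: $urn) {
    urn
    name
    schemaMetadata(version: 0) {
      fields {
        fieldPath
        glossaryTerms { terms { associatedUrn } }
      }
    }
  }
}
"""

def build_request(urn: str) -> dict:
    """Build the GraphQL payload for a single dataset urn."""
    return {"query": QUERY, "variables": {"urn": urn}}

def fetch(gms_url: str, token: str, urn: str) -> dict:
    """POST the query to the GraphQL endpoint (placeholder URL and token)."""
    req = urllib.request.Request(
        f"{gms_url}/api/graphql",
        data=json.dumps(build_request(urn)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",  # personal access token
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage (needs a reachable GMS, so not executed here):
# result = fetch("http://datahub-datahub-gms:8080", "<token>",
#                "urn:li:dataset:(urn:li:dataPlatform:hive,logging_events,PROD)")
```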
broad-article-1339
12/15/2022, 10:40 PM
The env variable is ignored. Are there any known workarounds?
witty-butcher-82399
12/16/2022, 10:51 AM
• Ability to define Metadata Policies against multiple resources scoped to particular “Containers” (e.g. a “schema”, “database”, or “collection”)
Just going higher in the hierarchy, up to the platform instance level. In the current state, I was thinking of solving this with the resource_urn criteria: https://datahubproject.io/docs/authorization/policies#resources
Does that criteria support operators other than EQUALS, such as starts-with, contains, or even regexp? In the UI this is definitely not possible; is it possible via policy as code?
Second question is about applying policies to owners. The docs say: “Whether this policy should be applied to owners of the Metadata Asset. If true, those who are marked as owners of a Metadata Asset, either directly or indirectly via a Group, will have the selected privileges.” Can this be restricted to a particular type of ownership? Thanks
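(Editor's note: for the policy-as-code angle, a rough sketch of what a policy's resource filter and actor section look like in JSON, based on the linked policies doc. The field and condition names such as RESOURCE_URN and EQUALS have changed across DataHub versions, so treat every identifier here as an assumption to verify against your release:)

```json
{
  "resources": {
    "filter": {
      "criteria": [
        {
          "field": "RESOURCE_URN",
          "condition": "EQUALS",
          "values": [
            "urn:li:dataset:(urn:li:dataPlatform:snowflake,mydb.myschema.mytable,PROD)"
          ]
        }
      ]
    }
  },
  "actors": {
    "resourceOwners": true,
    "allUsers": false,
    "allGroups": false,
    "users": [],
    "groups": []
  }
}
```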
best-umbrella-88325
12/16/2022, 11:27 AM
WARNING: Some service image(s) must be built from source by running:
docker compose build %s elasticsearch-setup datahub-frontend-react kafka-setup
Error response from daemon: manifest for linkedin/datahub-elasticsearch-setup:debug not found: manifest unknown: manifest unknown
Looking at the output of docker images, we can see these images:
$ docker images | grep debug
linkedin/datahub-frontend-react debug 912fbfb19f09 2 hours ago 182MB
linkedin/datahub-kafka-setup debug 796dcb017865 2 hours ago 673MB
linkedin/datahub-elasticsearch-setup debug 482e6a255771 2 hours ago 23.1MB
linkedin/datahub-gms debug 2d51843ad479 10 months ago 292MB
Can anyone help us out on this? Any help would be appreciated. Thanks
cold-father-66356
12/16/2022, 12:50 PM
aloof-iron-76856
12/16/2022, 1:17 PM
...
ConfigurationError: You seem to have connected to the frontend instead of the GMS endpoint. The rest emitter should connect to DataHub GMS (usually <datahub-gms-host>:8080) or Frontend GMS API (usually <frontend>:9002/api/gms)
...
We tried changing addresses as prompted - no luck.
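(Editor's note: datahub init writes its connection settings to ~/.datahubenv. For reference, a sketch of the usual shape of that file; the exact keys can differ by CLI version, and the values below are placeholders. The key point from the error is that server must point at GMS, not the frontend:)

```yaml
gms:
  server: http://datahub-gms-host:8080  # GMS endpoint, not the :9002 frontend UI
  token: ''  # personal access token, only if Metadata Service Authentication is enabled
```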
Actual datahub init:
best-market-29539
12/16/2022, 3:24 PM
wonderful-hair-89448
12/16/2022, 5:05 PM
wonderful-hair-89448
12/16/2022, 5:58 PM
bitter-lawyer-49179
12/19/2022, 8:53 AM
busy-analyst-35820
12/19/2022, 10:18 AM
breezy-portugal-43538
12/19/2022, 10:42 AM
TrainingDataClass(
    trainingData=training_data
)
But it doesn't work; my training_data is just a list of dictionaries containing key-value pairs like in this example below:
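(Editor's note: without the surrounding code it's hard to be sure, but TrainingDataClass most likely expects a list of generated record objects rather than raw dicts. A pure-Python sketch of the entry shape used in the linked bootstrap file; the dataset/motivation/preProcessing keys follow that example, while the BaseDataClass wrapping shown in the comment is an assumption about the generated schema classes:)

```python
# Shape of one trainingData entry, following the bootstrap_mce.json example.
training_data = [
    {
        "dataset": "urn:li:dataset:(urn:li:dataPlatform:hive,logging_events,PROD)",
        "motivation": "placeholder: why this dataset was used for training",
        "preProcessing": ["deduplication", "lowercasing"],
    }
]

# With the datahub package installed, each dict would likely need wrapping, e.g.:
#   from datahub.metadata.schema_classes import BaseDataClass, TrainingDataClass  # assumed
#   wrapped = TrainingDataClass(
#       trainingData=[BaseDataClass(**entry) for entry in training_data]
#   )

def looks_like_base_data(entry: dict) -> bool:
    """Check an entry has the required dataset urn and only known optional keys."""
    required = {"dataset"}
    optional = {"motivation", "preProcessing"}
    keys = set(entry)
    return required <= keys and keys <= required | optional

# Sanity-check the sample entries above.
assert all(looks_like_base_data(e) for e in training_data)
```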
https://github.com/datahub-project/datahub/blob/a121aff6eb2814dc6d15c4406793ad1bd9[…]5eed3e/metadata-ingestion/examples/mce_files/bootstrap_mce.json
salmon-angle-92685
12/19/2022, 1:07 PM
Caught exception while attempting to handle SSO callback! It's likely that SSO integration is mis-configured
Error in our Auth0 logs: Parameter 'code_verifier' is required
Is anyone else facing this problem since Friday?
Thanks!
lively-minister-2773
12/19/2022, 1:38 PM
microscopic-mechanic-13766
12/19/2022, 3:54 PM
elegant-salesmen-99143
12/20/2022, 7:45 AM
microscopic-mechanic-13766
12/20/2022, 10:20 AM
After Sign Out, if I use the back arrow of the browser to go back to the home page of Datahub, I am still able to access it (if the user I logged in with is from my OIDC provider, which in my case is Keycloak). This doesn't happen with the users created inside of Datahub.
silly-ability-65278
12/20/2022, 10:39 AM
faint-actor-78390
12/20/2022, 12:04 PM
eager-vase-41681
12/20/2022, 12:18 PM
lemon-lock-89160
12/20/2022, 2:59 PM
{datahub.cli.ingest_cli:120} - Starting metadata ingestion
[2022-12-20 14:39:38,569] ERROR {snowflake.connector.ocsp_snowflake:1490} - Failed to get OCSP response after 1 attempt. Consider checking for OCSP URLs being blocked
[2022-12-20 14:39:38,570] ERROR {snowflake.connector.ocsp_snowflake:1065} - WARNING!!! Using fail-open to connect. Driver is connecting to an HTTPS endpoint without OCSP based Certificate Revocation checking as it could not obtain a valid OCSP Response to use from the CA OCSP responder. Details:
{'driver': 'PythonConnector', 'version': '2.9.0', 'eventType': 'RevocationCheckFailure', 'eventSubType': 'OCSPResponseFailedToConnectCacheServer|OCSPResponseFetchFailure', 'sfcPeerHost':
broad-article-1339
12/20/2022, 3:01 PM
ElasticsearchException[Elasticsearch exception [type=document_missing_exception, reason=[_doc][urn%3Ali%3Aassertion%3A5cc4c9d82c421bcc03798c6c42b7010f]: document missing]]]
Has anyone come across this issue?