breezy-shoe-41523
10/07/2022, 10:08 AM
datahub-gms:
  enabled: true
  replicaCount: 3
  resources:
    limits:
      cpu: 4
      memory: 8Gi
I found that GMS gets faster when I increase the limit, but it never reaches that limit (it only goes to about ~400m).
Do you know why GMS doesn't use the full resource limit?
And why does GMS get faster when the limit grows, even though it doesn't use all of it?
Any guidance will help.
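One common cause is the JVM rather than Kubernetes itself: GMS is a Java service, so with default container ergonomics the JVM sizes its heap from the memory limit, and a low CPU limit can trigger CFS throttling even when average usage looks far below the limit. A sketch of pinning the heap explicitly via the chart's extraEnvs — the JAVA_OPTS values here are illustrative assumptions to verify against your chart version, not taken from this thread:

```yaml
datahub-gms:
  enabled: true
  replicaCount: 3
  resources:
    requests:          # requests drive scheduling; limits cap bursts
      cpu: 2
      memory: 8Gi
    limits:
      cpu: 4
      memory: 8Gi
  extraEnvs:
    # Hypothetical: fix the heap size instead of letting the JVM derive
    # it from the container memory limit via -XX:MaxRAMPercentage.
    - name: JAVA_OPTS
      value: "-Xms4g -Xmx4g"
```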
Thanks!
fresh-cricket-75926
10/07/2022, 12:43 PM
wonderful-author-3020
10/07/2022, 4:19 PM
The actorUrn property is some other account, but the ownerUrn is always me.
alert-traffic-45034
10/07/2022, 4:51 PM
ImportError: cannot import name 'AthenaTableMetadata' from 'pyathena.model'
witty-wall-84488
10/07/2022, 6:21 PM
query search_across_entities($input: SearchInput!) {
  search(input: $input) {
    count
    total
    searchResults {
      entity {
        urn
        type
        ... on Dataset {
          name
        }
      }
    }
  }
}
variables =
{
  "input": {
    "type": "DATASET",
    "query": "",
    "start": 0,
    "count": 1000,
    "filters": [{"field": "browsePaths", "value": "dev/tableau/some_project_name"}]
  }
}
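The query and variables above can be posted to the GraphQL endpoint from Python. A minimal sketch — the GMS host and endpoint path are assumptions, not taken from this thread:

```python
import json

# The search_across_entities query from the thread.
SEARCH_QUERY = """
query search_across_entities($input: SearchInput!) {
  search(input: $input) {
    count
    total
    searchResults {
      entity {
        urn
        type
        ... on Dataset { name }
      }
    }
  }
}
"""

def build_payload(browse_path: str, start: int = 0, count: int = 1000) -> dict:
    """Build the JSON body for a browsePaths-filtered dataset search."""
    return {
        "query": SEARCH_QUERY,
        "variables": {
            "input": {
                "type": "DATASET",
                "query": "",
                "start": start,
                "count": count,
                "filters": [{"field": "browsePaths", "value": browse_path}],
            }
        },
    }

payload = build_payload("dev/tableau/some_project_name")
print(json.dumps(payload["variables"], indent=2))

# Posting it (hypothetical GMS host, requires the requests package):
# import requests
# resp = requests.post("http://datahub-gms:8080/api/graphql", json=payload)
# results = resp.json()["data"]["search"]["searchResults"]
```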
microscopic-room-90690
10/08/2022, 8:06 AM
future-hair-23690
10/10/2022, 7:12 AM
source:
  type: mssql
  config:
    password: ---------
    database: sandbox_validation
    host_port: 'az-uk-mssql-accept-01.logex.cloud:1433'
    username: ------
    use_odbc: 'true'
    uri_args:
      driver: 'ODBC Driver 17 for SQL Server'
      Encrypt: 'Yes'
      TrustServerCertificate: 'Yes'
    ssl: 'True'
    env: STG
    profiling:
      enabled: true
      limit: 10000
      report_dropped_profiles: false
      profile_table_level_only: false
      include_field_null_count: true
      include_field_min_value: true
      include_field_max_value: true
      include_field_mean_value: true
      include_field_median_value: true
      include_field_stddev_value: true
      include_field_quantiles: true
      include_field_distinct_value_frequencies: true
      include_field_sample_values: true
      turn_off_expensive_profiling_metrics: false
      include_field_histogram: true
      catch_exceptions: false
      max_workers: 4
      query_combiner_enabled: true
      max_number_of_fields_to_profile: 100
      profile_if_updated_since_days: null
      partition_profiling_enabled: false
    schema_pattern:
      deny:
        - DS\\oleksii
        - ds*
        - Logex*
      allow:
        - dbo.*
        - dbo
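One thing to note about the deny list above: allow/deny patterns in DataHub recipes are regular expressions, so ds* matches "d" followed by zero or more "s" characters, not every schema starting with "ds". A sketch of the likely intent — the corrected patterns are my guesses at the intent, not from the thread:

```yaml
schema_pattern:
  deny:
    - 'DS\\oleksii'
    - 'ds.*'       # regex: any schema name beginning with "ds"
    - 'Logex.*'
  allow:
    - 'dbo'        # 'dbo.*' as a regex already matches "dbo" plus any suffix
```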
Cheers!
microscopic-mechanic-13766
10/10/2022, 8:41 AM
linkedin/datahub-frontend-react:v0.8.45, but I keep getting the error shown here.
Note that the previous deployment (version 0.8.44) worked perfectly, so it is not simply that the certificate is in a bad format. Is this a known error?
Note: the certificate that is failing is the one needed for authentication via OIDC (which in my case is Keycloak).
gray-telephone-67568
10/10/2022, 12:29 PM
red-analyst-79902
10/10/2022, 2:11 PM
'failures': {'tableau-login': ["Unable to LoginReason: HTTPSConnectionPool(host='172.22.5.19', port=443): Max retries exceeded with url: /api/2.4/serverInfo (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1125)')))"]},
Any experience with this?
thankful-morning-85093
10/10/2022, 10:11 PM
clever-garden-23538
10/10/2022, 10:16 PM
22:12:21.273 [Thread-1167] ERROR c.l.d.g.a.service.AnalyticsService:264 - Search query failed: Elasticsearch exception [type=search_phase_execution_exception, reason=all shards failed]
22:12:21.273 [Thread-1167] ERROR c.l.d.g.a.r.GetHighlightsResolver:35 - Failed to retrieve analytics highlights!
java.lang.RuntimeException: Search query failed:
at com.linkedin.datahub.graphql.analytics.service.AnalyticsService.executeAndExtract(AnalyticsService.java:265)
at com.linkedin.datahub.graphql.analytics.service.AnalyticsService.getHighlights(AnalyticsService.java:236)
at com.linkedin.datahub.graphql.analytics.resolver.GetHighlightsResolver.getHighlights(GetHighlightsResolver.java:58)
at com.linkedin.datahub.graphql.analytics.resolver.GetHighlightsResolver.get(GetHighlightsResolver.java:33)
at com.linkedin.datahub.graphql.analytics.resolver.GetHighlightsResolver.get(GetHighlightsResolver.java:24)
at graphql.execution.ExecutionStrategy.fetchField(ExecutionStrategy.java:270)
at graphql.execution.ExecutionStrategy.resolveFieldWithInfo(ExecutionStrategy.java:203)
at graphql.execution.AsyncExecutionStrategy.execute(AsyncExecutionStrategy.java:60)
at graphql.execution.Execution.executeOperation(Execution.java:165)
at graphql.execution.Execution.execute(Execution.java:104)
at graphql.GraphQL.execute(GraphQL.java:557)
at graphql.GraphQL.parseValidateAndExecute(GraphQL.java:482)
at graphql.GraphQL.executeAsync(GraphQL.java:446)
at graphql.GraphQL.execute(GraphQL.java:377)
at com.linkedin.datahub.graphql.GraphQLEngine.execute(GraphQLEngine.java:90)
at com.datahub.graphql.GraphQLController.lambda$postGraphQL$0(GraphQLController.java:94)
at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1700)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.elasticsearch.ElasticsearchStatusException: Elasticsearch exception [type=search_phase_execution_exception, reason=all shards failed]
at org.elasticsearch.rest.BytesRestResponse.errorFromXContent(BytesRestResponse.java:187)
at org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:1892)
at org.elasticsearch.client.RestHighLevelClient.parseResponseException(RestHighLevelClient.java:1869)
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1626)
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1583)
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1553)
at org.elasticsearch.client.RestHighLevelClient.search(RestHighLevelClient.java:1069)
at com.linkedin.datahub.graphql.analytics.service.AnalyticsService.executeAndExtract(AnalyticsService.java:260)
... 17 common frames omitted
Suppressed: org.elasticsearch.client.ResponseException: method [POST], host [http://compass-elasticsearch.us-west-2.prd.fa.tesla.services:80], URI [/datahub_usage_event/_search?typed_keys=true&max_concurrent_shard_requests=5&ignore_unavailable=false&expand_wildcards=open&allow_no_indices=true&ignore_throttled=true&search_type=query_then_fetch&batched_reduce_size=512&ccs_minimize_roundtrips=true], status line [HTTP/1.1 400 Bad Request]
{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [browserId] in order to load field data by uninverting the inverted index. Note that this can use significant memory."}],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[{"shard":0,"index":"datahub_usage_event","node":"QkcIA9AKTCGOho3ag0da_Q","reason":{"type":"illegal_argument_exception","reason":"Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [browserId] in order to load field data by uninverting the inverted index. Note that this can use significant memory."}}],"caused_by":{"type":"illegal_argument_exception","reason":"Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [browserId] in order to load field data by uninverting the inverted index. Note that this can use significant memory.","caused_by":{"type":"illegal_argument_exception","reason":"Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [browserId] in order to load field data by uninverting the inverted index. Note that this can use significant memory."}}},"status":400}
at org.elasticsearch.client.RestClient.convertResponse(RestClient.java:302)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:272)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:246)
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1613)
... 21 common frames omitted
Caused by: org.elasticsearch.ElasticsearchException: Elasticsearch exception [type=illegal_argument_exception, reason=Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [browserId] in order to load field data by uninverting the inverted index. Note that this can use significant memory.]
at org.elasticsearch.ElasticsearchException.innerFromXContent(ElasticsearchException.java:496)
at org.elasticsearch.ElasticsearchException.fromXContent(ElasticsearchException.java:407)
at org.elasticsearch.ElasticsearchException.innerFromXContent(ElasticsearchException.java:437)
at org.elasticsearch.ElasticsearchException.failureFromXContent(ElasticsearchException.java:603)
at org.elasticsearch.rest.BytesRestResponse.errorFromXContent(BytesRestResponse.java:179)
... 24 common frames omitted
Caused by: org.elasticsearch.ElasticsearchException: Elasticsearch exception [type=illegal_argument_exception, reason=Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [browserId] in order to load field data by uninverting the inverted index. Note that this can use significant memory.]
at org.elasticsearch.ElasticsearchException.innerFromXContent(ElasticsearchException.java:496)
at org.elasticsearch.ElasticsearchException.fromXContent(ElasticsearchException.java:407)
at org.elasticsearch.ElasticsearchException.innerFromXContent(ElasticsearchException.java:437)
... 28 common frames omitted
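The root cause in this trace is an aggregation on browserId, which is mapped as a text field in the datahub_usage_event index; Elasticsearch only allows aggregations and sorting on keyword fields. DataHub's usage-event index template maps these fields as keyword, so the usual fix is to recreate the index with the correct mapping (e.g. by rerunning the elasticsearch-setup job) rather than enabling fielddata. For reference, a sketch of the kind of mapping the aggregation needs — not the full DataHub template:

```json
{
  "mappings": {
    "properties": {
      "browserId": { "type": "keyword" }
    }
  }
}
```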
kind-scientist-44426
10/11/2022, 5:50 AM
Broken DAG: [/app/airflow/airflow/dags/mis/dags/dag_generator/datahub_sample_lineage.py] Traceback (most recent call last):
File "pydantic/__init__.py", line 2, in init pydantic.__init__
File "pydantic/dataclasses.py", line 52, in init pydantic.dataclasses
ImportError: cannot import name dataclass_transform
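pydantic's dataclasses module imports dataclass_transform from typing_extensions, so this ImportError usually means the installed typing_extensions predates 4.1 (where dataclass_transform was added); upgrading it (pip install -U typing_extensions) or pinning an older pydantic typically resolves it. A small check that mirrors pydantic's import, runnable inside the Airflow environment — the helper name is mine, not pydantic's:

```python
def has_dataclass_transform() -> bool:
    """Return True if dataclass_transform is importable, mirroring the
    import that pydantic/dataclasses.py performs at import time."""
    try:
        # Python >= 3.11 ships it in the stdlib typing module.
        from typing import dataclass_transform  # noqa: F401
        return True
    except ImportError:
        pass
    try:
        # Otherwise pydantic relies on typing_extensions >= 4.1.
        from typing_extensions import dataclass_transform  # noqa: F401
        return True
    except ImportError:
        # Matches the Broken DAG error: typing_extensions is too old.
        return False

print(has_dataclass_transform())
```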
Can someone help me with this error?
witty-rain-85574
10/11/2022, 9:16 AM
datahub delete -p "snowflake" --entity_type dataset -a "datasetProfile"
However, this ended up soft-deleting the entities themselves instead of just the aspect. Can someone please explain why this behaviour was observed, and how I can go about deleting just the aspect values? Thanks! 🙂
bumpy-pharmacist-66525
10/11/2022, 12:05 PM
white-hydrogen-24531
10/11/2022, 2:23 PM
from datahub.metadata.schema_classes import (
    ChangeTypeClass,
    DomainsClass,
)
import datahub.emitter.mce_builder as builder
from datahub.emitter.mcp import MetadataChangeProposalWrapper
from datahub.ingestion.graph.client import DataHubGraph, DatahubClientConfig

graph = DataHubGraph(DatahubClientConfig(server="http://datahub-gms"))
dataset_urn = builder.make_dataset_urn(platform="hive", name="test.test", env="PROD")

# new_domain = DomainsClass(domains=["TEST_123"])
new_domain = DomainsClass(["TEST_123"])
current_domain = graph.get_domain(entity_urn=dataset_urn)
print(current_domain)

event = MetadataChangeProposalWrapper(
    entityType="dataset",
    changeType=ChangeTypeClass.UPSERT,
    entityUrn=dataset_urn,
    aspectName="domains",
    aspect=new_domain,
)
graph.emit(event)
ripe-apple-36185
10/11/2022, 4:21 PM
convert_urns_to_lowercase: false in the recipe). Great Expectations is converting the URN components to lower case. Is there a way to have DataHubValidationAction set the URNs to uppercase?
ripe-tailor-61058
10/11/2022, 7:26 PM
ripe-tailor-61058
10/11/2022, 7:27 PM
limited-forest-73733
10/12/2022, 6:02 AM
glamorous-wire-83850
10/12/2022, 8:09 AM
extraEnvs:
  - name: AUTH_JAAS_ENABLED
    value: "true"
  - name: JAVA_OPTS
    value: |-
      -Djava.security.auth.login.config=/datahub-frontend/conf/custom/jaas.conf
extraVolumes:
  - name: jaas-conf-volume
    configMap:
      name: jaas-conf
extraVolumeMounts:
  - name: jaas-conf-volume
    mountPath: datahub-frontend/conf/custom/jaas.conf
    subPath: jaas.conf
    readOnly: true
2. The JAAS file:
WHZ-Authentication {
  com.sun.security.auth.module.LdapLoginModule sufficient
  userProvider="ldap://server.com.tr:389/CN=test,OU=test2,OU=SERVICE USERS,DC=infoshop,DC=com,DC=tr"
  authIdentity="{USERNAME}"
  java.naming.security.authentication="simple"
  debug="true"
  useSSL="true";
};
shy-parrot-64120
10/12/2022, 2:19 PM
fast-ice-59096
10/12/2022, 3:05 PM
fast-ice-59096
10/12/2022, 3:05 PM
fast-ice-59096
10/12/2022, 3:05 PM
bland-orange-13353
10/12/2022, 3:12 PM
ancient-library-85500
10/12/2022, 8:23 PM
datahub put --urn "urn:li:process:(PRC-1,Test_Process_1_Description)" --aspect testProcessProperties --aspect-data prc1.json
datahub get --urn "urn:li:process:(PRC-1,Test_Process_1_Description)"
The put command completes without any errors, but running the get command produces the following error:
19:23:22.102 [qtp522764626-22] INFO c.l.m.filter.RestliLoggingFilter:55 - GET /entitiesV2/urn%3Ali%3Aprocess%3A%28PRC-1%2CTest_Process_1_Description%29 - get - 500 - 1ms
19:23:22.105 [qtp522764626-22] ERROR c.l.m.filter.RestliLoggingFilter:38 - Rest.li error:
com.linkedin.restli.server.RestLiServiceException: java.lang.RuntimeException: Failed to get entity with urn: urn:li:process:(PRC-1,Test_Process_1_Description), aspects: null
Caused by: java.lang.RuntimeException: Failed to get entity with urn: urn:li:process:(PRC-1,Test_Process_1_Description), aspects: null
... 88 common frames omitted
Caused by: java.lang.NullPointerException: null
... 89 common frames omitted
Any help or insight would be greatly appreciated!
@kind-dawn-17532 @bland-balloon-48379 @nice-oil-28310
clever-garden-23538
10/12/2022, 9:52 PM
clever-garden-23538
10/13/2022, 12:44 AM
brave-secretary-27487
10/13/2022, 7:53 AM
bigquery-beta plugin. But I get an error saying that the config options lineage_parse_view_ddl and lineage_use_sql_parser don't exist. Are there any other options to visualize lineage between views in BigQuery?