calm-dinner-63735
05/28/2022, 8:34 PM
bumpy-activity-74405
05/30/2022, 8:19 AM
the user does not have 'bigquery.readsessions.create' permission for ...
But this permission is not in the permission list in the docs. Is this just a case of the documentation being out of date, or am I doing something wrong?
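For context, bigquery.readsessions.create is the IAM permission behind the BigQuery Storage Read API's CreateReadSession call. A minimal sketch that exercises exactly that permission, assuming google-cloud-bigquery-storage is installed and GOOGLE_APPLICATION_CREDENTIALS points at the ingestion's service account (project, dataset, and table names are placeholders):

# Sketch: try to create a read session with the ingestion's service account.
# Project, dataset, and table below are placeholders.
from google.cloud import bigquery_storage_v1
from google.cloud.bigquery_storage_v1 import types

client = bigquery_storage_v1.BigQueryReadClient()
session = client.create_read_session(
    parent="projects/my-project",  # placeholder project
    read_session=types.ReadSession(
        table="projects/my-project/datasets/my_dataset/tables/my_table",  # placeholder
        data_format=types.DataFormat.AVRO,
    ),
    max_stream_count=1,
)
print(session.name)

If this raises PermissionDenied, the account genuinely lacks the permission regardless of what the docs list; if it succeeds, the ingestion error is coming from a different project or credential.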
great-cpu-72376
05/30/2022, 9:23 AM
react-dom.production.min.js:101 Uncaught TypeError: Cannot read properties of undefined (reading 'writeText')
at onClick (EntityHeader.tsx:234:57)
at J (button.js:233:57)
at Object.qe (react-dom.production.min.js:52:317)
at Ye (react-dom.production.min.js:52:471)
at react-dom.production.min.js:53:35
at Tr (react-dom.production.min.js:100:68)
at xr (react-dom.production.min.js:101:380)
at react-dom.production.min.js:113:65
at je (react-dom.production.min.js:292:189)
at react-dom.production.min.js:50:57
What should I do to solve this?
wonderful-quill-11255
05/30/2022, 11:20 AM
The field at path '/listRecommendations/modules[2]/content[4]/entity/tool' was declared as a non null type, but the code involved in retrieving the data has wrongly returned a null value. The graphql specification requires that the parent field be set to null, or if that is non nullable that it bubble up null to its parent and so on. The non-nullable type is 'String' within parent type 'Dashboard' (code undefined)
The error message was quickly hidden again, but the recommendations on the start page were all gone.
A seemingly correlated exception stack trace in the GMS logs says:
java.lang.NullPointerException: null
at com.linkedin.datahub.graphql.GmsGraphQLEngine.lambda$null$69(GmsGraphQLEngine.java:831)
at com.linkedin.datahub.graphql.resolvers.load.LoadableTypeResolver.get(LoadableTypeResolver.java:34)
at com.linkedin.datahub.graphql.resolvers.load.LoadableTypeResolver.get(LoadableTypeResolver.java:22)
at com.linkedin.datahub.graphql.resolvers.AuthenticatedResolver.get(AuthenticatedResolver.java:25)
at graphql.execution.ExecutionStrategy.fetchField(ExecutionStrategy.java:270)
at graphql.execution.ExecutionStrategy.resolveFieldWithInfo(ExecutionStrategy.java:203)
at graphql.execution.AsyncExecutionStrategy.execute(AsyncExecutionStrategy.java:60)
at graphql.execution.ExecutionStrategy.completeValueForObject(ExecutionStrategy.java:646)
at graphql.execution.ExecutionStrategy.completeValue(ExecutionStrategy.java:438)
at graphql.execution.ExecutionStrategy.completeField(ExecutionStrategy.java:390)
at graphql.execution.ExecutionStrategy.lambda$resolveFieldWithInfo$1(ExecutionStrategy.java:205)
at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
at org.dataloader.DataLoaderHelper.lambda$dispatchQueueBatch$2(DataLoaderHelper.java:230)
at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1609)
at java.lang.Thread.run(Thread.java:748)
The code around that line looks like this (still on the 0.8.26 tag):
private void configureDashboardResolvers(final RuntimeWiring.Builder builder) {
builder.type("Dashboard", typeWiring -> typeWiring
.dataFetcher("relationships", new AuthenticatedResolver<>(
new EntityRelationshipsResultResolver(graphClient)
))
.dataFetcher("platform", new AuthenticatedResolver<>(
new LoadableTypeResolver<>(dataPlatformType,
(env) -> ((Dashboard) env.getSource()).getPlatform().getUrn())) <--- Line 831
)
I noticed in the release notes for 0.8.25 that one can migrate entities from platform to platform instances, but I interpret that as an optional feature, not a required part of the upgrade (we also didn't see this behaviour in the non-production upgrade).
We rolled back the upgrade for now and are trying to reproduce the behaviour locally, but without success yet. Perhaps someone here has some advice?
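One way to narrow this down, sketched under the assumption that GMS is reachable at http://localhost:8080 and using a placeholder dashboard URN, is to pull the entity snapshot over GMS's Rest.li /entities endpoint and check whether the platform data that the resolver dereferences is actually there:

# Sketch: fetch a dashboard entity straight from GMS to see whether its
# platform reference is populated. The URN is a placeholder -- substitute one
# of the dashboards that disappeared from the recommendations.
import json
import urllib.parse

import requests

GMS = "http://localhost:8080"  # assumption: default GMS address
urn = "urn:li:dashboard:(looker,my_dashboard_id)"  # placeholder URN

resp = requests.get(f"{GMS}/entities/{urllib.parse.quote(urn, safe='')}")
resp.raise_for_status()

# If no platform shows up in the snapshot, getPlatform() returning null at
# GmsGraphQLEngine line 831 could explain the NullPointerException above.
print(json.dumps(resp.json(), indent=2))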
great-cpu-72376
05/31/2022, 10:40 AM
emitter_task = DatahubEmitterOperator(
    task_id="emitter_task",
    datahub_conn_id="datahub_rest_default",
    mces=[
        builder.make_lineage_mce(
            upstream_urns=[builder.make_dataset_urn(platform="file", name=filename) for filename in "{{ ti.xcom_pull(task_ids='test_operator_task', key='file_list') }}"],
            downstream_urn=builder.make_dataset_urn(platform="file", name="/nfs/data/archival/archive_test.tar.gz")
        )
    ]
)
The problem is that the template {{ ti.xcom_pull(task_ids='test_operator_task', key='file_list') }} is not evaluated. How can I get the XCom pushed by another task?
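Whatever the operator's template_fields are, the list comprehension above runs when the DAG file is parsed, so it iterates over the characters of the literal template string rather than a rendered list. A minimal sketch of one workaround, assuming Airflow 2.x, that the upstream task pushes a Python list of file paths under key='file_list', and a placeholder GMS address (DatahubRestEmitter here stands in for the datahub_rest_default connection), is to defer building and emitting the MCE to runtime inside a PythonOperator:

# Sketch: emit the lineage at runtime, after the upstream task's XCom exists.
from airflow.operators.python import PythonOperator

import datahub.emitter.mce_builder as builder
from datahub.emitter.rest_emitter import DatahubRestEmitter


def emit_lineage(**context):
    # Pull the real list at runtime instead of embedding a Jinja string.
    file_list = context["ti"].xcom_pull(task_ids="test_operator_task", key="file_list")
    lineage_mce = builder.make_lineage_mce(
        upstream_urns=[
            builder.make_dataset_urn(platform="file", name=filename)
            for filename in file_list
        ],
        downstream_urn=builder.make_dataset_urn(
            platform="file", name="/nfs/data/archival/archive_test.tar.gz"
        ),
    )
    DatahubRestEmitter(gms_server="http://datahub-gms:8080").emit_mce(lineage_mce)  # placeholder address


emitter_task = PythonOperator(  # defined inside the same DAG as test_operator_task
    task_id="emitter_task",
    python_callable=emit_lineage,
)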
05/31/2022, 11:55 AM./docker/datahub-upgrade/datahub-upgrade.sh -u RestoreIndices
I get the following error:
(see in thread).chilly-elephant-51826
05/31/2022, 5:27 PM
With xpack.security.enabled=true, it throws the error below; it seems the password is not being propagated properly.
Caused by:
org.elasticsearch.client.ResponseException: method [HEAD], host [<http://elasticsearch:9200>], URI [/graph_service_v1?ignore_throttled=false&ignore_unavailable=false&expand_wildcards=open%2Cclosed&allow_no_indices=false], status line [HTTP/1.1 401 Unauthorized]
I have raised a bug (link); help is really appreciated.
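A minimal sketch for isolating where the 401 comes from, using the same in-cluster hostname and placeholder credentials: if the credentials work when presented directly, the problem is GMS not receiving them (for example, the username/password environment variables not reaching the container) rather than Elasticsearch rejecting them.

# Sketch: present the Elasticsearch credentials directly to rule out a bad
# password before digging into how GMS is (not) receiving them.
# Hostname, username, and password are placeholders.
import requests

resp = requests.get(
    "http://elasticsearch:9200/graph_service_v1",
    auth=("elastic", "changeme"),  # placeholder credentials
    timeout=10,
)
# 200 or 404 means the credentials are accepted; 401 means the password itself is wrong.
print(resp.status_code)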
numerous-eve-42142
05/31/2022, 9:06 PM
worried-painting-70907
05/31/2022, 10:28 PM
better-spoon-77762
05/31/2022, 11:31 PM
wonderful-quill-11255
06/01/2022, 6:30 AM
bumpy-activity-74405
06/01/2022, 6:35 AM
square-solstice-69079
06/01/2022, 10:17 AM
worried-painting-70907
06/01/2022, 4:01 PM
wonderful-smartphone-35332
06/01/2022, 5:49 PM
handsome-football-66174
06/01/2022, 8:22 PM
20:19:44.193 [Thread-2896] ERROR c.datahub.graphql.GraphQLController:93 - Errors while executing graphQL query: "query getAnalyticsCharts {\n getAnalyticsCharts {\n groupId\n title\n charts {\n ...analyticsChart\n __typename\n }\n __typename\n }\n}\n\nfragment analyticsChart on AnalyticsChart {\n ... on TimeSeriesChart {\n title\n lines {\n name\n data {\n x\n y\n __typename\n }\n __typename\n }\n dateRange {\n start\n end\n __typename\n }\n interval\n __typename\n }\n ... on BarChart {\n title\n bars {\n name\n segments {\n label\n value\n __typename\n }\n __typename\n }\n __typename\n }\n ... on TableChart {\n title\n columns\n rows {\n values\n cells {\n value\n linkParams {\n searchParams {\n types\n query\n filters {\n field\n value\n __typename\n }\n __typename\n }\n entityProfileParams {\n urn\n type\n __typename\n }\n __typename\n }\n __typename\n }\n __typename\n }\n __typename\n }\n __typename\n}\n", result: {errors=[{message=An unknown error occurred., locations=[{line=2, column=3}], path=[getAnalyticsCharts], extensions={code=500, type=SERVER_ERROR, classification=DataFetchingException}}], data=null}, errors: [DataHubGraphQLError{path=[getAnalyticsCharts], code=SERVER_ERROR, locations=[SourceLocation{line=2, column=3}]}]
modern-belgium-81337
06/01/2022, 9:06 PM
Is there a way to tell whether datahub init worked?
I forwarded my GMS pod to a localhost address and skipped the token part, and that was it; there was no confirmation. I was wondering if there's a way to tell it's a successful connection without trying a command like ingest?
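A minimal sketch of one way to verify connectivity without running an ingestion, assuming the port-forwarded GMS is on localhost:8080 with no token (matching the setup described above): GMS serves a small JSON document at /config, so a plain HTTP GET is enough to confirm the round trip.

# Sketch: confirm the CLI's target GMS is reachable by hitting its /config
# endpoint, which returns a small JSON document (server versions, etc.).
# Assumes the port-forward from the question, i.e. GMS on localhost:8080.
import requests

resp = requests.get("http://localhost:8080/config", timeout=10)
resp.raise_for_status()
print(resp.json())  # JSON here means the connection datahub init saved is usable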
salmon-rose-54694
06/02/2022, 7:08 AM
salmon-rose-54694
06/02/2022, 10:15 AM
miniature-journalist-76345
06/02/2022, 3:17 PM
gentle-diamond-98883
06/02/2022, 7:21 PM
The supportsImpactAnalysis flag is mentioned here, but I was wondering where/how do I set this flag to True? Thanks!
chilly-elephant-51826
06/03/2022, 12:54 PM
calm-dinner-63735
06/03/2022, 12:58 PM
modern-laptop-12942
06/03/2022, 4:14 PM
clean-lamp-36043
06/03/2022, 4:25 PM
ripe-alarm-85320
06/03/2022, 6:06 PM
some-microphone-33485
06/03/2022, 7:53 PM
boundless-student-48844
06/04/2022, 6:29 AM
It seems we have to set MCL_CONSUMER_ENABLED to true in GMS instead of setting it in MAE Consumer (which already has MAE_CONSUMER_ENABLED="true"), and the entity change events are produced by GMS instead of MAE Consumer.
This doesn't look aligned with the code as I understand it.
From the code, MetadataChangeLogProcessor is enabled when either MAE_CONSUMER_ENABLED or MCL_CONSUMER_ENABLED is true, so this ChangeLog processor is already enabled in MAE Consumer, and EntityChangeEventGeneratorHook is one of the hooks invoked by MetadataChangeLogProcessor. My understanding is that the entity change events should be produced by the standalone MAE Consumer (instead of GMS) without additionally setting MCL_CONSUMER_ENABLED on GMS.
Am I missing something here? 🙇 (running DataHub v0.8.35, deployed with Helm chart 0.2.72)
chilly-elephant-51826
06/04/2022, 12:00 PM
I am trying to have datahub-actions run some ingestion over the Kafka stream, but since my Kafka is protected and uses SASL rather than a simple SSL connection, it is not passing the correct parameters required to connect.
This is the Kafka config that I am using:
security.protocol: SASL_SSL
sasl.mechanism: SCRAM-SHA-512
client.sasl.mechanism: SCRAM-SHA-512
kafkastore.security.protocol: SSL
ssl.endpoint.identification.algorithm: https
ssl.keystore.type: JKS
ssl.protocol: TLSv1.2
ssl.truststore.type: JKS
Even though these are passed correctly to the container env variables, they are not populated in the config file that gets executed.
Here is the config that was generated (found in the container logs):
{'source':
{ 'type': 'datahub-stream',
'config': {
'auto_offset_reset': 'latest',
'connection': {
'bootstrap': 'XXXXXXXXXXXXXX',
'schema_registry_url': 'XXXXXXXXXXXXX',
'consumer_config': {'security.protocol': 'SASL_SSL'}
},
'actions': [
{ 'type': 'executor',
'config': {
'local_executor_enabled': True,
'remote_executor_enabled': 'False',
'remote_executor_type': 'acryl.executor.sqs.producer.sqs_producer.SqsRemoteExecutor',
'remote_executor_config': {
'id': 'remote',
'aws_access_key_id': '""',
'aws_secret_access_key': '""',
'aws_session_token': '""',
'aws_command_queue_url': '""',
'aws_region': '""'
}
}
}
],
'topic_routes': {
'mae': 'MetadataAuditEvent_v4',
'mcl': 'MetadataChangeLog_Versioned_v1'
}
}
},
'sink': {'type': 'console'},
'datahub_api': {'server': '<http://datahub-datahub-gms:8080>', 'extra_headers': {'Authorization': 'Basic __datahub_system:NOTPASSING'}}
}
As the above config shows, the other required configuration values are not getting passed.
I have raised a bug; any help is appreciated.
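A minimal sketch of the consumer settings that have to reach the underlying Kafka client for a SASL_SSL + SCRAM-SHA-512 cluster, with placeholder brokers and credentials; it uses confluent-kafka directly, so it only checks connectivity and shows which consumer_config keys are missing from the generated config above (only security.protocol made it through), rather than the datahub-actions config format itself:

# Sketch: verify SASL_SSL / SCRAM-SHA-512 connectivity with the same keys the
# actions consumer would need. Brokers, credentials, and CA path are placeholders.
from confluent_kafka import Consumer

consumer = Consumer(
    {
        "bootstrap.servers": "broker1:9093",         # placeholder
        "group.id": "sasl-connectivity-check",
        "security.protocol": "SASL_SSL",
        "sasl.mechanism": "SCRAM-SHA-512",
        "sasl.username": "my-user",                  # placeholder
        "sasl.password": "my-password",              # placeholder
        "ssl.ca.location": "/etc/ssl/certs/ca.pem",  # placeholder CA bundle (PEM)
    }
)

# list_topics() forces a metadata request, so any auth/SSL problem surfaces here.
print(list(consumer.list_topics(timeout=10).topics))
consumer.close()

Note that librdkafka-based clients (which the confluent-style keys above suggest datahub-actions uses) read PEM or PKCS#12 certificates rather than JKS keystores, so the JKS truststore settings would likely need converting as well.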
swift-breakfast-25077
06/05/2022, 5:12 PM
- name: datahub_action
  action:
    module_name: datahub.integrations.great_expectations.action
    class_name: DataHubValidationAction
    server_url: <http://localhost:8080> # datahub server url
Getting this message when the checkpoint runs: