most-animal-32096 (03/06/2023, 5:06 PM): …datahub-client one, to actually try metadata emission, through REST and Kafka.
(NB: the previously mentioned documentation omits the emitter.close() call and doesn't mention the required Gradle dependencies.)
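Since the documentation reportedly skips the close() step, here is a minimal sketch of a close-safe emission pattern. RestEmitterStub and its methods are hypothetical stand-ins used only to show the pattern; they are not the actual datahub-client API:

```python
from contextlib import closing

class RestEmitterStub:
    """Hypothetical stand-in for a REST metadata emitter; not the datahub-client API."""

    def __init__(self, server: str):
        self.server = server
        self.sent = []      # events a real emitter would POST to the server
        self.closed = False

    def emit(self, event: dict) -> None:
        # A real emitter would serialize the event and POST it to self.server.
        self.sent.append(event)

    def close(self) -> None:
        # A real emitter would flush buffers and shut down its HTTP session here.
        self.closed = True

# closing() guarantees close() runs even if emit() raises mid-batch,
# which is exactly the step the docs are said to leave out.
with closing(RestEmitterStub("http://localhost:8080")) as emitter:
    emitter.emit({"entityUrn": "urn:li:dataset:(urn:li:dataPlatform:hive,db.table,PROD)"})
```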
numerous-scientist-83156 (03/08/2023, 10:31 AM): …adslGen2 to adlsg2, both name and id, and my class's local variable, and it would work as expected, with breadcrumbs and all (first picture).
My coworker then mentioned that there are some predefined data platforms that can be found in data_platform.json. There I noticed that the delimiter for adlsGen2 is "/" instead of ".". So, just for fun, I changed the platform back to the predefined name adlsGen2 but added a line that replaces all the "." characters in the dataset urn with "/", and this also works as expected.
I've then looked through the code a bit more and found that the function create_from_ids, which is used by the make_dataset_urn_with_platform_instance function, is made to always use "." as the delimiter in the name. Is this working as intended? Is there another function I should be using to generate the dataset_urn when it's from an adlsGen2…
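To illustrate the workaround described above: a minimal sketch that joins the name parts with the "/" delimiter that data_platform.json declares for adlsGen2, rather than the "." that create_from_ids hard-codes. make_adls_gen2_dataset_urn is a hypothetical helper written for this illustration, not part of the DataHub SDK:

```python
def make_adls_gen2_dataset_urn(path_parts: list, env: str = "PROD") -> str:
    # data_platform.json lists "/" as the delimiter for the predefined
    # adlsGen2 platform, while create_from_ids always joins with ".",
    # so this sketch joins the name itself before building the urn string.
    name = "/".join(path_parts)
    return f"urn:li:dataset:(urn:li:dataPlatform:adlsGen2,{name},{env})"

urn = make_adls_gen2_dataset_urn(["container", "folder", "file.parquet"])
```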
clean-scooter-32205 (03/23/2023, 11:59 AM): …PERMISSION_DENIED: Only account admin can list metastores. Is there a way to not require an account admin token? I would only be using a specific metastore id, and there's no way the account admin would be OK with having an associated token lying around.

brash-caravan-14114 (03/28/2023, 3:18 PM): KafkaException: KafkaError{code=_INVALID_ARG,val=-186,str="Java JAAS configuration is not supported, see <https://github.com/edenhill/librdkafka/wiki/Using-SASL-with-librdkafka> for more information."}
Is it possible to use datahub-actions and authenticate using IAM? Attaching executor.yaml. Thanks!

witty-butcher-82399 (03/28/2023, 4:17 PM): …AssertionError is thrown when doing the _should_process validation. I have created a PR fixing this case: https://github.com/datahub-project/datahub/pull/7702
bland-barista-59197 (05/11/2023, 7:39 AM): …project_on_behalf
other than scanning project, e.g. bq-project-1.
2. Added two datasets in bq-project-1; one has a partition key and the other does not.
Solution: https://github.com/datahub-project/datahub/blob/master/metadata-ingestion/src/datahub/ingestion/source/ge_data_profiler.py#L923 should be something like this: bq_sql = f"SELECT * FROM `{schema}`.`{table}`"
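The proposed fix backquotes both identifier parts, so a project-qualified schema such as bq-project-1.my_dataset (which itself contains dots and a hyphen) is still read by BigQuery as an identifier path. A standalone sketch of the string being proposed, not the actual ge_data_profiler.py code; build_profiling_sql is a hypothetical helper:

```python
def build_profiling_sql(schema: str, table: str) -> str:
    # Backquote schema and table separately so BigQuery treats each as an
    # identifier path, even when the schema is project-qualified and
    # contains "." or "-" (e.g. "bq-project-1.my_dataset").
    return f"SELECT * FROM `{schema}`.`{table}`"

bq_sql = build_profiling_sql("bq-project-1.my_dataset", "my_table")
```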