lemon-terabyte-66903 (03/10/2022, 4:11 PM):
Databricks platform in lineage? I would like to have a custom lineage with S3 datasets and Databricks jobs.
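For a custom lineage edge between an S3 dataset and a Databricks output, the shape DataHub expects can be sketched in plain Python. This is only an illustration: the helper mirrors DataHub's dataset URN convention (`urn:li:dataset:(urn:li:dataPlatform:<platform>,<name>,<env>)`), the bucket and table names are made-up placeholders, and in practice the edge would be emitted through DataHub's Python emitter rather than built by hand.

```python
# Sketch only: a hypothetical helper mirroring DataHub's dataset URN convention.
def make_dataset_urn(platform: str, name: str, env: str = "PROD") -> str:
    return f"urn:li:dataset:(urn:li:dataPlatform:{platform},{name},{env})"

# Placeholder names, not real datasets.
s3_input = make_dataset_urn("s3", "my-bucket/raw/events")
dbx_output = make_dataset_urn("databricks", "prod.analytics.events_clean")

# A custom lineage edge: the Databricks output lists the S3 dataset as upstream.
lineage_edge = {
    "downstream": dbx_output,
    "upstreams": [{"dataset": s3_input, "type": "TRANSFORMED"}],
}
print(lineage_edge["upstreams"][0]["dataset"])
```

The `TRANSFORMED` edge type is one of the lineage types DataHub models; the key point is simply that each downstream dataset carries a list of upstream dataset URNs.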
modern-belgium-81337 (04/27/2022, 10:32 PM):
databricks fs --overwrite datahub-spark-lineage*.jar dbfs:/datahub
Usage: databricks fs [OPTIONS] COMMAND [ARGS]...
Try 'databricks fs -h' for help.
Error: No such option: --overwrite Did you mean --version?
Hi, I’m trying to follow the doc here, but it seems like the command hasn’t been updated?
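The CLI error itself hints at a likely cause: `--overwrite` is an option of the `cp` subcommand, not of `databricks fs` directly, so the subcommand may simply have been dropped when the command was copied. This is an assumption — verify against `databricks fs cp -h` on your CLI version:

```shell
# Assumed corrected form: route --overwrite through the cp subcommand.
databricks fs cp --overwrite datahub-spark-lineage*.jar dbfs:/datahub
```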
careful-action-61962 (09/30/2022, 9:31 AM):
spark.databricks.sql.initial.catalog.name <unity catalog name>
Add this to your Spark cluster config and you're good to go.
Please make sure your user has SELECT permission on the tables.
If not, run this:
catalogs = spark.sql('show catalogs;')
for catalog in catalogs.toPandas()['catalog']:
    if catalog in ['default', 'samples']:
        continue
    print(catalog)
    use_catalog = f"USE CATALOG {catalog};"
    print(use_catalog)
    spark.sql(use_catalog)
    show_db = "SHOW DATABASES;"
    print(show_db)
    dbs = spark.sql(show_db)
    for db in dbs.toPandas()['databaseName']:
        # Grant USAGE on every database; temp databases get USAGE only.
        spark.sql(f"grant usage on database {db} to `datahub`;")
        if db in ['temp_notebooks', 'temp']:
            continue
        show_table = f"SHOW TABLES IN {db};"
        tables = spark.sql(show_table)
        for idx, row in tables.toPandas().iterrows():
            table = row['database'] + "." + row['tableName']
            grant_query = f'grant select on table {table} to `datahub`;'
            print(grant_query)
            spark.sql(grant_query)
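A cautious variant of the loop above is to generate the GRANT statements first and review them before executing anything. This pure-Python sketch reproduces only the statement-building logic; the `tables_by_db` mapping is a stand-in for the live `SHOW TABLES` results, not a real API.

```python
def grant_statements(tables_by_db):
    """Build the same GRANT statements the loop above issues,
    without touching Spark, so they can be reviewed first."""
    stmts = []
    for db, tables in tables_by_db.items():
        # As in the loop above, every database gets USAGE...
        stmts.append(f"grant usage on database {db} to `datahub`;")
        if db in ('temp_notebooks', 'temp'):
            continue  # ...but temp databases get no SELECT grants.
        for t in tables:
            stmts.append(f"grant select on table {db}.{t} to `datahub`;")
    return stmts

for stmt in grant_statements({"sales": ["orders"], "temp": ["scratch"]}):
    print(stmt)
```

Once the printed statements look right, each one can be passed to `spark.sql(...)` as in the snippet above.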