Hello everybody!
We are running into the following issue while trying to capture lineage for a Spark job:
when the job reads the table tmp.agus_test_4857, DataHub captures it as an HDFS file instead of a Hive table.
Since we've already ingested the Hive table with its metadata, the read now shows up as a separate object alongside the existing dataset. Are we doing something incorrectly?
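For reference, our job is roughly the sketch below (a minimal reproduction assuming the standard DataHub Spark listener setup; the agent version, GMS URL, and output table are placeholders, not our exact config):

```python
from pyspark.sql import SparkSession

# Minimal sketch of how we read the Hive table with the DataHub
# Spark lineage listener attached (placeholder values throughout).
spark = (
    SparkSession.builder
    .appName("lineage-test")
    # DataHub Spark agent -- exact artifact/version is a placeholder
    .config("spark.jars.packages", "io.acryl:acryl-spark-lineage:<version>")
    .config("spark.extraListeners", "datahub.spark.DatahubSparkListener")
    .config("spark.datahub.rest.server", "http://<datahub-gms-host>:8080")
    .enableHiveSupport()
    .getOrCreate()
)

# Read the Hive table; this is the dataset that shows up in DataHub
# as an HDFS path instead of the existing hive dataset.
df = spark.table("tmp.agus_test_4857")

# Placeholder downstream write so the job produces lineage.
df.write.mode("overwrite").saveAsTable("tmp.agus_test_4857_out")
```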
Thanks in advance!