# troubleshoot
s
Hi, good day. I just finished a test ingestion and I would like to know where I can see the errors or failures. Based on the report, the failures list is empty, but I am still getting an error message.
Sink (datahub-rest) report:
{'downstream_end_time': datetime.datetime(2022, 3, 16, 16, 7, 18, 188112),
 'downstream_start_time': datetime.datetime(2022, 3, 16, 16, 0, 51, 276126),
 'downstream_total_latency_in_seconds': 386.911986,
 'failures': [],
 'gms_version': 'v0.8.29',
 'records_written': 2495,
 'warnings': []}

Pipeline finished with failures
[2022-03-16 16:07:19,211] INFO     {datahub.telemetry.telemetry:159} - Sending Telemetry
[2022-03-16 16:07:29,226] INFO     {datahub.telemetry.telemetry:159} - Sending Telemetry
d
What error message did you get?
s
Pipeline finished with failures
I wanted to know where this message came from.
d
Are these all the logs? It is strange that it says failure even though there is nothing in the summary.
s
Yeah, which is why I am interested to see whether there are any other log locations.
d
It should have written some info/warning logs to the screen. You can also enable debug logging with:
datahub --debug …
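For example, a full debug run might look like the line below (recipe.yml is just a placeholder for whatever recipe file you are using):
datahub --debug ingest -c recipe.yml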
s
@stale-petabyte-93467 There should be some logs above "Sink (datahub-rest) report". Can you please share those too?
Can you please share the following along with the error reports? This would help the team help you out:
• Are you using UI-based ingestion, the Python CLI through the command line, programmatically via the Python SDK, or programmatically via the Java emitter?
• Full logs in text format (instead of screenshots) from the ingestion that fails. Please do not remove any parts of the log (mask the secret if any secret is being shown).
• The recipe in text format (instead of screenshots).
s
The logs above are too long, which is why I am asking whether they are stored somewhere.
I used the Python CLI with datahub ingest -c
Running on a VM with CentOS 7.14
s
They are not stored anywhere at the moment
What recipe are you using?
If you are not comfortable sharing the logs, you can search them yourself for any exceptions or errors.
d
You can write the logs to a file by redirecting the output:
datahub ingest ... > datahub.log 2>&1
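If it helps, the debug flag and the redirect can be combined, and you can then search the captured file for anything relevant; the file name and search terms below are only illustrative:
datahub --debug ingest -c recipe.yml > datahub.log 2>&1
grep -iE 'error|exception|traceback' datahub.log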
s
Let me redirect the output to a file. I am using a recipe for Snowflake. Let me rerun it tomorrow and update with the logs.
d
thanks