# ingestion
a
[5:49 PM] 'Failed to connect to DataHub' due to
```
[2022-09-30, 09:12:06 UTC] {{subprocess.py:89}} INFO - 'HTTPSConnectionPool(host='datahub-gms.amer-prod.xxx.com', port=8080): Max retries exceeded with url: /config (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f7342c77990>, 'Connection to datahub-gms.amer-prod.xxx.com timed out. (connect timeout=30)'))'
```
d
This looks like the DataHub client is unable to connect to the gms server. A connection timeout usually means there is no network route. Can you get into the container where the DataHub client runs and try to curl the gms server, to check whether it can reach it?
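A minimal sketch of that check, runnable from inside the container where the client executes (the host and port are taken from the log above; the helper name `can_connect` is illustrative, not part of the DataHub CLI):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a plain TCP connection to host:port succeeds within timeout."""
    try:
        # create_connection handles DNS resolution and the TCP handshake;
        # any failure (DNS, refused, timeout) raises an OSError subclass.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# From the container: if this prints False, the ingestion host cannot
# reach gms at the network level, matching the ConnectTimeoutError above.
print(can_connect("datahub-gms.amer-prod.xxx.com", 8080))
```

If this returns False while curl from your laptop works, the problem is the network path from the ingestion host specifically, not the gms server itself.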
a
Ok Thanks
What do you mean by DataHub client here? @dazzling-judge-80093
Hi @dazzling-judge-80093, where do we find this process?
d
I mean that either the DataHub Python CLI or managed ingestion is unable to connect; I don't know which one you use.
a
We are using the Python CLI.
d
Can you check, from the machine you run the ingestion on, whether you can access datahub-gms.amer-prod.xxx.com on port 8080? It seems the host you are using is unable to connect to it.
a
Tried it
It is working fine
Our issue is that our Airflow DAG jobs are copied to the AWS managed Airflow service (MWAA), and from there they run on their schedule.
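That likely explains it: AWS's managed Airflow service (MWAA) runs its workers inside a VPC, so the security groups and routing of that VPC must allow outbound traffic to the gms host on port 8080. A quick way to confirm, assuming this, is to call a probe like the one below from a DAG task and read the result in the task log (the function name `check_gms` and the use of `/config`, the endpoint from the error above, are illustrative; the log mentions HTTPS but port 8080, so adjust the scheme to whatever your gms actually serves):

```python
import urllib.request
import urllib.error

def check_gms(url: str = "http://datahub-gms.amer-prod.xxx.com:8080/config",
              timeout: float = 30.0) -> str:
    """Fetch the gms /config endpoint; return the body on success,
    or the error message so it shows up in the Airflow task log."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read().decode()
    except (urllib.error.URLError, OSError) as exc:
        return f"gms unreachable: {exc}"
```

Wrapping this in a PythonOperator and running it on MWAA tells you whether the workers themselves can reach gms, independent of what works from your own machine.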
@dazzling-judge-80093