Curious what the community does here. I've deployed DataHub into Kubernetes and written ingestion recipes for Redshift, Kafka, and S3, but what I want is to dynamically discover the data-bearing systems in our AWS accounts (MySQL, Postgres, Kafka, Elasticsearch, etc.). Do most implementations create YAML files manually, save them to Git, and then run updates through Airflow, or is there an implementation pattern I can follow that's typical for AWS or GCP?
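For context, here's roughly what I'm imagining instead of hand-written YAML: build the recipe dicts programmatically from discovered resources. This is just a sketch (the engine mapping, GMS URL, and instance data are hypothetical); real discovery would call `boto3.client("rds").describe_db_instances()` and fetch credentials from Secrets Manager.

```python
# Rough sketch (hypothetical names): turn discovered RDS instances into
# DataHub-style recipe dicts instead of committing static YAML per source.
# Real discovery would use boto3's rds.describe_db_instances(); here I
# just mimic the shape of that response.

ENGINE_TO_SOURCE = {"mysql": "mysql", "postgres": "postgres"}  # assumed mapping

def recipe_for_rds_instance(inst: dict) -> dict:
    """Map one DescribeDBInstances entry to a recipe dict."""
    endpoint = inst["Endpoint"]
    return {
        "source": {
            "type": ENGINE_TO_SOURCE[inst["Engine"]],
            "config": {
                "host_port": f"{endpoint['Address']}:{endpoint['Port']}",
                "database": inst.get("DBName", ""),
                # credentials would come from Secrets Manager, not hardcoded
            },
        },
        "sink": {
            "type": "datahub-rest",
            "config": {"server": "http://datahub-gms:8080"},  # my in-cluster GMS
        },
    }

# Shaped like rds.describe_db_instances()["DBInstances"]
instances = [
    {
        "Engine": "postgres",
        "DBName": "orders",
        "Endpoint": {"Address": "orders.abc.us-east-1.rds.amazonaws.com", "Port": 5432},
    }
]
recipes = [recipe_for_rds_instance(i) for i in instances]
print(recipes[0]["source"]["config"]["host_port"])
```

The idea would be to run this discovery step as an Airflow task and feed each dict to DataHub's Python `Pipeline` API, rather than maintaining one YAML file per source in Git. Is that a pattern people actually use?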