# all-things-deployment
Curious from the community here. I've deployed DataHub into Kubernetes and wrote ingestion recipes for Redshift, Kafka, and S3, but what I want is to dynamically scan our AWS accounts for data-bearing systems (MySQL, Postgres, Kafka, Elasticsearch, etc.). Do most implementations create YAML files manually, save them to Git, and then run updates through Airflow, or is there an implementation pattern I can follow that is typical of AWS or GCP?
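For reference, a minimal sketch of that manual pattern: an Airflow DAG that re-runs recipes checked into the DAG repo on a schedule. It assumes the `datahub` CLI is installed on the Airflow workers; the recipe paths, DAG id, and schedule are placeholders, not a prescribed layout.

```python
# Airflow DAG that periodically re-runs DataHub recipes stored in Git.
# Assumes the `datahub` CLI is on the worker and recipes are deployed
# to /opt/airflow/recipes/ (hypothetical path).
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="datahub_ingestion",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # One ingestion task per checked-in recipe file.
    for recipe in ["redshift", "kafka", "s3"]:
        BashOperator(
            task_id=f"ingest_{recipe}",
            bash_command=f"datahub ingest -c /opt/airflow/recipes/{recipe}.yml",
        )
```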
Hey Dan! We don't have a way to do this currently, but it's a really interesting concept. One approach would be a source type that hits the AWS/GCP APIs, crawls for supported source types, and then routes the results into the existing ingestion sources. Implementations currently set their sources up manually.
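To make that concrete, here's a minimal sketch of the crawl-and-route idea, assuming boto3 access to RDS and DataHub's Python ingestion API (`Pipeline.create`). The engine-to-source mapping, credential handling, and GMS address are illustrative assumptions, not a supported DataHub feature.

```python
# Sketch: discover RDS instances with boto3 and hand each one to a
# DataHub ingestion pipeline. Only MySQL/Postgres are mapped here;
# Kafka, Elasticsearch, etc. would need their own discovery calls.
import os

import boto3
from datahub.ingestion.run.pipeline import Pipeline

# Assumed mapping from RDS engine names to DataHub source types.
ENGINE_TO_SOURCE = {"mysql": "mysql", "postgres": "postgres"}

rds = boto3.client("rds")
for db in rds.describe_db_instances()["DBInstances"]:
    source_type = ENGINE_TO_SOURCE.get(db["Engine"])
    if source_type is None:
        continue  # engine not handled by this sketch

    pipeline = Pipeline.create({
        "source": {
            "type": source_type,
            "config": {
                "host_port": f"{db['Endpoint']['Address']}:{db['Endpoint']['Port']}",
                # In practice, pull credentials from Secrets Manager;
                # this service account name is hypothetical.
                "username": "datahub_reader",
                "password": os.environ.get("DB_PASSWORD", ""),
            },
        },
        "sink": {
            "type": "datahub-rest",
            "config": {"server": "http://datahub-gms:8080"},
        },
    })
    pipeline.run()
    pipeline.raise_from_status()
```

Run on a schedule (e.g. from the Airflow DAG above), this would pick up newly provisioned databases without hand-editing YAML, at the cost of managing credentials and dedup logic yourself.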