# all-things-deployment
a
Are there any compose files that show how to standup Data Hub without building it, specifically just using the images on Docker Hub? Is Kafka actually required if not actually using Kafka to inject metadata? Are both Elasticsearch and MySQL required? Trying to set up the bare minimum to test a custom Java emitter that uses a RestEmitter, not a Kafka one, and I’d also specifically like to be able to see the entity in the UI.
o
1. Yep! https://datahubproject.io/docs/quickstart & https://datahubproject.io/docs/docker/
2. Yes, we use Kafka to record MetadataChangeLog events even when you ingest through the REST API.
3. Yes. We use Elasticsearch as our search index (and, in most setups, as the graph store as well) for returning results to the UI, so you would not be able to use the UI without it. MySQL is our base storage component.
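For the RestEmitter side of that test, here is a rough Java sketch of emitting a single aspect to the quickstart GMS endpoint over REST, with no Kafka client on the emitter side. It assumes the `io.acryl:datahub-client` library and the default quickstart GMS address of http://localhost:8080; the dataset URN is just a placeholder, and exact class and method names may vary between client versions, so treat this as a sketch rather than the canonical example.

```java
import com.linkedin.dataset.DatasetProperties;

import datahub.client.MetadataWriteResponse;
import datahub.client.rest.RestEmitter;
import datahub.event.MetadataChangeProposalWrapper;

import java.util.concurrent.Future;

public class RestEmitterSmokeTest {
    public static void main(String[] args) throws Exception {
        // Point the emitter at GMS; the quickstart stack exposes it on localhost:8080 by default.
        RestEmitter emitter = RestEmitter.create(b -> b.server("http://localhost:8080"));

        // Build a simple change proposal: upsert a description on a placeholder dataset.
        MetadataChangeProposalWrapper mcpw = MetadataChangeProposalWrapper.builder()
            .entityType("dataset")
            .entityUrn("urn:li:dataset:(urn:li:dataPlatform:hive,fct_users_created,PROD)")
            .upsert()
            .aspect(new DatasetProperties().setDescription("Emitted via RestEmitter smoke test"))
            .build();

        // Emit and block on the returned Future so the test fails loudly if GMS is unreachable.
        Future<MetadataWriteResponse> future = emitter.emit(mcpw, null);
        MetadataWriteResponse response = future.get();
        System.out.println("Emit succeeded: " + response.isSuccess());

        emitter.close();
    }
}
```

If the emit succeeds, the dataset should show up in the UI after search indexing catches up, which is what you want for verifying the end-to-end path.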
a
So this command, `datahub docker quickstart`, works for me and is what I've been using, but my project has its own compose file that I'd like to just add the new services to. Is there any way I could see the compose file that command uses? I don't see a link to it on that page anywhere.
Hmm, I just went searching through the project's GitHub and I think I found what I'm looking for here: https://github.com/linkedin/datahub/blob/master/docker/quickstart/docker-compose.quickstart.yml
o
Ah, my mistake, I thought that second page included the quickstart file. Yes, that file and the others in that directory are what you want. There are a few different ones depending on whether you want to use Neo4j for your graph and whether you are running on an M1 (ARM) machine.
The files the CLI uses can be found here: https://github.com/linkedin/datahub/blob/master/metadata-ingestion/src/datahub/cli/docker.py#L25-L45. Which ones it picks depends on your configuration options and system settings.
a
Awesome! Thanks for the info!