# all-things-deployment
w
Hi team - the standard Helm chart (which is great btw) includes ingress for the REST endpoint and web frontend, but not Kafka. I'm hitting some throughput issues with the REST endpoint for ingestion, so I wanted to try using Kafka as the sink for ingestion. I tried setting up ingress for Kafka but rapidly got to a stage where I couldn't get things to work. Does anyone have a working setup where the Kafka ingestion endpoint is available for use while deployed in a Kubernetes cluster? If you do - how did you set things up, and how well does it work?
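For what it's worth, the kind of recipe I'm aiming for is a sketch along these lines (the sink shape is how I read the datahub-kafka sink docs; all hostnames are placeholders and the source config is trimmed right down):

```yaml
# Sketch of an ingestion recipe using Kafka as the sink instead of the REST endpoint.
# All hostnames/ports below are placeholders for whatever the ingress would expose.
source:
  type: tableau
  config:
    connect_uri: https://tableau.example.com            # placeholder Tableau server

sink:
  type: datahub-kafka
  config:
    connection:
      bootstrap: kafka.example.com:9092                 # externally reachable broker (the hard part)
      schema_registry_url: http://schema-registry.example.com:8081
```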
b
Hi Alan! I'm not aware of folks doing this yet, but that's an exciting thing - you're breaking ground! We are happy to provide support / answer questions as required. Typically I've seen Kafka REST Proxy used in cases where one needs to front a Kafka cluster with a more accessible REST interface.
w
Thanks @big-carpet-38439 - if I were to do that, does the Kafka sink support that kind of setup? If it does, do you have an outline of the configuration you'd expect as part of a recipe to make that work? I'm not expecting everything to work the first time, but if you've got an idea of a good place to start, that would make any trial-and-error a little more efficient. I'm still new to the integration setup, so there's a lot I might get wrong as an assumption.
b
Unfortunately I think we'd need to build a new Ingestion sink to point towards a proxy - this is doable but we don't currently have it prioritized
w
understood.
b
Have you attempted to scale out your existing instance? What type of volume do you have coming at the service?
w
The Tableau source is pushing the REST endpoint hard enough that one or the other either maxes out its memory or times out.
We have a large Tableau instance, and it's not a stateful sync, so we're firing a lot of data at the REST API.
I take your point that horizontal scaling of the rest endpoint might be wise though. Would I just scale the GMS container?
b
Yeah, typically we recommend this:
• Scale out the GMS pods (say, to 3) - see the values sketch below
• Extract the Metadata Change Consumer job (standalone consumers) into a separate pod (easily possible via the Helm charts)
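As a rough sketch of the first point (assuming the datahub-gms subchart exposes replicaCount the way the other subcharts do - worth double-checking against your chart version):

```yaml
# values.yaml (sketch) - run more than one GMS replica behind the service
datahub-gms:
  replicaCount: 3
```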
w
I'm not sure I know what you mean by the latter. Have you got a link to the docs (or code) for the thing you're talking about?
b
So in Helm, you can simply flip this value, and it should deploy a standalone consumer pod
Instead of as part of GMS
w
ahhh - yes, I think I've effectively already done that by offloading the ingestion tasks to Airflow
b
This will deploy the message consumer which keeps our indexes updated
Somewhat tangential to ingestion
w
oh I get you 👍
@big-carpet-38439 - do I understand correctly that you mean this value in the Helm chart: https://github.com/acryldata/datahub-helm/blob/master/charts/datahub/values.yaml#L41 - the one for
datahub-ingestion-cron?
b
the one for standalone consumers enabled
L101
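i.e. roughly this in values.yaml - the exact key name is worth verifying against the chart version you're on:

```yaml
# values.yaml (sketch) - move the MAE/MCE consumers out of GMS into standalone pods
global:
  datahub_standalone_consumers_enabled: true
```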
w
gotcha
b
this will effectively split the consumers we need in DataHub into a separate k8s deployment
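And once that's on, the consumer subcharts can be sized on their own - as a sketch (subchart names taken from the datahub-helm repo, replica counts just illustrative):

```yaml
# values.yaml (sketch) - size the standalone consumers independently of GMS
datahub-mae-consumer:
  replicaCount: 2
datahub-mce-consumer:
  replicaCount: 2
```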
w
I'll enable that and see if it helps. Otherwise I'll focus on the Tableau source. Thanks for being so responsive!
b
Okay, wonderful! Definitely want to make sure you can scale this thing - we've actually had to do this ourselves quite a bit, so we can find some more pointers if you need them