# give-feedback
a
Hi Airbyte team. Newbie here. I have been struggling a lot to set up OSS Airbyte on k8s using the official helm chart: the docs seem outdated and mostly focus on the cloud product, devoting little to no explanation to the OSS variant. I know Airbyte is still in its early stages, but focusing on docs would be great. Shout out to all maintainers who devote their time to this project πŸ™‡
u
Can you confirm you’re following the https://docs.airbyte.com/deploying-airbyte/ instructions, @Aldo Orozco?
a
I am!
To add more color to my comment: the basic setup works fine. However, when I try to set up things like auth, or an ingress that connects to a Cloud DNS (GCP) record, nothing is mentioned anywhere, and I end up digging through the helm chart's values.yaml to decipher which fields might help.
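For example, this is the kind of thing I've ended up guessing at. Rough sketch only; I'm not sure these field names are right for every chart version, and the host/IP names are placeholders:
```yaml
# Pieced together from the chart's values.yaml -- field names may differ
# between chart versions, so double-check against the version you deploy.
webapp:
  ingress:
    enabled: true
    className: gce                    # or nginx, etc., depending on your controller
    annotations:
      kubernetes.io/ingress.global-static-ip-name: airbyte-ip   # hypothetical reserved IP
    hosts:
      - host: airbyte.example.com     # the host my Cloud DNS record points at
        paths:
          - path: /
            pathType: Prefix
```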
j
I have spent a lot of time with ingress in Airbyte OSS on Kubernetes, and that is basically out of scope of what the Airbyte helm chart does, which is probably why the documentation there is light. You should disable ingress in the helm chart and wrap their chart in your own ingress resources.
We wrap Airbyte's helm chart in our own chart so we can do whatever we want with ingress (and secrets injection)
βœ… 1
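j
Roughly like this, if it helps. Untested sketch: the hostname is a placeholder, and the webapp service name/port depend on your release name and chart version:
```yaml
# templates/ingress.yaml in a thin wrapper chart that declares the Airbyte
# chart as a dependency and disables the subchart's built-in ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: airbyte
  annotations:
    kubernetes.io/ingress.class: nginx          # whatever controller you run
spec:
  rules:
    - host: airbyte.example.com                 # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: airbyte-webapp            # check `kubectl get svc`; usually prefixed by release name
                port:
                  number: 80
```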
j
I agree that Joey's solution of wrapping the chart is probably the cleanest option (or using Terraform for the infra side and Helm for the deployment). We use a mostly stock chart for our GKE deployment, but it requires some pretty hacky annotations values to generate a native HTTP/S LB, enable Cloud IAP (which we're using for auth), and do things like change the default timeouts/certificates/static IP assignment by generating your own FrontendConfig and BackendConfig and injecting them (rough sketch below). (I mostly wanted to see if it was possible.)
The reality is that it's a fair amount of work to get right, and it tends to be more fragile as Google and Airbyte both change things over time. I'm not sure how much of that is down to our specific GCP environment, which has a lot of intersecting components: GKE Autopilot (private cluster), Shared VPC from a host project, IAP for auth, Cloud NAT for a stable outbound IP, a Google-managed cert, customized LB timeouts, Secrets Manager, Cloud SQL (also private), etc. Happy to compare notes, but just flagging that there are a lot of possible combinations depending on your setup. πŸ™‚
I do still think it would be useful to have at least a basic GKE native LB ingress case supported in the docs, even if it doesn't cover 100% of the corner cases, and then maybe it can be expanded over time. Probably the same for EKS/ELB. I imagine those are the platforms most people are going to reach for, so a good quickstart config to build on top of would be nice.
πŸ™ 1
nod 1
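j
For reference, the FrontendConfig/BackendConfig pieces look roughly like this. Treat it as a hedged sketch: the names, the OAuth secret, and the SSL policy are placeholders, and exact fields can shift between GKE versions:
```yaml
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: airbyte-frontend
spec:
  redirectToHttps:
    enabled: true
  sslPolicy: my-ssl-policy            # pre-created SSL policy (hypothetical name)
---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: airbyte-backend
spec:
  timeoutSec: 1800                    # raise the 30s LB default so long-running requests survive
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: airbyte-iap-oauth   # k8s Secret holding the OAuth client_id/client_secret
# Wired up via annotations (injected through the chart's ingress/service values):
#   Ingress:  networking.gke.io/v1beta1.FrontendConfig: airbyte-frontend
#             kubernetes.io/ingress.global-static-ip-name: airbyte-static-ip
#             networking.gke.io/managed-certificates: airbyte-cert
#   Service:  cloud.google.com/backend-config: '{"default": "airbyte-backend"}'
```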
j
Yup, you can also just not manage ingress per application and instead use shared ingress resources you manage centrally with Terraform. We do a little of both, but we are moving toward a pattern where applications aren't deployed with their own ingress and ingress is managed as core infrastructure (something like the sketch below).
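Shape-wise it's just one Ingress fronting several apps. Hosts and service names here are made up, and in practice we render this from Terraform rather than applying the YAML by hand:
```yaml
# One centrally managed Ingress routing to multiple apps by host.
# Note: a plain Ingress can only reference backend Services in its own
# namespace, so this lives alongside the apps it fronts.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shared-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: airbyte.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: airbyte-webapp
                port:
                  number: 80
    - host: otherapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: otherapp
                port:
                  number: 80
```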
j
@Joey Benamy Yeah, I vacillate between application deployments being "self-contained" versus thinking more in terms of global infrastructure. When we moved to Shared VPC, I shifted more into the infra camp just because there are so many things that app-specific projects shouldn't necessarily have access to or have provisioned in-project. But then sometimes Google is funny and only allows certain resources to connect in-project, throwing a wrench in the whole thing. So I guess I'll have to keep compromising with myself πŸ˜‚
j
Yup, the constant battle I face in devops
πŸ’― 1