# feedback-and-requests
I'm thinking instead of attempting the side-car, to deploy the `cloud-proxy` as its own service perhaps?
yes that's probably the easiest way to do so
if the cloud proxy is deployed in the same namespace under a service, the worker pods should be able to access the proxy service via the kube service name
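That service-name resolution can be sketched like this, assuming the proxy Deployment's pods are labeled `app: cloud-sql-proxy` and everything runs in a namespace called `airbyte` (all names and ports here are hypothetical, not from the conversation):

```yaml
# Hypothetical Service exposing the Cloud SQL proxy inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: cloud-sql-proxy
  namespace: airbyte
spec:
  selector:
    app: cloud-sql-proxy   # must match the proxy Deployment's pod labels
  ports:
    - port: 5432           # port workers connect to
      targetPort: 5432     # port the proxy container listens on
```

Worker pods in the same namespace could then reach the proxy at `cloud-sql-proxy:5432`; from another namespace the fully qualified name would be `cloud-sql-proxy.airbyte.svc.cluster.local`.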
Related question - is there a place to configure the namespace for Airbyte or will it deploy to the `default` namespace?
this is currently set to the default namespace
d'oh! right in front of my face. Thank you so much @Davin Chia (Airbyte)
you might need to also configure where jobs are launched - right now that is set to the default namespace
you can do so by adding a `KUBE_NAMESPACE` env var to the `.env` file
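For example, if jobs should launch in a namespace called `airbyte` (the namespace name is an assumption; the `KUBE_NAMESPACE` variable is the one named above):

```
# .env
KUBE_NAMESPACE=airbyte
```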
see this
Okay that I would have never found - thank you!
Of course. Lmk if you have any questions
Np! I'm super excited - been looking forward to setting this up for months
@Davin Chia (Airbyte) Are there recommended resource limits that you know of?
they should be sized according to your needs
in general at least 100GB of disk space and mid-sized nodes (4/8 cores) should perform ok
we haven't really pinned down our memory requirements yet
working on that as we roll out Cloud, so we appreciate any feedback there
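As a rough sketch, the "mid-sized nodes (4/8 cores)" guidance might translate into per-container settings along these lines (the numbers are illustrative, not an Airbyte recommendation, and memory in particular is still being pinned down per the message above):

```yaml
# Illustrative only -- tune to your actual workload.
resources:
  requests:
    cpu: "1"        # guaranteed share the scheduler reserves
    memory: 2Gi
  limits:
    cpu: "2"        # hard ceiling before throttling
    memory: 4Gi     # hard ceiling before OOM-kill
```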
Yeah glad to help! I'll let you know what config works for us and if we run into any issues. Thanks again!
Hi @Davin Chia (Airbyte) - first issue I've run across
```
unable to recognize "stable-with-resource-limits/kustomization.yaml": no matches for kind "Kustomization" in version "kustomize.config.k8s.io/v1beta1"
```
Oh - didn't read closely. Sorry
I'm not used to using `-k`
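For anyone else hitting the error above: the directory is a kustomize overlay, so it's applied with `kubectl`'s built-in kustomize support (`-k`) rather than `-f`. A sketch, using the path from the error message:

```
kubectl apply -k stable-with-resource-limits/
```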
however `airbyte-server`, `airbyte-scheduler`, and `airbyte-db` are stuck in `Pending` state after 3m
🤔 insufficient CPU - I will dig into this some more
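A couple of standard ways to confirm that diagnosis (the pod name here is a guess; substitute the actual `Pending` pod):

```
# The Events section will show something like
# "0/3 nodes are available: 3 Insufficient cpu."
kubectl describe pod airbyte-server -n default

# Compare against what each node has already committed
kubectl describe nodes | grep -A 5 "Allocated resources"
```

If the nodes are simply too small, either raise the node size or lower the resource requests in the overlay.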