# kubernetes
g
Hi all. I've installed airbyte via the helm chart, only supplying ingress host+tls+annotation values, and whenever i set up the first (mysql) source I get a
FileNotFoundException: /root/.kube/config
error in the second worker log. I don't have admin control over the GKE cluster I'm using, so are there any settings I should ask ops to check? (i.e. maybe something around mounting the API server details??)
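For context, I guess what ops would need to confirm is that the worker pod actually gets the in-cluster service account credentials mounted, since that's what the Java kubernetes client normally uses instead of a kubeconfig. A rough check (the namespace airbyte and deployment name airbyte-worker are just my guesses at the chart defaults):
# look for the mounted in-cluster credentials inside the worker pod;
# expect to see ca.crt, namespace and token listed
kubectl exec -n airbyte deploy/airbyte-worker -- ls /var/run/secrets/kubernetes.io/serviceaccount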
👀 1
m
Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Operation: [create]  for kind: [Pod]  with name: [source-mysql-sync-d9bb1063-90dd-4773-929c-af6b30043648-0-igsxl]  in namespace: [airbyte]  failed.
can you check Airbyte was created with the right permissions?
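Something like this should show whether the chart created the RBAC objects and whether that account is actually allowed to manage pods (namespace airbyte and service account airbyte-admin are assumptions here, adjust to whatever kubectl get sa shows for your release):
# confirm the service account + RBAC objects exist in the release namespace
kubectl get serviceaccount,role,rolebinding -n airbyte
# check the account the workers run as can create and read pods (requires impersonation rights)
kubectl auth can-i create pods -n airbyte --as=system:serviceaccount:airbyte:airbyte-admin
kubectl auth can-i get pods -n airbyte --as=system:serviceaccount:airbyte:airbyte-admin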
g
(I forgot to reply, derp me) This issue fixed itself, and I can only suspect a propagation delay somewhere. It was probably only ~3-5 minutes between installing and trying to create the new source a couple of times. I captured the log shown above and then signed off for the evening. The next morning I followed the stack trace to the fabric8 client getting an 'unauthorised' response from the create API call (and then trying to refresh the token by getting provider details from the non-existent kubeconfig). I double checked that the service account + role setup looked good (the one created by the chart), and then tried to set up the source again to see what I had missed the first couple of times. It worked perfectly .... 🙄 I then installed Airbyte in our production cluster (only changing the postgresql/externaldatabase values), waited ~10-15 minutes after the install, and setting up a new source worked the first time 🎉 Thanks for having a look @Marcos Marx (Airbyte), sorry it was just an impatient user :keanu-thanks: (Honestly though, this is the future, computers are fast, they should be ready when I'm ready 😛)
@Marcos Marx (Airbyte) Aaaand now it's back again 🤪 I wonder if it's this issue going on: https://github.com/fabric8io/kubernetes-client/pull/3445 After deleting the worker pod and letting the deployment recreate it, it's now creating the pod for connection checking. It's possible I'm misremembering the time between install & source setup in the test environment 😳
update: it's now a couple of hours later and still creating pods when i test ✅ i'll keep an eye on things and keep you updated; but for now i think it's just heisenbugging me
final update: i've been checking things regularly over the past few days and the problem has not returned. i did recreate (delete) the worker & podsweeper pods when the problem occurred the day after the initial install, and it has not occurred since. (i also recreated the temporal & webapp pods, but that did not fix the issue. i don't think they interact with the kube api)
p
Hello, just to let you know that I've been affected by the same bug with the latest chart (v0.35.46-alpha) and indeed, manually restarting the pods after the helm deployment was complete fixed it
I suspect a configmap is not yet available when the pods are starting
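For reference, the restart that worked for me was essentially just bouncing the deployments that talk to the kube API once the release had settled, roughly like this (deployment names assume a release called airbyte; the pod-sweeper one may not exist in every setup):
# recreate the worker (and pod-sweeper, if present) pods so they start
# after the service account / configmaps have fully propagated
kubectl rollout restart deployment airbyte-worker -n airbyte
kubectl rollout restart deployment airbyte-pod-sweeper -n airbyte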
m
Hi, I just faced this issue too. Airbyte reported
io.airbyte.workers.WorkerException: Normalization Failed.
.
.
Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Operation: [delete]  for kind: [Pod]  with name: [null]  in namespace: [airbyte]  failed.
.
.
Suppressed: io.fabric8.kubernetes.client.KubernetesClientException: Operation: [get]  for kind: [Pod]  with name: [normalization-sync-22-0-mbsvh]  in namespace: [airbyte]  failed.
.
.
Caused by: java.io.FileNotFoundException: /root/.kube/config (No such file or directory)
But a connection scheduled for later worked fine. I'm worried this problem might return.
👀 1
a
guys so i am facing this issue now
i just did a plain fresh install of airbyte using helm + argocd + digital ocean
as soon as i try to add ANY connection i'm facing this error
could it be related to
f
@ahsen m I am facing the same issue. I can deploy using helm, however I can't create any connection. When I deploy using kustomize instead, it works:
kubectl apply -k kube/overlays/stable