# ask-community-for-troubleshooting
d
Airbyte deployed to K8s error, seems like it’s getting a container init error for the normalization pod. Log line before it times out:
```
Log4j2Appender says: Attempting to start pod = normalization-normalize-541-0-pelpm for airbyte/normalization:0.2.23 with resources io.airbyte.config.ResourceRequirements@fa284d9[cpuRequest=,cpuLimit=,memoryRequest=,memoryLimit=]
```
When I run `kubectl get pods`, I see that the above pod is in `Init:Error`. Describing that pod shows the init container's state:
```
State:          Terminated
  Reason:       Error
  Exit Code:    1
```
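A way to dig further (a sketch, not confirmed in this thread: the pod name is taken from the log line above, the `data` namespace from the later timeout message, and the init container name `init` is an assumption to verify against your own `describe` output):

```shell
# Hypothetical values taken from the logs in this thread; substitute your own.
NS=data
POD=normalization-normalize-541-0-pelpm

# List the pod's init containers so we know which one to pull logs from.
kubectl -n "$NS" get pod "$POD" -o jsonpath='{.spec.initContainers[*].name}'

# Fetch the failing init container's logs (replace "init" with the name
# printed above if it differs); the cause of the exit code 1 is usually here.
kubectl -n "$NS" logs "$POD" -c init
```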
Not sure where I should go from here as far as investigating the normalization pods that get stuck initializing 🤷 I rolled back the Helm chart to `0.40.35`, but it's still erroring out.
u
Hey Dusty! I'll look into this more and probably ask for feedback from my team, but is there an Istio sidecar (or any other container) running alongside the bootloader container in the bootloader pod in your setup?
d
Not 100% sure what you’re asking, but it sounds like you’re asking whether there are other containers in the Airbyte bootloader pod. The only container listed when I describe that pod is `airbyte-bootloader-container`.
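For completeness, a quick way to list every container in the bootloader pod (a sketch; the `airbyte` namespace and the `airbyte=bootloader` label are assumptions that depend on how the Helm release was installed):

```shell
# Print each matching pod's name followed by its container names.
# Namespace and label selector are assumptions; adjust to your release.
kubectl -n airbyte get pods -l airbyte=bootloader \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].name}{"\n"}{end}'
```

If a service mesh sidecar were injected, it would show up here next to `airbyte-bootloader-container`.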
I am seeing this error in the Temporal pod:
```
{"level":"error","ts":"2022-11-07T13:44:22.795Z","msg":"Internal service error","service":"history","error":"consistent query buffer is full, cannot accept new consistent queries","wf-id":"connection_manager_c08fa6c4-d428-46a1-a43d-4534ef15bf0b","wf-namespace-id":"74c3b8aa-5a76-4af8-a777-b378f19bf995","logging-call-at":"handler.go:1788","stacktrace":"go.temporal.io/server/common/log/loggerimpl.(*loggerImpl)
```
I'm also seeing this timeout from the worker:
```
io.airbyte.workers.exception.WorkerException: Timed out waiting for [300000] milliseconds for [Pod] with name:[source-postgres-read-547-0-gzpwy] in namespace [data].
```
Anything else I can do to troubleshoot? Not seeing anything helpful in the logs 😞
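Two generic checks that often explain pods timing out at startup (a sketch; the `data` namespace comes from the timeout message above, and `kubectl top` requires metrics-server to be installed):

```shell
# Recent events in the namespace often show why a pod never started
# (image pull failures, scheduling problems, OOM kills, etc.).
kubectl -n data get events --sort-by=.lastTimestamp | tail -n 30

# Check whether nodes are under resource pressure, which can delay
# pod scheduling past Airbyte's 300000 ms (5 minute) wait.
kubectl top nodes
```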
u
Hey Dusty, sorry for the wait - we've been working through a large amount of messages due to Hacktoberfest! I'll get feedback on this from the team today, but our K8s expert is out till next week. If we can't figure this out today, I'll ask them on Monday!
d
No worries Nataly, it seems to be an intermittent issue at this point.
s
@Dusty Shapiro were you able to find a workaround for this error?
```
consistent query buffer is full, cannot accept new consistent queries
```
I am unable to trigger any syncs on Airbyte due to this error.