# contributing-to-airbyte
j
@[DEPRECATED] Marcos Marx re: temporal issues. Do you have any examples of what you’ve been debugging or what you need to look at? From what I’ve seen, 99% of temporal errors we’ve seen have involved the underlying instance running out of space or other VM-level issues. Are you seeing something different where temporal itself is running into issues?
u
https://airbytehq.slack.com/archives/C01MFR03D5W/p1626892883126300?thread_ts=1626892009.125000&cid=C01MFR03D5W I think @gunu is having some issues with temporal, with the instance becoming unavailable
j
I don’t know of a single “deadline exceeded” problem where the machine wasn’t out of disk space or memory, or where the postgres instance wasn’t under a lot of load.
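(As a quick sketch, these are the kinds of checks worth running first on the VM when a “deadline exceeded” shows up — the threshold values below are illustrative assumptions, not tuned recommendations:)

```shell
#!/bin/sh
# Quick health check for the VM running Airbyte/Temporal.
# Flags low disk space and low available memory, the two conditions
# behind most "deadline exceeded" errors seen in practice.
# Thresholds (90% disk, 512MB memory) are illustrative only.

# Percent of the root filesystem in use (strip the trailing %).
disk_used=$(df / | awk 'NR==2 {gsub("%",""); print $5}')

# Available memory in MB (Linux procps "free" output).
mem_avail=$(free -m | awk '/^Mem:/ {print $7}')

if [ "$disk_used" -gt 90 ]; then
  echo "WARNING: root filesystem is ${disk_used}% full"
fi

if [ "$mem_avail" -lt 512 ]; then
  echo "WARNING: only ${mem_avail}MB of memory available"
fi
```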
u
the postgres instance of temporal or airbyte db?
u
the postgres db that temporal is using (which is typically the airbyte db)
u
if that’s overloaded we’ve seen problems with temporal, but usually it’s on the same machine so we’re still just running into resource constraints
u
yeah I imagine (and more often than not) it’s a resourcing issue. but isn’t the point of temporal/scheduling to schedule and avoid that?
u
keeping everything the same, would the k8s deployment not run into this specific issue?
m
this is on a single node though
u
(in most cases at least)
u
I think if disk space / memory are underallocated for the db pod or temporal pods on kubernetes we could run into a similar issue
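(For illustration, explicit requests/limits on the db pod would look something like this — the values here are placeholder assumptions, not recommended minimums:)

```yaml
# Illustrative resource settings for the Airbyte Postgres pod
# (the db Temporal persists to). Values are assumptions, not
# tested recommendations for a real deployment.
resources:
  requests:
    memory: "1Gi"
    cpu: "500m"
  limits:
    memory: "2Gi"
    cpu: "1"
```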
u
in the near future temporal will help schedule across pods / nodes more easily, but this class of problems is more about the minimum resource requirements for temporal and its persistence (which is the airbyte postgres db)
u
Jared, maybe the takeaway here is we can do better with minimum resource specification for temporal on our Docker deploys
u
@Davin Chia (Airbyte) agreed both for Docker and Kube
l
Mostly in this thread I was curious if there were temporal-specific problems or debugging that we need to do outside of this class of issues.
u
Temporal is pretty robust so my sense is same as yours in that it's probably going to be our fault ha ha