Euan Blackledge
02/10/2025, 3:14 PM

kapa.ai
02/10/2025, 3:15 PM
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "120"
```
This increases the timeout to 120 seconds. You can adjust this value as needed.

2. Adjust global timeouts: According to [this GitHub discussion](https://github.com/airbytehq/airbyte/discussions/45946), you can increase timeouts by modifying the values.yaml file:

```yaml
global:
  env_vars:
    HTTP_IDLE_TIMEOUT: 1800s
webapp:
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: internal
      nginx.ingress.kubernetes.io/proxy-body-size: 16m
      nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
      nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"
```
3. For CDC (Change Data Capture) connections: If you're using CDC, you can adjust the initial waiting time. As mentioned in the [Postgres troubleshooting documentation](https://docs.airbyte.com/integrations/sources/postgres/postgres-troubleshooting#advanced-setting-up-initial-cdc-waiting-time), the default wait time is 5 minutes (300 seconds), but you can increase it up to 1200 seconds if needed.

Remember that while increasing timeouts can help resolve the immediate issue, it's also important to investigate why your pods are taking so long to initialize. This could be due to resource constraints, network issues, or other factors that might need addressing for optimal performance.

Euan Blackledge
02/10/2025, 3:19 PM
```
Feb 10, 2025 @ 15:08:17.987 2025-02-10 15:08:17,985 [main] INFO i.a.c.ConnectorWatcher(getConnectorOutputStream):153 - Output file jobOutput.json found
Feb 10, 2025 @ 15:08:17.985 2025-02-10 15:08:17,984 [main] INFO i.a.c.ConnectorWatcher(processConnectorOutput):116 - Connector exited, processing output
Feb 10, 2025 @ 15:05:49.598 2025-02-10 15:05:49,597 [pool-4-thread-1] INFO i.a.c.i.LineGobbler(voidCall):166 - ----- START CHECK -----
Feb 10, 2025 @ 15:05:49.598 2025-02-10 15:05:49,598 [pool-4-thread-1] INFO i.a.c.i.LineGobbler(voidCall):166 -
Feb 10, 2025 @ 15:05:49.375 2025-02-10 15:05:49,374 [pool-4-thread-1] INFO i.a.c.i.LineGobbler(voidCall):166 -
Feb 10, 2025 @ 15:05:42.912 2025-02-10 15:05:42,907 [main] INFO i.m.c.e.DefaultEnvironment(<init>):168 - Established active environments: [worker-v2, control-plane, oss]
```
kapa.ai
02/10/2025, 3:19 PM

kapa.ai
02/10/2025, 3:19 PM
```yaml
server:
  extraEnvs:
    - name: HTTP_IDLE_TIMEOUT
      value: 20m
    - name: READ_TIMEOUT
      value: 30m
```
This sets the idle timeout to 20 minutes and the read timeout to 30 minutes, which should be more than enough for your 3-minute check.

2. Adjust the ingress proxy timeout: If you're using Kubernetes with an ingress, you can modify the ingress configuration to increase the timeout. As described in [this GitHub issue](https://github.com/airbytehq/airbyte/issues/45156), you can add the following annotation to your ingress definition:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
```

This sets the proxy read timeout to 300 seconds (5 minutes), which should cover your 3-minute check duration. Remember to restart the relevant Airbyte services after making these changes for them to take effect.

It's also worth noting that while these changes will allow the check to complete without timing out, a 3-minute check duration is quite long. You might want to investigate why the check is taking so long and see if there are any optimizations you can make on the source side to reduce this time.

Euan Blackledge
02/10/2025, 3:22 PM

kapa.ai
02/10/2025, 3:22 PM

kapa.ai
02/10/2025, 3:23 PM
```
6m54s   Normal   Pulled   Pod/airbyte-abctl-workload-launcher-6cf787c454-8bzzc   Successfully pulled image "airbyte/workload-launcher:0.64.3" in 53.619s (4m33.783s including waiting)
```
This suggests that image pulling could be one factor in your 3-minute startup time.

2. Another [GitHub issue](https://github.com/airbytehq/airbyte/issues/44443) reported pod creation failing with timeouts. The logs showed that the Kubernetes client was experiencing timeouts when trying to create pods:

```
Caused by: java.io.IOException: timeout
    at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.waitForResult(OperationSupport.java:504)
```
This indicates that there might be underlying Kubernetes issues causing delays in pod creation.

3. In a [different GitHub issue](https://github.com/airbytehq/airbyte/issues/35501), there were reports of workloads taking a long time to start, with logs showing significant time gaps between different stages of the startup process:

```
2024-02-21 17:37:18 platform > Creating workload d40d3c71-53bc-4abe-bfb6-5b9c1344c345_8629198_4_sync
2024-02-21 17:37:58 INFO i.a.w.l.c.WorkloadApiClient(updateStatusToLaunched):54 - Attempting to update workload: d40d3c71-53bc-4abe-bfb6-5b9c1344c345_8629198_4_sync to LAUNCHED.
```
This 40-second gap between creation and launch could contribute to longer overall startup times.

To investigate why your pods are taking 3 minutes to start, you might want to:
1. Check if image pulling is taking a long time. Consider using a local registry or pre-pulling images to reduce this time (see the sketch below).
2. Investigate if there are resource constraints (CPU, memory) in your Kubernetes cluster that might be delaying pod scheduling and startup.
3. Look for any network issues that might be slowing down communication between Kubernetes components.
4. Examine the logs in more detail to see if there are specific stages of the startup process that are taking longer than expected.

If you need more specific guidance, it would be helpful to see more detailed logs of the entire startup process for your workload pods.
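For point 1, one common way to take image pulls off the critical path is a small pre-puller DaemonSet that makes every node cache the image ahead of time. This is a minimal, illustrative sketch and not part of the Airbyte chart: the image tag is copied from your event above, the `prepull` init container assumes the image ships a shell so it can exit immediately after the pull, and the names are placeholders.

```yaml
# Hypothetical pre-puller DaemonSet: each node pulls the image once, then idles on a tiny pause container.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: airbyte-image-prepuller
spec:
  selector:
    matchLabels:
      app: airbyte-image-prepuller
  template:
    metadata:
      labels:
        app: airbyte-image-prepuller
    spec:
      initContainers:
        - name: prepull-workload-launcher
          image: airbyte/workload-launcher:0.64.3   # example tag taken from the event above
          command: ["sh", "-c", "exit 0"]           # assumes a shell exists; the pull itself is the point
      containers:
        - name: pause
          image: registry.k8s.io/pause:3.9          # minimal long-running container so the DaemonSet pods stay Ready
```

With the image already cached on each node, the 53s pull (and the 4m33s of waiting) in your event should drop out of the startup path. A private registry close to the cluster achieves a similar result without the extra DaemonSet.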