# feedback-and-requests
Hi Team, we triggered two jobs with Google Ads (airbyte/source-google-ads:0.1.20) as the source and Snowflake (airbyte/destination-snowflake:0.3.14) as the destination. All of a sudden the Airbyte server crashed with the error "DEADLINE_EXCEEDED: deadline exceeded after 69.999905719s". We have seen the related issues on GitHub and are already running the suggested configuration. Everything worked fine on Airbyte 0.29.22, but we have been hitting this issue since upgrading to 0.35.15.
Airbyte version: 0.35.15-alpha
Server details: Ubuntu 20.04.3 LTS, 16 GB RAM, 4-core CPU, 200 GB disk
I'm adding the job logs for better understanding:
2022-02-07 09:24:27 WARN i.t.i.r.GrpcSyncRetryer(retry):56 - Retrying after failure
io.grpc.StatusRuntimeException: DEADLINE_EXCEEDED: deadline exceeded after 69.999905719s. [closed=[], open=[[remote_addr=airbyte-temporal/192.168.144.6:7233]]]
    at io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:262) ~[grpc-stub-1.42.1.jar:1.42.1]
    at io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:243) ~[grpc-stub-1.42.1.jar:1.42.1]
    at io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:156) ~[grpc-stub-1.42.1.jar:1.42.1]
    at io.temporal.api.workflowservice.v1.WorkflowServiceGrpc$WorkflowServiceBlockingStub.getWorkflowExecutionHistory(WorkflowServiceGrpc.java:2642) ~[temporal-serviceclient-1.6.0.jar:?]
    at io.temporal.internal.client.WorkflowClientLongPollHelper.lambda$getInstanceCloseEvent$0(WorkflowClientLongPollHelper.java:143) ~[temporal-sdk-1.6.0.jar:?]
    at io.temporal.internal.retryer.GrpcSyncRetryer.retry(GrpcSyncRetryer.java:61) ~[temporal-serviceclient-1.6.0.jar:?]
    at io.temporal.internal.retryer.GrpcRetryer.retryWithResult(GrpcRetryer.java:51) ~[temporal-serviceclient-1.6.0.jar:?]
    at io.temporal.internal.client.WorkflowClientLongPollHelper.getInstanceCloseEvent(WorkflowClientLongPollHelper.java:131) ~[temporal-sdk-1.6.0.jar:?]
    at io.temporal.internal.client.WorkflowClientLongPollHelper.getWorkflowExecutionResult(WorkflowClientLongPollHelper.java:72) ~[temporal-sdk-1.6.0.jar:?]
    at io.temporal.internal.client.RootWorkflowClientInvoker.getResult(RootWorkflowClientInvoker.java:93) ~[temporal-sdk-1.6.0.jar:?]
    at io.temporal.internal.sync.WorkflowStubImpl.getResult(WorkflowStubImpl.java:243) ~[temporal-sdk-1.6.0.jar:?]
    at io.temporal.internal.sync.WorkflowStubImpl.getResult(WorkflowStubImpl.java:225) ~[temporal-sdk-1.6.0.jar:?]
    at io.temporal.internal.sync.WorkflowInvocationHandler$SyncWorkflowInvocationHandler.startWorkflow(WorkflowInvocationHandler.java:315) ~[temporal-sdk-1.6.0.jar:?]
    at io.temporal.internal.sync.WorkflowInvocationHandler$SyncWorkflowInvocationHandler.invoke(WorkflowInvocationHandler.java:270) ~[temporal-sdk-1.6.0.jar:?]
    at io.temporal.internal.sync.WorkflowInvocationHandler.invoke(WorkflowInvocationHandler.java:178) ~[temporal-sdk-1.6.0.jar:?]
    at jdk.proxy2.$Proxy40.run(Unknown Source) ~[?:?]
    at io.airbyte.workers.temporal.TemporalClient.lambda$submitSync$3(TemporalClient.java:148) ~[io.airbyte-airbyte-workers-0.35.15-alpha.jar:?]
    at io.airbyte.workers.temporal.TemporalClient.execute(TemporalClient.java:439) ~[io.airbyte-airbyte-workers-0.35.15-alpha.jar:?]
    at io.airbyte.workers.temporal.TemporalClient.submitSync(TemporalClient.java:147) ~[io.airbyte-airbyte-workers-0.35.15-alpha.jar:?]
    at io.airbyte.workers.worker_run.TemporalWorkerRunFactory.lambda$createSupplier$0(TemporalWorkerRunFactory.java:83) ~[io.airbyte-airbyte-workers-0.35.15-alpha.jar:?]
    at io.airbyte.workers.worker_run.WorkerRun.call(WorkerRun.java:51) [io.airbyte-airbyte-workers-0.35.15-alpha.jar:?]
    at io.airbyte.workers.worker_run.WorkerRun.call(WorkerRun.java:22) [io.airbyte-airbyte-workers-0.35.15-alpha.jar:?]
    at io.airbyte.commons.concurrency.LifecycledCallable.execute(LifecycledCallable.java:94) [io.airbyte-airbyte-commons-0.35.15-alpha.jar:?]
    at io.airbyte.commons.concurrency.LifecycledCallable.call(LifecycledCallable.java:78) [io.airbyte-airbyte-commons-0.35.15-alpha.jar:?]
    at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
    at java.lang.Thread.run(Thread.java:833) [?:?]
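The trace shows the worker's long-poll call to the Temporal frontend at airbyte-temporal:7233 timing out, so one thing worth checking alongside resources is whether the airbyte-temporal container was unhealthy or restarting at that time. Below is a minimal sketch of such checks, assuming a standard docker-compose deployment; only the airbyte-temporal container name appears in the log, and the timestamp is illustrative.

```sh
# Is the Temporal container running, and has Docker restarted it?
docker ps --filter name=airbyte-temporal
docker inspect -f 'restarts={{.RestartCount}} oom_killed={{.State.OOMKilled}}' airbyte-temporal

# Look for errors around the time of the DEADLINE_EXCEEDED (timestamp is illustrative)
docker logs --since 2022-02-07T09:20:00 airbyte-temporal 2>&1 | tail -n 200
```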
This looks to me like a resource issue. Do you mind increasing the resources and seeing if the error still happens?
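If resources are the suspect, the Ubuntu host usually leaves evidence of memory pressure. A hedged sketch of checks under the same docker-compose assumption (the airbyte-server container name is an assumption):

```sh
# Overall headroom on the host
free -h
df -h /

# Did the kernel OOM killer terminate anything recently?
sudo dmesg -T | grep -iE 'killed process|out of memory' | tail -n 20

# Was the Airbyte server container itself OOM-killed? (container name is an assumption)
docker inspect -f '{{.Name}} oom_killed={{.State.OOMKilled}} exit_code={{.State.ExitCode}}' airbyte-server
```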
@Harshith (Airbyte) thanks for the response. We increased the resources, but we are wondering why this is happening after the upgrade; everything worked fine before it, and the cost is now increasing significantly because of this.
Hey, here are a couple of things that would make this much easier to dive into: 1. Having them run `docker stats` so we can see how much each container is using (a quick sketch follows below). 2. Knowing how many connections they have in total and which connectors they are using.
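For the first point, a one-shot snapshot is usually enough. A minimal sketch, assuming the Docker deployment described above (the format string and the connector-image filter are illustrative, not the only way to do it):

```sh
# One-shot snapshot of per-container CPU and memory usage
docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.MemPerc}}"

# Which connector containers are currently running (helps answer point 2)
docker ps --format '{{.Image}}' | grep -E 'source-|destination-' | sort | uniq -c
```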