# ask-ai
@kapa.ai Do these logs look familiar?
```
2025-02-07 00:22:09,395 [Activity Executor taskQueue="CHECK_CONNECTION", namespace="default": 15]       WARN    i.t.i.w.ActivityWorker$TaskHandlerImpl(logExceptionDuringResultReporting):417 - Failure during reporting of activity result to the server. ActivityId = 2c63a241-9b17-350d-8c17-c9e40c99a354, ActivityType = RunWithWorkload, WorkflowId=check_4409_source, WorkflowType=CheckConnectionWorkflow, RunId=a89e8c8e-7a17-4183-b09c-30cb0efe3666
io.grpc.StatusRuntimeException: NOT_FOUND: workflow execution already completed
        at io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:268)
        at io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:249)
        at io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:167)
        at io.temporal.api.workflowservice.v1.WorkflowServiceGrpc$WorkflowServiceBlockingStub.respondActivityTaskCompleted(WorkflowServiceGrpc.java:4447)
        at io.temporal.internal.worker.ActivityWorker$TaskHandlerImpl.lambda$sendReply$0(ActivityWorker.java:353)
        at io.temporal.internal.retryer.GrpcRetryer.lambda$retry$0(GrpcRetryer.java:52)
        at io.temporal.internal.retryer.GrpcSyncRetryer.retry(GrpcSyncRetryer.java:69)
        at io.temporal.internal.retryer.GrpcRetryer.retryWithResult(GrpcRetryer.java:60)
        at io.temporal.internal.retryer.GrpcRetryer.retry(GrpcRetryer.java:50)
        at io.temporal.internal.worker.ActivityWorker$TaskHandlerImpl.sendReply(ActivityWorker.java:348)
        at io.temporal.internal.worker.ActivityWorker$TaskHandlerImpl.handleActivity(ActivityWorker.java:301)
        at io.temporal.internal.worker.ActivityWorker$TaskHandlerImpl.handle(ActivityWorker.java:254)
        at io.temporal.internal.worker.ActivityWorker$TaskHandlerImpl.handle(ActivityWorker.java:217)
        at io.temporal.internal.worker.PollTaskExecutor.lambda$process$0(PollTaskExecutor.java:93)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
        at java.base/java.lang.Thread.run(Thread.java:1583)
```
A message from kapa.ai
Yes, these logs look familiar and are related to a known issue in Airbyte. The error message "NOT_FOUND: workflow execution already completed" is commonly seen in Airbyte logs when there is a timing issue between workflow completion and activity reporting. It typically occurs when an activity tries to report its result back to the Temporal server after the workflow it belongs to has already completed. It is generally not a critical error and does not necessarily indicate a problem with the actual operation (in this case, a connection check).

This type of error has been reported and discussed in several Airbyte issues and forum posts:

1. [GitHub issue: Infra error for CheckConnectionWorkflow](https://github.com/airbytehq/airbyte/issues/26673)
2. [Forum post: All connectors fails test without clear error](https://discuss.airbyte.io/t/3970)
3. [GitHub issue: Source S3: Caused by: io.temporal.failure.TimeoutFailure](https://github.com/airbytehq/airbyte/issues/25955)

In many cases, this error does not prevent the actual operation from completing successfully. However, if you are experiencing persistent issues or failures in your Airbyte operations, you might want to consider the following:

1. Check whether your Airbyte instance has sufficient resources; resource constraints can sometimes lead to timing issues.
2. Ensure that your Airbyte version is up to date, as some of these issues may have been addressed in newer versions.
3. If the problem persists, you might need to restart the Airbyte services, particularly the Temporal service.

If you continue to face issues, it would help to gather more context about when and how often this error occurs, and whether it affects your Airbyte operations. With that information, you can open a new issue on the Airbyte GitHub repository or seek further assistance on the Airbyte forum.
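To gather that "when and how often" context, a quick shell sketch like the one below can tally the benign reporting failures per day. This is an assumption-laden example: `worker.log` is a placeholder for wherever you capture the Airbyte worker's output, and the grep pattern is taken from the dated WARN line in the snippet above.

```shell
# Count how often the (usually benign) "Failure during reporting of activity
# result" warning appears per day. "worker.log" is a placeholder path;
# substitute your actual worker log file or e.g. `kubectl logs` output.
grep 'Failure during reporting of activity result' worker.log \
  | cut -d' ' -f1 \
  | sort \
  | uniq -c
```

Each output line is a count followed by a date, which makes it easy to see whether the error is a steady background hum or spiking alongside real sync failures.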
@kapa.ai I also see a lot of `Unspecified task queue kind` messages in the Temporal logs, like this:
```
{
  "level": "warn",
  "ts": "2025-02-07T00:28:14.756Z",
  "msg": "Unspecified task queue kind",
  "service": "frontend",
  "wf-task-queue-name": "GET_SPEC",
  "wf-namespace": "default",
  "logging-call-at": "workflow_handler.go:3772"
}
```
A message from kapa.ai
Yes, the "Unspecified task queue kind" warning message you're seeing in the Temporal logs is a known issue in Airbyte. This warning is related to how Airbyte interacts with Temporal, the workflow engine it uses for orchestrating various tasks. This specific warning has been reported and discussed in several Airbyte issues: 1. [Github Issue: [temporal] "Unspecified task queue kind" warning from Temporal pods](https://github.com/airbytehq/airbyte/issues/48428) 2. [Github Discussion: temporal is keeping printing Unspecified task queue msg](https://github.com/airbytehq/airbyte/discussions/38587) The warning appears to be generated for various task queues used by Airbyte, such as GET_SPEC, CHECK_CONNECTION, SYNC, DISCOVER_SCHEMA, and others. It's important to note that while these warnings are frequent, they don't necessarily indicate a critical problem with your Airbyte installation or operations. According to the discussion, this warning is likely caused by Java clients that don't set the queue kind properly when interacting with Temporal. It's a known issue that has been addressed in newer versions of Temporal. While these warnings don't typically affect the functionality of Airbyte, they can make it difficult to monitor relevant information in the logs due to their frequency. Some potential steps you could consider: 1. Check your Airbyte version and consider upgrading if you're not on the latest version, as newer versions might include fixes or improvements related to this issue. 2. If you're using a custom Helm chart deployment, you might want to check if updating the Temporal version in the Helm chart could help. As mentioned in one of the issues: "Maybe bumping Temporal version in Airbyte helm chart would fix this." 3. If these warnings are causing issues with your log monitoring, you might need to adjust your log filtering to exclude or de-prioritize these specific warnings. 
If these warnings are causing significant problems for you or if you need further assistance, you may want to consider opening a new issue on the Airbyte GitHub repository with details about your specific setup and the impact of these warnings on your operations.
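As a minimal sketch of the log-filtering idea, the one-liner below drops the noisy warning while keeping every other line. `temporal.log` is a placeholder for however you capture the Temporal frontend's output; the deployment name in the commented `kubectl` variant is likewise an assumption about your install.

```shell
# Filter the noisy "Unspecified task queue kind" warnings out of captured
# Temporal logs. "temporal.log" is a placeholder path. On Kubernetes you
# might instead pipe live logs, e.g. (deployment name is an assumption):
#   kubectl logs deploy/airbyte-temporal | grep -v 'Unspecified task queue kind'
grep -v 'Unspecified task queue kind' temporal.log
```

This is only cosmetic, of course; it hides the warning from your eyes rather than stopping Temporal from emitting it, which is why upgrading Temporal (step 2 above) is the more durable fix.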