# ask-community-for-troubleshooting
h
Hi all, I need help. We recently started using Airbyte to move documents from several sources, such as Gong and Google Drive, to an S3 destination (in JSONL format). We now want to store the metadata of each transferred file by calling a separate internal service. After going through the documentation and talking to the Airbyte community bot, I understood that I needed to modify the CDK and create a new destination altogether. I made the necessary changes, built a new destination-s3 image, and loaded it into our local cluster with:

```
kind load docker-image airbyte/destination-s3:dev -n airbyte-abctl
```

Then, from the Airbyte UI on our local instance, we created a connection between Google Drive and S3. When we sync, a new pod comes up with my new destination-s3 image and the sync completes, but after checking multiple pods (server, worker, and even the destination) we don't see our code being triggered. After a little more research we also built a new image for the CDK, but the issue persists. We have reached out before as well and have been stuck on this for a while now. Can someone please help us figure it out from here? Cc: @Devesh Verma
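For reference, here is a minimal sketch of the build-and-load flow described above, assuming a local checkout of the airbyte monorepo with the custom connector under `airbyte-integrations/connectors/destination-s3` and the default cluster name that `abctl` creates; the path is illustrative and may differ in your setup:

```bash
# Build the custom connector image with the "dev" tag
# (path assumes the standard airbyte monorepo layout).
docker build airbyte-integrations/connectors/destination-s3 \
  -t airbyte/destination-s3:dev

# Load the image into the kind cluster that abctl created.
kind load docker-image airbyte/destination-s3:dev -n airbyte-abctl
```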
k
Based on the knowledge sources, I can see you're working with custom connector development for Airbyte. Let me help you troubleshoot this issue.

When developing custom connectors locally with `abctl`, there are a few important steps to follow after loading your custom image. According to the [connector development documentation](https://docs.airbyte.com/platform/contributing-to-airbyte/developing-locally#connector-contributions), after loading your image with `kind load docker-image`, you need to:

1. **Update the connector version in the Airbyte UI:**
   - Visit http://localhost:8000/ and log in
   - Click Workspace settings > Destinations
   - Click the edit button for your S3 destination
   - Update the version number to your docker image tag (default is `dev`)
   - Click `Change` to save the changes

2. **Clear the connector specification cache.** The Airbyte server caches connector specifications for performance reasons, so if you've updated your connector you need to clear this cache:
   - Go back to Workspace settings > Destinations
   - Click the edit button for your S3 destination
   - Leave the version set to `dev`
   - Click `Change` to save the changes, which will refresh the dev connector's spec

This cache-clearing step is crucial and may be why your code changes aren't being triggered even though the pod is using your new image.

Additionally, verify that your image was built correctly by running:
```bash
docker image ls airbyte/destination-s3:dev
```
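Note that `docker image ls` only checks your host's Docker daemon; a kind cluster keeps its own image store, which is why `kind load docker-image` is needed in the first place. A quick way to confirm the image actually landed inside the cluster and was used by the sync, assuming abctl's default cluster name, kubeconfig path, and namespace (the pod name is illustrative):

```bash
# Check the image store inside the kind node
# ("airbyte-abctl-control-plane" follows kind's "<cluster>-control-plane" naming).
docker exec airbyte-abctl-control-plane crictl images | grep destination-s3

# Check which image the destination pod from the last sync actually ran.
kubectl --kubeconfig ~/.airbyte/abctl/abctl.kubeconfig \
  describe pod <destination-pod-name> -n airbyte-abctl | grep -i image
```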
After following these steps, a sync with that connector should use your local docker image with your custom code changes.

If you're still experiencing issues after these steps, the knowledge sources don't contain specific troubleshooting information for this scenario, so you may need to check your connector logs more carefully or reach out to the Airbyte community with specific error messages.
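One way to check the logs for whether your custom code ran, assuming abctl's default kubeconfig path and namespace (the pod name and log marker below are illustrative; the idea is to add a distinctive log line to your custom code first and then grep for it):

```bash
# Point kubectl at the cluster that abctl manages.
export KUBECONFIG=~/.airbyte/abctl/abctl.kubeconfig

# List recent pods so you can spot the destination pod from the last sync.
kubectl get pods -n airbyte-abctl --sort-by=.metadata.creationTimestamp

# Grep the sync pod's logs for a marker logged from your custom code.
kubectl logs -n airbyte-abctl <sync-job-pod> --all-containers \
  | grep -i "metadata-service call"
```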