# ask-ai
@kapa.ai In a Kubernetes deployment with the included MinIO storage, I am getting the error:
```
Exception in thread "Thread-0" software.amazon.awssdk.services.s3.model.S3Exception: Storage backend has reached its minimum free drive threshold. Please delete a few objects to proceed. (Service: S3, Status Code: 507, Request ID: 1821C75440FA3AFE, Extended Request ID: dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8)
        at software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handleErrorResponse(AwsXmlPredicatedResponseHandler.java:156)
        at software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handleResponse(AwsXmlPredicatedResponseHandler.java:108)
        at software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handle(AwsXmlPredicatedResponseHandler.java:85)
        at software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handle(AwsXmlPredicatedResponseHandler.java:43)
        at software.amazon.awssdk.awscore.client.handler.AwsSyncClientHandler$Crc32ValidationResponseHandler.handle(AwsSyncClientHandler.java:93)
        at software.amazon.awssdk.core.internal.handler.BaseClientHandler.lambda$successTransformationResponseHandler$7(BaseClientHandler.java:279)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.HandleResponseStage.execute(HandleResponseStage.java:50)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.HandleResponseStage.execute(HandleResponseStage.java:38)
        at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptTimeoutTrackingStage.execute(ApiCallAttemptTimeoutTrackingStage.java:74)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptTimeoutTrackingStage.execute(ApiCallAttemptTimeoutTrackingStage.java:43)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.TimeoutExceptionHandlingStage.execute(TimeoutExceptionHandlingStage.java:79)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.TimeoutExceptionHandlingStage.execute(TimeoutExceptionHandlingStage.java:41)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptMetricCollectionStage.execute(ApiCallAttemptMetricCollectionStage.java:55)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptMetricCollectionStage.execute(ApiCallAttemptMetricCollectionStage.java:39)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage2.executeRequest(RetryableStage2.java:93)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage2.execute(RetryableStage2.java:56)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage2.execute(RetryableStage2.java:36)
        at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
        at software.amazon.awssdk.core.internal.http.StreamManagingStage.execute(StreamManagingStage.java:53)
        at software.amazon.awssdk.core.internal.http.StreamManagingStage.execute(StreamManagingStage.java:35)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.executeWithTimer(ApiCallTimeoutTrackingStage.java:82)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:62)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:43)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:50)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:32)
        at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
        at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:37)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:26)
        at software.amazon.awssdk.core.internal.http.AmazonSyncHttpClient$RequestExecutionBuilderImpl.execute(AmazonSyncHttpClient.java:210)
        at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.invoke(BaseSyncClientHandler.java:103)
        at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.doExecute(BaseSyncClientHandler.java:173)
        at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.lambda$execute$1(BaseSyncClientHandler.java:80)
        at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.measureApiCallSuccess(BaseSyncClientHandler.java:182)
        at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.execute(BaseSyncClientHandler.java:74)
        at software.amazon.awssdk.core.client.handler.SdkSyncClientHandler.execute(SdkSyncClientHandler.java:45)
        at software.amazon.awssdk.awscore.client.handler.AwsSyncClientHandler.execute(AwsSyncClientHandler.java:53)
        at software.amazon.awssdk.services.s3.DefaultS3Client.putObject(DefaultS3Client.java:10191)
        at io.airbyte.commons.storage.AbstractS3StorageClient.write(StorageClient.kt:438)
        at io.airbyte.commons.logging.logback.AirbyteCloudStorageAppender.upload(AirbyteCloudStorageAppender.kt:121)
        at io.airbyte.commons.logging.logback.AirbyteCloudStorageAppender.stop(AirbyteCloudStorageAppender.kt:106)
        at ch.qos.logback.core.sift.SiftingAppenderBase.stop(SiftingAppenderBase.java:102)
        at ch.qos.logback.core.spi.AppenderAttachableImpl.detachAndStopAllAppenders(AppenderAttachableImpl.java:107)
        at ch.qos.logback.classic.Logger.detachAndStopAllAppenders(Logger.java:209)
        at ch.qos.logback.classic.Logger.recursiveReset(Logger.java:333)
        at ch.qos.logback.classic.LoggerContext.reset(LoggerContext.java:382)
        at ch.qos.logback.classic.LoggerContext.stop(LoggerContext.java:359)
        at ch.qos.logback.core.hook.ShutdownHookBase.stop(ShutdownHookBase.java:40)
        at ch.qos.logback.core.hook.DefaultShutdownHook.run(DefaultShutdownHook.java:62)
        at io.airbyte.commons.logging.logback.AirbyteLogbackCustomConfigurer.registerShutdownHook$lambda$10(AirbyteLogbackCustomConfigurer.kt:250)
        at java.base/java.lang.Thread.run(Thread.java:1583)
```
How can I clean up the MinIO storage? Is it OK to delete and recreate the persistent volume? (If so, how would I do that?)
**kapa.ai:**
Based on the information provided in the knowledge sources, here is how to clean up the MinIO storage in your Kubernetes deployment of Airbyte:

1. The error indicates that the MinIO storage has reached its minimum free drive threshold. This is a common issue with long-lived Airbyte instances, since MinIO is used to store logs.
2. To clean up the MinIO storage, you can safely delete the log files. Here's how:

a. First, access the MinIO pod using kubectl:
```
kubectl exec -it -n airbyte airbyte-minio-0 -- /bin/bash
```
b. Once inside the pod, navigate to the log storage directory and delete the files:
```
cd /storage/airbyte-dev-logs/job-logging/workspace/
rm -rf *
```
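If you prefer not to open an interactive shell, the same cleanup can be run in one command from outside the pod. A minimal sketch, assuming the pod name `airbyte-minio-0`, the namespace `airbyte`, and the log path above match your deployment (adjust all three if they don't):

```shell
# Assumed names -- adjust NAMESPACE/POD/LOG_DIR to your install.
NAMESPACE="airbyte"
POD="airbyte-minio-0"
LOG_DIR="/storage/airbyte-dev-logs/job-logging/workspace"

# Guarded so the snippet is a no-op on machines without kubectl.
if command -v kubectl >/dev/null 2>&1; then
  # ${LOG_DIR:?} aborts if LOG_DIR is unset/empty, so this can never expand to `rm -rf /*`.
  kubectl exec -n "$NAMESPACE" "$POD" -- sh -c "rm -rf ${LOG_DIR:?}/*"
fi
```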
3. It's generally not necessary to delete and recreate the persistent volume; deleting the log files should be sufficient to free up space.
4. If you do need to delete the persistent volume for any reason, be aware that this will delete all data stored by MinIO. You would need to delete the PersistentVolumeClaim (PVC) and the pod; Kubernetes will then recreate the pod and a new PVC:
```
kubectl delete pvc <minio-pvc-name> -n airbyte
kubectl delete pod airbyte-minio-0 -n airbyte
```
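If you are unsure of the claim's name, listing the PVCs in the namespace will show it. A minimal sketch (the namespace name is an assumption; the guard makes it a no-op where kubectl is absent):

```shell
# List PVCs in the Airbyte namespace; the MinIO claim appears in the NAME column.
NAMESPACE="airbyte"
if command -v kubectl >/dev/null 2>&1; then
  kubectl get pvc -n "$NAMESPACE"
fi
```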
Replace `<minio-pvc-name>` with the actual name of your MinIO PVC.

5. After cleaning up, restart the Airbyte pods to ensure all changes take effect.

Regularly cleaning up old log files is good practice and prevents this issue from recurring. Consider setting up a periodic job to delete old logs, or increasing MinIO's storage capacity if the volume fills up frequently.

Sources:
- [Forum Post: S3 Minio Storage Full](https://discuss.airbyte.io/t/1857)
- [Forum Post: How to clean up airbyte-minio and what's stored in minio?](https://discuss.airbyte.io/t/3101)
- [Github Issue: \[platform\] airbyte-minio-0 pod restarts because of storage space is full](https://github.com/airbytehq/airbyte/issues/44556)
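The periodic cleanup suggested above can be sketched as a small script suitable for any scheduler (for example a Kubernetes CronJob). The pod/namespace names, log path, and the 30-day retention are assumptions, not Airbyte defaults:

```shell
# Hypothetical retention policy: delete job logs older than RETENTION_DAYS.
NAMESPACE="airbyte"
POD="airbyte-minio-0"
LOG_DIR="/storage/airbyte-dev-logs/job-logging/workspace"
RETENTION_DAYS=30

if command -v kubectl >/dev/null 2>&1; then
  # `find -type f -mtime +N -delete` removes files not modified in the last N days.
  # ${LOG_DIR:?} aborts if the variable is unset, guarding against deleting from /.
  kubectl exec -n "$NAMESPACE" "$POD" -- \
    sh -c "find ${LOG_DIR:?} -type f -mtime +${RETENTION_DAYS} -delete"
fi
```

Running this on a schedule keeps the volume from filling up again without manual intervention.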