# troubleshooting
Hello everyone. I have a question about Flink checkpointing on GCS. Recently we switched our Flink app to use Google Cloud Storage for checkpoints and the behaviour changed: the application now requires more memory to write checkpoints to GCS. If we increase memory the checkpoint is saved; otherwise an OOM is thrown (on AWS 3 GB is enough, on GCP we need 5 GB). I also read about the known issues with the GCS Hadoop client and provided the following configs:
```yaml
fs.gs.outputstream.upload.buffer.size: "2097152"
fs.gs.outputstream.upload.chunk.size: "2097152"
fs.gs.outputstream.upload.max.active.requests: "5"
gs.writer.chunk.size: "2097152"
fs.gs.outputstream.direct.upload.enable: "true"
```
But I don't really see the correlation between memory and these params (I do see them picked up in the JobManager config, though). Do you have any insights on the topic? Is there anything else I can alter or tune to affect memory consumption?
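One rough way to reason about the correlation: if (as an assumption, not confirmed by the connector docs) each open GCS output stream can hold up to `chunk.size × max.active.requests` bytes of in-flight upload data plus one `buffer.size` staging buffer, the heap cost scales with the number of checkpoint files being written concurrently. A minimal back-of-envelope sketch, with a hypothetical stream count:

```python
# Back-of-envelope estimate of heap held by in-flight GCS uploads.
# ASSUMPTION: each open output stream buffers up to
# chunk_size * max_active_requests bytes, plus one buffer_size staging buffer.

def upload_memory_bytes(open_streams, chunk_size, max_active_requests, buffer_size):
    """Estimated bytes of heap consumed by concurrent GCS uploads."""
    per_stream = chunk_size * max_active_requests + buffer_size
    return open_streams * per_stream

# Values from the config above: 2 MiB chunks, 5 active requests, 2 MiB buffer.
# open_streams=100 is a made-up illustration (e.g. many state files per checkpoint).
estimate = upload_memory_bytes(
    open_streams=100,
    chunk_size=2 * 1024 * 1024,
    max_active_requests=5,
    buffer_size=2 * 1024 * 1024,
)
print(estimate / (1024 * 1024), "MiB")  # 1200.0 MiB
```

Under this model, lowering `upload.max.active.requests` or `upload.chunk.size` (or reducing the number of files written at once) shrinks the per-stream footprint, which may explain why GCS needs more headroom than S3 at the same parallelism.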
It would be great to get some insight from the team. I have noticed the same OOM when I moved my application from S3 to GCS with the same memory settings.