# troubleshooting
Hi Team, we have two k8s environments, OpenShift and Azure AKS. JobManager HA is configured and working fine in the OpenShift environment (PVC access mode is ReadWriteMany), but in Azure AKS the JobManager pod is stuck in Pending state even though the PVC is bound successfully. Below are the events from the pending pod:
```
Events:
  Type     Reason       Age                  From               Message
  ----     ------       ----                 ----               -------
  Normal   Scheduled    2m22s                default-scheduler  Successfully assigned flink/kmgjobmanager-64d7777b9c-5b8bw to aks-nodepool1-12868363-vmss00001c
  Warning  FailedMount  38s (x7 over 2m12s)  kubelet            MountVolume.MountDevice failed for volume "pvc-daa9e622-da57-4fbe-bc3b-8470e1b3eef8" : rpc error: code = Internal desc = volume(mc_cns-ba-mni-dev-westeurope-rg_e2e-common-qa-01_westeurope#f5bc870483c1d49f186a69c#pvc-daa9e622-da57-4fbe-bc3b-8470e1b3eef8###flink) mount //f5bc870483c1d49f186a69c.file.core.windows.net/pvc-daa9e622-da57-4fbe-bc3b-8470e1b3eef8 on /var/lib/kubelet/plugins/kubernetes.io/csi/file.csi.azure.com/4ecfe2ad9d43dfe5ae52e1eb2bc8bbc22724aeaa08fff4469ecc4306c919fafa/globalmount failed with mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t cifs -o mfsymlinks,actimeo=30,nosharesock,file_mode=0777,dir_mode=0777,<masked> //f5bc870483c1d49f186a69c.file.core.windows.net/pvc-daa9e622-da57-4fbe-bc3b-8470e1b3eef8 /var/lib/kubelet/plugins/kubernetes.io/csi/file.csi.azure.com/4ecfe2ad9d43dfe5ae52e1eb2bc8bbc22724aeaa08fff4469ecc4306c919fafa/globalmount
Output: mount error(2): No such file or directory
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) and kernel log messages (dmesg)
  Warning  FailedMount  19s  kubelet  Unable to attach or mount volumes: unmounted volumes=[flink-data], unattached volumes=[flink-config-volume pod-template-volume external-libs kube-api-access-d7n2d logs flink-data]: timed out waiting for the condition
```
What could be the reason here?
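For reference, here is a minimal sketch of the kind of StorageClass/PVC pair such a setup would typically use on AKS, assuming the Azure Files CSI driver (file.csi.azure.com). The object names, namespace, and size are illustrative placeholders; only the provisioner, the ReadWriteMany access mode, and the mount options mirror what is visible in the kubelet event above.

```yaml
# Illustrative StorageClass for Azure Files via the file.csi.azure.com CSI driver.
# Name and parameters are assumptions, not copied from the failing cluster.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile-csi          # assumed name; AKS ships a built-in class like this
provisioner: file.csi.azure.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
mountOptions:                  # matches the options seen in the mount error above
  - mfsymlinks
  - actimeo=30
  - nosharesock
  - file_mode=0777
  - dir_mode=0777
---
# PVC used by the Flink JobManager HA deployment (name/namespace are illustrative).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: flink-data
  namespace: flink
spec:
  accessModes:
    - ReadWriteMany            # same access mode that works on the OpenShift side
  storageClassName: azurefile-csi
  resources:
    requests:
      storage: 10Gi
```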