
What Should I Do If a Storage Volume Cannot Be Mounted or the Mounting Times Out?

Fault Locating

Abnormal EVS Storage Volume Mounting

Symptom: Mounting an EVS volume to a StatefulSet times out.

Possible Cause: The node and the EVS volume are in different AZs, so the mounting process times out and the volume cannot be attached to the workload.

Solution: Create a volume in the same AZ as the node and mount the volume to the node.
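If the volume is dynamically provisioned, one common way to keep it in the node's AZ is topology-aware binding. The StorageClass below is only a sketch; the provisioner name is a placeholder for the CSI driver used in your cluster.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: evs-same-az
provisioner: example.csi.driver          # placeholder: use the provisioner of your CSI driver
volumeBindingMode: WaitForFirstConsumer  # delay provisioning until the pod is scheduled, so the volume is created in the node's AZ
reclaimPolicy: Delete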

Symptom: A pod fails to be created, and an event similar to the following is reported, indicating that the volume failed to be attached to the pod:

Multi-Attach error for volume "pvc-62a7a7d9-9dc8-42a2-8366-0f5ef9db5b60" Volume is already used by pod(s) testttt-7b774658cb-lc98h

Possible Cause: The Deployment that uses the EVS volume runs more than one pod. An EVS volume can be attached to only one node at a time, so a Deployment that uses an EVS volume can have only one pod. If you specify more than one pod, the Deployment is still created, but pods scheduled to other nodes fail to start because the EVS volume cannot be attached to those nodes.

Solution: Set the number of pods of the Deployment that uses the EVS volume to 1, or use another type of volume.
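For reference, a minimal sketch of such a Deployment; all names, the image, and the PVC are placeholders.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: evs-example          # placeholder name
spec:
  replicas: 1                # an EVS volume can be attached to only one node, so keep a single pod
  selector:
    matchLabels:
      app: evs-example
  template:
    metadata:
      labels:
        app: evs-example
    spec:
      containers:
      - name: app
        image: nginx:latest  # placeholder image
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: evs-pvc # placeholder PVC backed by the EVS volume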

Symptom: A pod fails to be created, and information similar to the following is displayed:

MountVolume.MountDevice failed for volume "pvc-08178474-c58c-4820-a828-14437d46ba6f" : rpc error: code = Internal desc = [09060def-afd0-11ec-9664-fa163eef47d0] /dev/sda has file system, but it is detected to be damaged

Possible Cause: The file system on the disk is corrupted.

Solution: Back up the disk in EVS and then repair the file system:

fsck -y {drive letter}
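As a rough illustration of the repair step (the device name and mount point are placeholders; back up the disk and make sure it is not mounted before running fsck):

umount /mnt/data          # unmount the file system first (placeholder mount point)
fsck -y /dev/vdb          # repair the damaged file system, answering yes to all prompts
mount /dev/vdb /mnt/data  # remount after the repair completes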

Abnormal SFS Turbo Storage Volume Mounting

Symptom:

  • In a common container scenario, the pod is in the Processing state, and the pod events include the following information:
    MountVolume.SetUp failed for volume {pv name}...
  • In a secure container scenario, the pod is in the Abnormal state, and the pod events include the following information:
    mount {SFS-Turbo-shared-address} to xxx failed

Possible Cause:

  1. The shared address in the PV is incorrect.
  2. The network connection between the node where the pod runs and the SFS Turbo file system to be mounted is disconnected.

Solution:

  1. Check whether the shared address in the PV is correct.

    Obtain the YAML file of the PV and check the value of the everest.io/share-export-location field in spec.csi.volumeAttributes. (The correct shared address is the share path of the specified SFS Turbo file system.)

    kubectl get pv {pv name} -ojsonpath='{.spec.csi.volumeAttributes.everest\.io\/share-export-location}{"\n"}'

    If a sub-path is specified, it must be a valid existing subdirectory in the correct format, for example, 192.168.135.24:/a/b/c.
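    For reference, the field appears in the PV roughly as follows; the address is only a placeholder:

    spec:
      csi:
        volumeAttributes:
          everest.io/share-export-location: 192.168.135.24:/a/b/c    # share path of the SFS Turbo file system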

  2. Verify the network connectivity between the node where the pod runs and the SFS Turbo file system to be mounted.

    Manually mount the SFS Turbo file system on the node to check whether it is reachable:

    mount -t nfs -o vers=3,nolock,noresvport {SFS-Turbo-shared-address} /tmp
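    If the mount succeeds, the network path is reachable. A sketch of the full check, using a placeholder address and a temporary mount point that is removed afterwards:

    mkdir -p /tmp/sfs-check
    mount -t nfs -o vers=3,nolock,noresvport 192.168.135.24:/ /tmp/sfs-check
    df -h /tmp/sfs-check      # confirm that the share is visible
    umount /tmp/sfs-check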

Storage Volume Mounting Timed Out

If the volume to be mounted contains a large amount of data and the workload has permission-related configurations, the permissions of each file have to be modified during mounting, which can cause the mount to time out.

Fault Locating

  • Check whether the securityContext field contains runAsUser and fsGroup. securityContext is a Kubernetes field that defines the permission and access control settings of pods or containers. (See the example after this list.)
  • Check whether the startup commands contain commands used to obtain or modify file permissions, such as ls, chmod, and chown.
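For reference, these fields typically appear in the pod spec as follows; all names and values in this sketch are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: example               # placeholder name
spec:
  securityContext:
    runAsUser: 1000           # UID that the containers run as
    fsGroup: 1000             # group applied to mounted volumes; triggers recursive permission changes on the volume
  containers:
  - name: app
    image: nginx:latest       # placeholder image
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: example-pvc  # placeholder PVC for the volume that stores the data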

Solution

Based on your service requirements, determine whether these permission-related settings are necessary, and remove or adjust them if they are not.
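If the recursive permission change triggered by fsGroup is the cause, one mitigation available in upstream Kubernetes (not specific to this document; check that your cluster version and volume type support it) is fsGroupChangePolicy, which skips the recursive change when the volume root already has the expected ownership:

spec:
  securityContext:
    fsGroup: 1000
    fsGroupChangePolicy: "OnRootMismatch"   # change permissions only when the volume root does not already match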