Using an Existing SFS File System Through a Static PV
SFS is a network-attached storage (NAS) service that provides shared, scalable, high-performance file storage. It is suitable for large-capacity expansion and cost-sensitive services. This section describes how to use an existing SFS file system to statically create PVs and PVCs for data persistence and sharing in workloads.
Prerequisites
- You have created a cluster and installed the CCE Container Storage (Everest) add-on in the cluster.
- To create resources using commands, kubectl is required. For details, see Accessing a Cluster Using kubectl.
- You have created an SFS file system that is in the same VPC as the cluster.
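If you want to confirm the Everest add-on from the command line before proceeding, the add-on components and the CSI driver it registers can be checked with standard kubectl commands. A quick sketch; the exact pod names vary by Everest version:

```bash
# Everest controller/driver pods should be Running in kube-system.
kubectl get pod -n kube-system | grep everest
# The registered CSI drivers should include nas.csi.everest.io (used for SFS).
kubectl get csidriver
```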
Notes and Constraints
- Multiple PVs can use the same SFS or SFS Turbo file system with the following restrictions:
- Do not mount multiple PVCs or PVs that use the same underlying SFS or SFS Turbo volume to a single pod. Doing so causes pod startup to fail, because not all of the PVCs can be mounted when they share the same volumeHandle value.
- The persistentVolumeReclaimPolicy parameter in the PVs must be set to Retain. Otherwise, when a PV is deleted, the associated underlying volume may be deleted. In this case, other PVs associated with the underlying volume malfunction.
- When the underlying volume is shared by multiple PVs, implement isolation and protection for concurrent ReadWriteMany access at the application layer to prevent data from being overwritten or lost; one common approach is sketched below.
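A minimal sketch of one such application-layer safeguard, using the standard Kubernetes subPath field so that each workload writes only to its own directory of the shared file system (the volume and directory names are illustrative):

```yaml
# Excerpt from a container spec: each workload mounts its own subdirectory
# of the shared SFS volume, so concurrent writers cannot clobber each other.
volumeMounts:
- name: pvc-sfs-volume
  mountPath: /data
  subPath: app-a        # this workload sees only <share-root>/app-a
```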
Using an Existing SFS Capacity-Oriented File System Through kubectl
- Use kubectl to access the cluster.
- Create a PV.
- Create the pv-sfs.yaml file.
Example:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: everest-csi-provisioner
    everest.io/reclaim-policy: retain-volume-only  # (Optional) The underlying volume is retained when the PV is deleted.
  name: pv-sfs                      # PV name
spec:
  accessModes:
  - ReadWriteMany                   # Access mode. The value must be ReadWriteMany for SFS.
  capacity:
    storage: 1Gi                    # SFS volume capacity
  csi:
    driver: nas.csi.everest.io      # Storage driver that the mount depends on
    fsType: nfs
    volumeHandle: <your_volume_id>  # SFS Capacity-Oriented volume ID
    volumeAttributes:
      everest.io/share-export-location: <your_location>  # Shared path of the SFS volume
      storage.kubernetes.io/csiProvisionerIdentity: everest-csi-provisioner
  persistentVolumeReclaimPolicy: Retain  # Reclaim policy
  storageClassName: csi-nas         # StorageClass name. csi-nas indicates that SFS Capacity-Oriented is used.
  mountOptions: []                  # Mount options
```

Table 1 Key parameters

| Parameter | Mandatory | Description |
|---|---|---|
| everest.io/reclaim-policy | No | Only retain-volume-only is supported. This parameter is valid only when the Everest version is 1.2.9 or later and the reclaim policy is Delete. If the reclaim policy is Delete and this annotation is set to retain-volume-only, deleting a PVC deletes the associated PV but retains the underlying storage volume. |
| volumeHandle | Yes | Volume ID of the SFS Capacity-Oriented file system. To obtain it, log in to the console, choose Service List > Storage > Scalable File Service, and select SFS Capacity-Oriented. In the list, click the name of the target file system. On the details page, copy the value following ID. |
| everest.io/share-export-location | Yes | Shared path of the file system. On the management console, choose Service List > Storage > Scalable File Service. You can obtain the shared path from the Mount Address column. |
| persistentVolumeReclaimPolicy | Yes | Supported when the cluster version is 1.19.10 or later and the Everest version is 1.2.9 or later. The Delete and Retain reclaim policies are supported. For details, see PV Reclaim Policy. If multiple PVs use the same SFS volume, use Retain to prevent the underlying volume from being deleted with a PV. Retain: When a PVC is deleted, both the PV and the underlying storage resources are retained and must be deleted manually; the PV enters the Released state and cannot be bound to a PVC again. Delete: When a PVC is deleted, its PV is also deleted. |
| storage | Yes | Requested capacity, in Gi. The value must be the same as the size of the existing SFS Capacity-Oriented file system. |
| storageClassName | Yes | StorageClass name. csi-nas indicates that SFS 1.0 Capacity-Oriented is used for storage. |
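If a PV was created with the Delete policy and you later want to keep the underlying volume, you can switch the policy with a standard kubectl patch. A minimal sketch, using the example PV name pv-sfs from above:

```bash
# Switch the reclaim policy of an existing PV to Retain so that deleting
# the PV (or its PVC) does not delete the underlying SFS volume.
kubectl patch pv pv-sfs -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```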
- Run the following command to create the PV:

```bash
kubectl apply -f pv-sfs.yaml
```
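To confirm that the PV is ready for binding, you can list it. This check is optional; the exact column layout depends on your kubectl version:

```bash
kubectl get pv pv-sfs
# A statically provisioned PV shows STATUS "Available" until a PVC binds it,
# after which it shows "Bound".
```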
- Create a PVC.
- Create the pvc-sfs.yaml file.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-sfs
  namespace: default
  annotations:
    volume.beta.kubernetes.io/storage-provisioner: everest-csi-provisioner
spec:
  accessModes:
  - ReadWriteMany            # The value must be ReadWriteMany for SFS.
  resources:
    requests:
      storage: 1Gi           # SFS volume capacity
  storageClassName: csi-nas  # StorageClass name, which must be the same as that of the PV
  volumeName: pv-sfs         # PV name
```
Table 2 Key parameters

| Parameter | Mandatory | Description |
|---|---|---|
| storage | Yes | Requested capacity in the PVC, in Gi. The value must be the same as the storage size of the existing PV. |
| storageClassName | Yes | StorageClass name, which must be the same as the StorageClass of the PV created in step 1. csi-nas indicates that SFS 1.0 Capacity-Oriented is used for storage. |
| volumeName | Yes | PV name, which must be the same as the name of the PV created in step 1. |
- Run the following command to create the PVC:

```bash
kubectl apply -f pvc-sfs.yaml
```
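You can then check that the claim has bound to the PV; an optional quick check:

```bash
kubectl get pvc pvc-sfs -n default
# STATUS should be "Bound" and the VOLUME column should show pv-sfs.
```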
- Create an application.
- Create a file named web-demo.yaml. In this example, the SFS volume is mounted to the /data path.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-demo
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-demo
  template:
    metadata:
      labels:
        app: web-demo
    spec:
      containers:
      - name: container-1
        image: nginx:latest
        volumeMounts:
        - name: pvc-sfs-volume  # Volume name, which must be the same as the volume name in the volumes field
          mountPath: /data      # Location where the storage volume is mounted
      imagePullSecrets:
      - name: default-secret
      volumes:
      - name: pvc-sfs-volume    # Volume name, which can be customized
        persistentVolumeClaim:
          claimName: pvc-sfs    # Name of the created PVC
```
- Run the following command to create a workload to which the SFS volume is mounted:

```bash
kubectl apply -f web-demo.yaml
```
After the workload is created, the data in the container mount directory will be persistently stored. Verify the storage by referring to Verifying Data Persistence and Sharing.
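Before verifying persistence and sharing, you can confirm that both replicas are up. These standard checks are optional:

```bash
kubectl rollout status deployment/web-demo -n default  # waits until the rollout completes
kubectl get deployment web-demo -n default             # READY should show 2/2
```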
Verifying Data Persistence and Sharing
- View the deployed application and files.
- Run the following command to view the created pods:

```bash
kubectl get pod | grep web-demo
```
Expected output:
```
web-demo-846b489584-mjhm9   1/1     Running   0          46s
web-demo-846b489584-wvv5s   1/1     Running   0          46s
```

- Run the following commands in sequence to view the files in the /data path of the pods:

```bash
kubectl exec web-demo-846b489584-mjhm9 -- ls /data
kubectl exec web-demo-846b489584-wvv5s -- ls /data
```
If no result is returned for both pods, no file exists in the /data path.
- Run the following command to create a file named static in the /data path:

```bash
kubectl exec web-demo-846b489584-mjhm9 -- touch /data/static
```
- Run the following command to check the files in the /data path:

```bash
kubectl exec web-demo-846b489584-mjhm9 -- ls /data
```
Expected output:
```
static
```

- Verify data persistence.
- Run the following command to delete the pod named web-demo-846b489584-mjhm9:

```bash
kubectl delete pod web-demo-846b489584-mjhm9
```
Expected output:
pod "web-demo-846b489584-mjhm9" deletedAfter the deletion, the Deployment controller automatically creates a replica.
- Run the following command to view the created pods:

```bash
kubectl get pod | grep web-demo
```
The expected output is as follows, in which web-demo-846b489584-d4d4j is the newly created pod:
```
web-demo-846b489584-d4d4j   1/1     Running   0          110s
web-demo-846b489584-wvv5s   1/1     Running   0          7m50s
```

- Run the following command to check whether the files in the /data path of the new pod have been modified:

```bash
kubectl exec web-demo-846b489584-d4d4j -- ls /data
```
Expected output:
```
static
```

The static file is retained, indicating that the data in the file system is stored persistently.
- Verify data sharing.
- Run the following command to view the created pods:

```bash
kubectl get pod | grep web-demo
```
Expected output:
```
web-demo-846b489584-d4d4j   1/1     Running   0          7m
web-demo-846b489584-wvv5s   1/1     Running   0          13m
```

- Run the following command to create a file named share in the /data path of either pod. In this example, select the pod named web-demo-846b489584-d4d4j.

```bash
kubectl exec web-demo-846b489584-d4d4j -- touch /data/share
```
Check the files in the /data path of the pod.
```bash
kubectl exec web-demo-846b489584-d4d4j -- ls /data
```

Expected output:
```
share
static
```

- Check whether the share file also exists in the /data path of the other pod (web-demo-846b489584-wvv5s) to verify data sharing.

```bash
kubectl exec web-demo-846b489584-wvv5s -- ls /data
```
Expected output:
```
share
static
```

If a file created in the /data path of one pod also appears in the /data path of the other pod, the two pods share the same volume.
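As an additional, optional check, you can write actual content through one pod and read it back from the other. This sketch reuses the example pod names from above:

```bash
# Write through one pod, read through the other; matching content confirms sharing.
kubectl exec web-demo-846b489584-d4d4j -- sh -c 'echo hello-sfs > /data/share'
kubectl exec web-demo-846b489584-wvv5s -- cat /data/share   # expected output: hello-sfs
```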
Related Operations
You can also perform the operations listed in Table 3.
Table 3 Related operations

| Operation | Description |
|---|---|
| Viewing events | View event names, event types, number of occurrences, Kubernetes events, first occurrence time, and last occurrence time of the PVC or PV. |
| Viewing a YAML file | View, copy, or download the YAML file of a PVC or PV. |
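If you prefer the command line, these console operations have standard kubectl equivalents; a sketch using the example object names from this section:

```bash
kubectl describe pvc pvc-sfs -n default  # the Events section lists recent events for the PVC
kubectl get pv pv-sfs -o yaml            # print the full YAML of the PV
```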