Configuring SFS Volume Mount Options

This section describes how to configure SFS mount options. You can configure mount options in a PV and then bind the PV to a PVC. Alternatively, you can configure mount options in a StorageClass and use that StorageClass to create PVCs. PVs are then created dynamically and, by default, inherit the mount options configured in the StorageClass.

Prerequisites

The CCE Container Storage (Everest) add-on version must be 1.2.8 or later. The add-on parses the configured mount options and passes them to the underlying storage resources. An option takes effect only if the underlying storage resources support it.
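
To check the add-on version from within the cluster, you can inspect the image tag of the Everest controller (a minimal sketch; the everest-csi-controller Deployment name and the kube-system namespace are assumptions based on a typical Everest installation, so verify them in your cluster):

    # Assumption: Everest runs as the everest-csi-controller Deployment in kube-system.
    kubectl get deployment everest-csi-controller -n kube-system \
      -o jsonpath='{.spec.template.spec.containers[*].image}'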

Notes and Constraints

  • Mount options cannot be configured for Kata containers.

  • Due to the restrictions of the NFS protocol, if an SFS volume is mounted to a node multiple times, link-related mount options (such as timeo) take effect by default only for the first mount. For example, if the same SFS file system is mounted to multiple pods running on one node, the options set by later mounts do not overwrite the existing values. To use different mount options for different mounts in this scenario, additionally configure the nosharecache option, as shown in the example below.
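
    For example, to let a mount apply its own link options even when the same file system is already mounted on the node, nosharecache can be added alongside the other options (a minimal sketch of the mountOptions field only; the surrounding PV manifest follows the example in Configuring Mount Options in a PV):

    mountOptions:
    - vers=3
    - nolock
    - timeo=600      # Takes effect for this mount because caches are not shared
    - hard
    - nosharecache   # Do not share caches with other mounts of this file system on the node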

SFS Volume Mount Options

The Everest add-on in CCE presets the options described in Table 1 for mounting SFS volumes.

Table 1 SFS volume mount options

Parameter: keep-original-ownership
Value: Blank
Description: Whether to retain the original ownership of the file system mount point. Using this option requires Everest v1.2.63 or later (or v2.1.2 or later).

  • By default, this option is not added, and the mount point ownership is root:root when SFS is mounted.

  • If this option is added, the original ownership of the file system is retained when SFS is mounted.

Parameter: vers
Value: 3
Description: File system version. Currently, only NFSv3 is supported, so the value must be 3.

Parameter: nolock
Value: Blank
Description: Whether to lock files on the server using the NLM protocol. If nolock is set, a lock is valid only for applications on the same host; it does not apply to applications on other hosts.

Parameter: timeo
Value: 600
Description: Time the NFS client waits before retransmitting a request, in units of 0.1 seconds. Recommended value: 600 (60 seconds).

Parameter: hard/soft
Value: Blank
Description: Mount mode.

  • hard: If an NFS request times out, the client keeps retrying until the request succeeds.

  • soft: If an NFS request times out, the client returns an error to the calling program.

The default value is hard.

Parameter: sharecache/nosharecache
Value: Blank
Description: Whether the data cache and attribute cache are shared when the same file system is mounted more than once on the same client. With sharecache (the default), all mounts of the file system share one cache. With nosharecache, the caches are not shared, and each mount gets its own cache.

Note: The nosharecache setting affects performance. Mount information must be obtained for each mount, which increases the communication overhead with the NFS server and the memory consumption of the NFS client. nosharecache may also lead to inconsistent caches between mounts. Determine whether to use nosharecache based on site requirements.
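
For example, to preserve the file system's original owner and group instead of root:root, keep-original-ownership can be added to the mount options of a PV or StorageClass (a minimal sketch; requires the Everest version noted in Table 1):

    mountOptions:
    - keep-original-ownership   # Retain the original ownership of the mount point
    - vers=3
    - nolock
    - timeo=600
    - hard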

You can set other mount options if needed. For details, see Mounting an NFS File System to ECSs (Linux).
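
To try out an option combination before writing it into a PV or StorageClass, you can mount the file system manually on a node (a hedged sketch; it assumes the node has an NFS client installed and network access to the file system, and /mnt/sfs-test is a hypothetical test directory):

    mkdir -p /mnt/sfs-test   # Hypothetical test mount point
    mount -t nfs -o vers=3,nolock,timeo=600,hard <your_share_export_location> /mnt/sfs-test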

Configuring Mount Options in a PV

You can use the mountOptions field to configure mount options in a PV. The options you can configure in mountOptions are listed in SFS Volume Mount Options.

  1. Use kubectl to access the cluster. For details, see Connecting to a Cluster Using kubectl.

  2. Configure mount options in a PV. Example:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      annotations:
        pv.kubernetes.io/provisioned-by: everest-csi-provisioner
        everest.io/reclaim-policy: retain-volume-only      # (Optional) The underlying volume is retained when the PV is deleted.
      name: pv-sfs
    spec:
      accessModes:
      - ReadWriteMany      # Access mode. The value must be ReadWriteMany for SFS.
      capacity:
        storage: 1Gi     # SFS volume capacity
      csi:
        driver: nas.csi.everest.io    # Storage driver used for the mount
        fsType: nfs
        volumeHandle: <your_volume_id>   # ID of the SFS Capacity-Oriented volume
        volumeAttributes:
          everest.io/share-export-location: <your_location>  # Shared path of the SFS volume
          storage.kubernetes.io/csiProvisionerIdentity: everest-csi-provisioner
      persistentVolumeReclaimPolicy: Retain    # Reclaim policy
      storageClassName: csi-nas                # StorageClass name.
      mountOptions:                            # Mount options
      - vers=3
      - nolock
      - timeo=600
      - hard
    
  3. After the PV is created, you can create a PVC, bind it to the PV, and mount the PV to the containers in a workload. For details, see Using an Existing SFS File System Through a Static PV. A minimal PVC example is shown below.
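
    The following PVC binds to the PV above (a sketch; the PVC name is illustrative, and volumeName pins the claim to the pv-sfs PV from the previous step):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-sfs              # Illustrative PVC name
      namespace: default
    spec:
      accessModes:
      - ReadWriteMany            # Must match the access mode of the PV
      resources:
        requests:
          storage: 1Gi           # Must not exceed the PV capacity
      storageClassName: csi-nas  # Must match the storageClassName of the PV
      volumeName: pv-sfs         # Name of the PV created in the previous step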

  4. Check whether the mount options take effect.

    In this example, the PVC is mounted to a workload that uses the nginx:latest image. You can run the mount -l command to check whether the mount options take effect.

    1. View the pod to which the SFS volume has been mounted. In this example, the workload name is web-sfs.

      kubectl get pod | grep web-sfs
      

      Command output:

      web-sfs-***   1/1     Running   0             23m
      
    2. Run the following command to check the mount options (web-sfs-*** is an example pod):

      kubectl exec -it web-sfs-*** -- mount -l | grep nfs
      

      If the mount information in the command output matches the configured mount options, the mount options have taken effect.

      <Your shared path> on /data type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,nolock,noresvport,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=**.**.**.**,mountvers=3,mountport=2050,mountproto=tcp,local_lock=all,addr=**.**.**.**)
      

Configuring Mount Options in a StorageClass

You can use the mountOptions field to configure mount options in a StorageClass. The options you can configure in mountOptions are listed in SFS Volume Mount Options.

  1. Use kubectl to access the cluster. For details, see Connecting to a Cluster Using kubectl.

  2. Create a custom StorageClass. Example:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: csi-sfs-mount-option
    provisioner: everest-csi-provisioner
    parameters:
      csi.storage.k8s.io/csi-driver-name: nas.csi.everest.io
      csi.storage.k8s.io/fstype: nfs
      everest.io/share-access-to: <your_vpc_id>   # VPC ID of the cluster
    reclaimPolicy: Delete
    volumeBindingMode: Immediate
    mountOptions:                            # Mount options
    - vers=3
    - nolock
    - timeo=600
    - hard
    
  3. After the StorageClass is configured, you can use it to create a PVC. By default, dynamically created PVs inherit the mount options configured in the StorageClass. For details, see Using an SFS File System Through a Dynamic PV. A minimal PVC example is shown below.
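
    The following PVC uses the custom StorageClass (a sketch; the PVC name is illustrative):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-sfs-dynamic                      # Illustrative PVC name
      namespace: default
    spec:
      accessModes:
      - ReadWriteMany                            # SFS supports ReadWriteMany
      resources:
        requests:
          storage: 1Gi
      storageClassName: csi-sfs-mount-option     # The custom StorageClass created in the previous step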

  4. Check whether the mount options take effect.

    In this example, the PVC is mounted to a workload that uses the nginx:latest image. You can run the mount -l command to check whether the mount options take effect.

    1. View the pod to which the SFS volume has been mounted. In this example, the workload name is web-sfs.

      kubectl get pod | grep web-sfs
      

      Command output:

      web-sfs-***   1/1     Running   0             23m
      
    2. Run the following command to check the mount options (web-sfs-*** is an example pod):

      kubectl exec -it web-sfs-*** -- mount -l | grep nfs
      

      If the mount information in the command output matches the configured mount options, the mount options have taken effect.

      <Your shared path> on /data type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,nolock,noresvport,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=**.**.**.**,mountvers=3,mountport=2050,mountproto=tcp,local_lock=all,addr=**.**.**.**)