Dynamic Provisioning of FlashArray File Services
Use PX-CSI to dynamically provision file-based volumes using FlashArray file services. This page walks you through creating a StorageClass, provisioning a PersistentVolumeClaim (PVC), and mounting it to a pod.
Create a StorageClass
To enable dynamic provisioning on FlashArray file services, define a StorageClass with the appropriate backend and NFS configuration.
For FlashArray file services, set the backend type to `pure_fa_file`. You can also configure parameters such as the quota policy, mount options, and topology settings.
- Ensure that you have configured FlashArray to use file services. For more information, see Configure FlashArray file services.
- If you configure an NFS policy with `root_squash` and your pod specifies an `fsGroup`, you might see permission errors (for example, `permission denied` or `lchown failed`) during volume mount, because the root user is mapped to `nfsnobody`. To avoid this:
  - Ensure that the NFS policy uses `no_root_squash` access.
  - Ensure that the User Mapping Enabled field is set to Disabled when creating the NFS policy.

  For more information, see Configure FlashArray File Services.
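  The following sketch shows the kind of pod setting that triggers this issue; the pod name and claim name are hypothetical:

  ```yaml
  # Illustrative only: this pod sets spec.securityContext.fsGroup, so the
  # kubelet attempts to change ownership of the mounted volume. With a
  # root_squash NFS policy, that ownership change is performed as a squashed
  # user and can fail with "permission denied" or "lchown failed".
  apiVersion: v1
  kind: Pod
  metadata:
    name: fsgroup-example              # hypothetical name
  spec:
    securityContext:
      fsGroup: 2000                    # triggers the ownership change on mount
    containers:
      - name: app
        image: nginx
        volumeMounts:
          - name: data
            mountPath: /data
    volumes:
      - name: data
        persistentVolumeClaim:
          claimName: some-fa-file-pvc  # hypothetical FA file PVC
  ```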
- Define a `StorageClass` with the appropriate storage type and performance settings. For FlashArray file services, the backend type is `pure_fa_file`.

  Required parameters:
  - `backend: "pure_fa_file"`: Specifies that the volume is a FlashArray file volume.
  - `pure_nfs_policy`: PX-CSI expects the NFS policy to be pre-created on the FlashArray setup. If the policy does not exist, the request fails.
  - `pure_fa_file_system`: Specifies the file system where the volume is placed. If the file system does not exist in the FlashArray setup, the volume create request fails.

  Optional parameters:
  - `pure_quota_policy`: If provided, associates the volume with a quota policy to enforce a size limit.
  - `pure_nfs_endpoint`: Used when there are multiple endpoints per array. Overrides the default `NFSEndPoint` specified in `pure.json`.
  - `allowedTopologies`: Uses topology labels to select arrays with matching labels for volume placement.
  - `volumeBindingMode`: If you have enabled CSI topology, ensure you specify `volumeBindingMode: WaitForFirstConsumer` along with `allowedTopologies`. This setting delays volume binding until the Kubernetes scheduler selects a suitable node that matches the `allowedTopologies` labels.
  - `mountOptions`: Overrides the default mount options. Supports only TCP, not UDP. You can also specify security options using the `sec` option within `mountOptions`. By default, NFS uses `sec=sys` (AUTH_SYS), but support is also available for Kerberos-based authentication options, including `sec=krb5` (authentication only), `sec=krb5i` (authentication and integrity), and `sec=krb5p` (authentication, integrity, and encryption).
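  For example, if your environment uses Kerberos, the `mountOptions` section of the StorageClass might look like the following sketch; it assumes Kerberos is already configured for NFS on both the FlashArray and the worker nodes:

  ```yaml
  # Sketch: Kerberos-secured NFS mount options (authentication, integrity,
  # and encryption via sec=krb5p). Adjust to sec=krb5 or sec=krb5i as needed.
  mountOptions:
    - nfsvers=4.1
    - proto=tcp
    - sec=krb5p
  ```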
  Example `StorageClass` YAML:

  IPv4:

  ```yaml
  kind: StorageClass
  apiVersion: storage.k8s.io/v1
  metadata:
    name: fa-file-sc
  provisioner: pxd.portworx.com
  parameters:
    backend: "pure_fa_file"
    pure_nfs_policy: "test-policy"
    pure_fa_file_system: "name01"
    pure_quota_policy: "100g_policy"
    pure_nfs_endpoint: <nfs-endpoints-of-fa>
  mountOptions:
    - nfsvers=3
    - proto=tcp
  # (Optional) The lines below are required only if you are using CSI topology
  volumeBindingMode: WaitForFirstConsumer
  allowedTopologies:
    - matchLabelExpressions:
        - key: topology.portworx.io/zone
          values:
            - <zone-1>
        - key: topology.portworx.io/region
          values:
            - <region-1>
  ```

  IPv6:

  ```yaml
  kind: StorageClass
  apiVersion: storage.k8s.io/v1
  metadata:
    name: fa-file-sc
  provisioner: pxd.portworx.com
  parameters:
    backend: "pure_fa_file"
    pure_nfs_policy: "test-policy"
    pure_fa_file_system: "name01"
    pure_quota_policy: "100g_policy"
    pure_nfs_endpoint: <nfs-endpoints-of-fa>
  mountOptions:
    - nfsvers=4.1
    - proto=tcp6
  # (Optional) The lines below are required only if you are using CSI topology
  volumeBindingMode: WaitForFirstConsumer
  allowedTopologies:
    - matchLabelExpressions:
        - key: topology.portworx.io/zone
          values:
            - <zone-1>
        - key: topology.portworx.io/region
          values:
            - <region-1>
  ```

  Save this YAML in a file named `sc.yaml`.
- Apply this YAML to your cluster:

  ```shell
  kubectl apply -f sc.yaml
  ```

  ```output
  storageclass.storage.k8s.io/fa-file-sc created
  ```
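  You can optionally confirm that the StorageClass exists before provisioning volumes against it:

  ```shell
  kubectl get sc fa-file-sc
  ```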
Create a PVC
Define a PersistentVolumeClaim (PVC) that references the `fa-file-sc` StorageClass.
- To create a PVC, define the specifications and reference the StorageClass you previously created by specifying its name in the `spec.storageClassName` field.

  Example PVC specification:

  ```yaml
  kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: pure-claim-fa
    labels:
      app: nginx
  spec:
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 20Gi
    storageClassName: fa-file-sc
  ```

  Save this YAML in a file named `pvc.yaml`.
- Apply this YAML to your cluster:

  ```shell
  kubectl apply -f pvc.yaml
  ```

  ```output
  persistentvolumeclaim/pure-claim-fa created
  ```
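  You can optionally check the PVC status. If the StorageClass sets `volumeBindingMode: WaitForFirstConsumer`, the PVC remains `Pending` until a pod that uses it is scheduled:

  ```shell
  kubectl get pvc pure-claim-fa
  ```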
Mount a PVC to a pod
Attach the PVC to a pod by referencing it in the volumes section and mounting it inside the container:
- Create a Pod and specify the PVC name in the `persistentVolumeClaim.claimName` field. Here is an example pod specification:

  ```yaml
  kind: Pod
  apiVersion: v1
  metadata:
    name: nginx-pod
    labels:
      app: nginx
  spec:
    volumes:
      - name: pure-vol
        persistentVolumeClaim:
          claimName: pure-claim-fa
    containers:
      - name: nginx
        image: nginx
        volumeMounts:
          - name: pure-vol
            mountPath: /data
        ports:
          - containerPort: 80
  ```
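  Save this spec to a file and apply it; the filename `pod.yaml` here is just an example:

  ```shell
  kubectl apply -f pod.yaml
  ```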
- (Optional) To control pod scheduling based on node labels, add the `nodeAffinity` field to the Pod specification. For example:

  ```yaml
  spec:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: topology.portworx.io/zone
                  operator: In
                  values:
                    - zone-0
                - key: topology.portworx.io/region
                  operator: In
                  values:
                    - region-0
  ```
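  After the pod is scheduled, you can verify that it landed on a node in the expected zone and region; this assumes the example pod name used above:

  ```shell
  kubectl get pod nginx-pod -o wide
  ```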
Verify pod status
Check pod readiness and confirm the volume is mounted:
```shell
watch kubectl get pods
```
When the pod status shows `Running`, it is actively using the provisioned FlashArray file volume.
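As an additional sanity check, you can write and read a file on the mounted volume from inside the pod; this assumes the example pod and mount path used above:

```shell
kubectl exec nginx-pod -- sh -c 'echo hello > /data/test.txt && cat /data/test.txt'
```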