The Container Storage Interface (CSI) NFS driver allows Kubernetes to dynamically provision and manage NFS volumes without manual PersistentVolume (PV) creation. This guide walks you through installing the CSI NFS driver and creating a StorageClass for your NFS server.
## Prerequisites

- Kubernetes cluster (v1.20+ recommended) with `kubectl` access.
- A working NFS server reachable from all cluster nodes.
- Administrative privileges on the cluster.
## 1️⃣ Install the CSI NFS Driver
The CSI NFS driver is maintained by the Kubernetes CSI project. To deploy it, run:
```bash
curl -skSL https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/v4.11.0/deploy/install-driver.sh | bash -s v4.11.0 --
```
This script:
- Creates the required RBAC roles and service accounts.
- Deploys the CSI controller and node plugins across your cluster.
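If you manage cluster add-ons with Helm, the driver can also be installed from the project's chart repository. This is a sketch based on the csi-driver-nfs project's published chart; verify the repo URL and chart version against the release you intend to use:

```shell
# Add the csi-driver-nfs chart repository and install into kube-system.
helm repo add csi-driver-nfs https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
helm repo update
helm install csi-driver-nfs csi-driver-nfs/csi-driver-nfs \
  --namespace kube-system \
  --version 4.11.0
```

Either method deploys the same controller and node components, so the verification steps below apply in both cases.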
Check that the pods are running:
```bash
kubectl get pods -n kube-system -l app=csi-nfs-controller
kubectl get daemonset csi-nfs-node -n kube-system
```

All controller pods should show `Running` status, and the `csi-nfs-node` DaemonSet should report its full desired pod count as ready.
## 2️⃣ Create a StorageClass
A `StorageClass` tells Kubernetes how to dynamically create PersistentVolumes using NFS.
Create a file named `nfssc.yaml` with the following content:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: 192.168.0.133   # Replace with your NFS server IP or hostname
  share: /data/nfs        # Replace with your NFS export path
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - nfsvers=4.1
```
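If you prefer to keep data after a claim is deleted, a `Retain` variant works too. The sketch below also uses the driver's `subDir` parameter so each volume lands in its own directory under the export; the parameter name and templating variables come from the csi-driver-nfs documentation, so treat them as assumptions and check your driver version:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi-retain
provisioner: nfs.csi.k8s.io
parameters:
  server: 192.168.0.133   # Replace with your NFS server IP or hostname
  share: /data/nfs        # Replace with your NFS export path
  subDir: ${pvc.metadata.namespace}/${pvc.metadata.name}  # one subdirectory per PVC (assumed templating)
reclaimPolicy: Retain     # PVs and their data survive PVC deletion
volumeBindingMode: Immediate
mountOptions:
  - nfsvers=4.1
```

With `Retain`, released PVs must be cleaned up manually, so this suits data you cannot afford to lose rather than scratch space.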
Apply it:
```bash
kubectl apply -f nfssc.yaml
```
Verify:
```bash
kubectl get storageclass
```

You should see `nfs-csi` in the list.
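Optionally, you can make `nfs-csi` the cluster default so that PVCs without an explicit `storageClassName` use it. This relies on the standard Kubernetes default-class annotation:

```shell
kubectl patch storageclass nfs-csi -p \
  '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```

Make sure no other StorageClass still carries this annotation, since multiple defaults lead to ambiguous provisioning.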
## 3️⃣ Use the NFS StorageClass
Create a PersistentVolumeClaim (PVC) that uses the new `StorageClass`. For example, save as `pvc-nfs.yaml`:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: nfs-csi
```
Apply it:
```bash
kubectl apply -f pvc-nfs.yaml
```
Check the PVC status:
```bash
kubectl get pvc nfs-pvc
```

Once it’s `Bound`, you can mount it in any pod:
```yaml
volumes:
  - name: nfs-volume
    persistentVolumeClaim:
      claimName: nfs-pvc
```
## 4️⃣ Test the Volume
Deploy a simple pod to confirm read/write access:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test
spec:
  containers:
    - name: app
      image: busybox
      command: ["/bin/sh"]
      args: ["-c", "sleep 3600"]
      volumeMounts:
        - mountPath: /mnt/nfs
          name: nfs-volume
  volumes:
    - name: nfs-volume
      persistentVolumeClaim:
        claimName: nfs-pvc
```
Apply and then exec into the pod:
```bash
kubectl exec -it nfs-test -- sh
echo "Hello from NFS" > /mnt/nfs/hello.txt
cat /mnt/nfs/hello.txt
```
If you see the text echoed back, the NFS CSI driver is working correctly.
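When you’re done testing, clean up the pod and the claim. Because this StorageClass uses `reclaimPolicy: Delete`, removing the PVC also deletes the dynamically provisioned PV and its data on the NFS share:

```shell
kubectl delete pod nfs-test
kubectl delete pvc nfs-pvc
```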
## ✅ Tips & Troubleshooting
- Firewall: Ensure the NFS port (2049/TCP) is open between all worker nodes and the NFS server.
- Permissions: The NFS export must allow read/write access from your Kubernetes node IPs.
- Driver logs: For debugging, check the logs of the `csi-nfs-node` and `csi-nfs-controller` pods.
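For example, you can pull the driver logs like this (the `nfs` container name matches the upstream deployment manifests, but it may differ if you customized the install):

```shell
# Controller-side provisioning errors (e.g. PVC stuck in Pending)
kubectl logs -n kube-system -l app=csi-nfs-controller -c nfs
# Node-side mount errors (e.g. pod stuck in ContainerCreating)
kubectl logs -n kube-system -l app=csi-nfs-node -c nfs
```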
## Conclusion
By installing the CSI NFS driver and configuring a `StorageClass`, you enable dynamic NFS provisioning in your Kubernetes cluster. This approach simplifies storage management and supports multiple pods reading and writing to the same volume (`ReadWriteMany`), making it ideal for shared data or content management systems.