Dynamic PV provisioning with NFS in k8s

Introduction

In Kubernetes, managing persistent storage for Pods traditionally involves a two-step process: creating a PersistentVolume (PV) and then a PersistentVolumeClaim (PVC) that references it. This can be cumbersome, especially for testing. Fortunately, Kubernetes supports dynamic PV provisioning using StorageClasses. When you create a PVC that references a provisioning StorageClass, a backing PV is created for you automatically, streamlining the process.

In this article, we will set up dynamic provisioning backed by NFS, step by step.

Steps

Install an NFS server on one of the cluster's nodes.

Pick one of the nodes and make it the NFS server; the data behind your PVCs will be stored on that node.

Install NFS

# On the NFS server
sudo apt update && sudo apt install -y nfs-kernel-server
# Create the folder that will be shared across your nodes
sudo mkdir -p /mnt/share_folder
# Assign the folder to nobody:nogroup so every node can use it
sudo chown nobody:nogroup /mnt/share_folder
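On Ubuntu the NFS service usually starts automatically after installation, but it is worth confirming it is enabled and running before configuring exports:

```shell
# Make sure the service starts now and on every boot
sudo systemctl enable --now nfs-kernel-server
# Confirm it is active
systemctl status nfs-kernel-server --no-pager
```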

Configure NFS

# Share the folder (note: this overwrites /etc/exports, and writing it requires root)
cat <<EOF | sudo tee /etc/exports
/mnt/share_folder     *(rw,sync,no_wdelay,no_root_squash,insecure)
EOF
# Apply the configuration
sudo exportfs -rav
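Before moving on, you can confirm the export is actually active; `showmount` ships with the nfs-kernel-server package:

```shell
# List the exports with their active options
sudo exportfs -v
# Query the server's export list over the NFS protocol
showmount -e localhost
```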

Install the NFS client tool

# Install nfs-common on all worker nodes (they need it to mount NFS volumes)
sudo apt install -y nfs-common
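Before involving Kubernetes, it is worth verifying that a worker node can actually mount the share. The `10.3.243.101` address below is the example NFS server IP used throughout this article; substitute your own:

```shell
# On a worker node: mount the share temporarily and check write access
sudo mkdir -p /tmp/nfs-test
sudo mount -t nfs 10.3.243.101:/mnt/share_folder /tmp/nfs-test
echo ok | sudo tee /tmp/nfs-test/write-check.txt
sudo umount /tmp/nfs-test
```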

Note: the NFS server is currently exposed to everyone. If you use this setup in production, make sure to put a firewall in front of it so that only your cluster nodes can reach your data.
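As a minimal sketch of such a firewall with `ufw`, you could allow NFS (port 2049) only from your nodes' subnet. The `10.3.243.0/24` range here is an assumption matching the example IP; adjust it to your network:

```shell
# Allow NFS only from the cluster subnet, deny it for everyone else
# (ufw evaluates rules in order, so the allow must come first)
sudo ufw allow from 10.3.243.0/24 to any port 2049 proto tcp
sudo ufw deny 2049/tcp
sudo ufw enable
```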

Install NFS provisioner

# Install the NFS provisioner (replace the 10.3.243.101 IP with yours)
cat > deployment.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 10.3.243.101
            - name: NFS_PATH
              value: /mnt/share_folder
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.3.243.101
            path: /mnt/share_folder
EOF
# Apply it to your cluster; note that the Pod will not start until the
# ServiceAccount created in the next step exists
kubectl apply -f deployment.yaml

# Create the ServiceAccount and RBAC permissions the provisioner needs
cat > rbac.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
EOF
# Apply the rbac
kubectl apply -f rbac.yaml
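With both the Deployment and the RBAC resources in place, verify that the provisioner Pod is up before creating the StorageClass:

```shell
# The Pod should reach Running state within a minute or so
kubectl get pods -l app=nfs-client-provisioner
# The logs should show the provisioner starting and watching for claims
kubectl logs deployment/nfs-client-provisioner
```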

# StorageClass
cat > class.yaml << EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # must match the Deployment's PROVISIONER_NAME env var
parameters:
  archiveOnDelete: "false"
EOF
kubectl apply -f class.yaml
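Confirm the StorageClass exists. Optionally, you can make it the cluster default so that PVCs without an explicit storageClassName use it:

```shell
kubectl get storageclass nfs-client
# Optional: mark nfs-client as the default StorageClass
kubectl patch storageclass nfs-client \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```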

Now we need to test the provisioner.

Create a test PVC

cat <<EOF | kubectl apply -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF

The PVC should automatically be bound to a PV created by the NFS provisioner. On the NFS server, you should also see a new directory under /mnt/share_folder named after the namespace, PVC, and PV.
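To verify end to end, check the claim's status and, as a quick sketch, run a throwaway Pod that writes a file to the volume (the Pod and file names here are just examples):

```shell
# STATUS should be Bound
kubectl get pvc test-claim

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: test
      image: busybox:1.36
      command: ["sh", "-c", "echo hello > /mnt/SUCCESS && sleep 3600"]
      volumeMounts:
        - name: nfs-pvc
          mountPath: /mnt
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
EOF
```

Once the Pod is running, a SUCCESS file should appear inside the claim's directory under /mnt/share_folder on the NFS server.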

Resources

nfs-subdir-external-provisioner
