The error “1 node(s) had volume node affinity conflict” indicates a mismatch between the PersistentVolume (PV) node affinity and the node(s) where the pod is scheduled. The nodeAffinity in your PV specifies that the volume can only be attached to a node with the label kubernetes.io/cvl-gtv-42.corp.globaltelemetrics.eu=master. If no such node exists or your pod is not scheduled to the correct node, the pod will remain in a pending state.
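To confirm the conflict, you can inspect the pending pod's scheduling events and the PV's node affinity (mariadb-pv0 is the PV name from your configuration; replace <pod-name> with the name of the pending pod):

kubectl describe pod <pod-name>
kubectl get pv mariadb-pv0 -o yaml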
Below are steps to verify each part of the configuration, along with fixes for anything that is missing.
Ensure that the node where the pod is intended to run has the required label:
kubectl get nodes --show-labels
If the label kubernetes.io/cvl-gtv-42.corp.globaltelemetrics.eu=master is missing from the appropriate node, add it manually:
kubectl label node <node-name> kubernetes.io/cvl-gtv-42.corp.globaltelemetrics.eu=master
Verify the label has been applied:
kubectl get nodes --show-labels | grep master
Ensure the nodeAffinity in the PersistentVolume is correctly configured. Based on your configuration:
nodeAffinity:
  required:
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/cvl-gtv-42.corp.globaltelemetrics.eu
            operator: In
            values:
              - master
This affinity restricts the volume to nodes with the label kubernetes.io/cvl-gtv-42.corp.globaltelemetrics.eu=master. Ensure this label exists on at least one node.
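To print only the affinity section of the PV, a jsonpath query works as a quick check (mariadb-pv0 is the PV name from your configuration):

kubectl get pv mariadb-pv0 -o jsonpath='{.spec.nodeAffinity}'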
Confirm that the PersistentVolumeClaim (PVC) is correctly bound to the PV. Check its status:
kubectl get pvc mariadb-claim0 -o yaml
The status.phase should be Bound, and the spec.volumeName should match the PersistentVolume name (mariadb-pv0).
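For a quicker check than the full YAML, the table view also shows the phase and the bound volume (this assumes the PVC lives in the default namespace, as in your claimRef):

kubectl get pvc mariadb-claim0 -n default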
If the PVC is not bound:
Ensure the StorageClass named local-storage exists and is correctly configured:
kubectl get storageclass
If the StorageClass does not exist or is misconfigured, create or update it.
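If you need to create it, here is a minimal sketch of a local StorageClass; the no-provisioner/WaitForFirstConsumer combination is typical for local volumes, but adjust it to your environment:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
# Local volumes have no dynamic provisioner
provisioner: kubernetes.io/no-provisioner
# Delay binding until a consuming pod is scheduled
volumeBindingMode: WaitForFirstConsumer

WaitForFirstConsumer delays binding until a pod that uses the claim is scheduled, which helps avoid exactly this kind of affinity mismatch.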
Verify that the pod's Deployment does not define conflicting affinity rules. Because the volume carries node affinity, the pod must be scheduled onto the matching node; one way to enforce this is to add a nodeSelector to the pod spec:
spec:
  nodeSelector:
    kubernetes.io/cvl-gtv-42.corp.globaltelemetrics.eu: master
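In a Deployment, the nodeSelector belongs under spec.template.spec (the pod template), not at the top level of the Deployment. A minimal sketch, where the deployment name, labels, image, and mount path are illustrative and only the claim name comes from your configuration:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mariadb
spec:
  selector:
    matchLabels:
      app: mariadb
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      # Pod-level nodeSelector: must sit inside the pod template
      nodeSelector:
        kubernetes.io/cvl-gtv-42.corp.globaltelemetrics.eu: master
      containers:
        - name: mariadb
          image: mariadb:10.6
          volumeMounts:
            - name: mariadb-data
              mountPath: /var/lib/mysql
      volumes:
        - name: mariadb-data
          persistentVolumeClaim:
            # Claim name from your configuration
            claimName: mariadb-claim0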
After ensuring the above configurations:
1. Delete the existing pod:
kubectl delete pod <pod-name>
2. Allow the deployment to recreate the pod (see the rollout sketch after this list), or manually create it.
3. Check the pod status:
kubectl get pods
kubectl describe pod <pod-name>
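If the pod is managed by a Deployment, a rollout restart is a convenient way to force rescheduling after the label and affinity changes (the deployment name mariadb is illustrative):

kubectl rollout restart deployment mariadb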
If the issue persists, recheck all configurations (PV, PVC, StorageClass, and Node labels).
Here’s how your corrected PersistentVolume might look:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mariadb-pv0
  labels:
    io.kompose.service: mariadb-pv0
spec:
  volumeMode: Filesystem
  storageClassName: local-storage
  local:
    path: "/home/gtcontainer/applications/data/db/mariadb"
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 2Gi
  claimRef:
    namespace: default
    name: mariadb-claim0
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/cvl-gtv-42.corp.globaltelemetrics.eu
              operator: In
              values:
                - master
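Apply the manifest and re-check the binding (the file name is illustrative). Note that a PersistentVolume's nodeAffinity generally cannot be changed after creation, so if the existing PV already has the wrong affinity you will need to delete and recreate it:

kubectl apply -f mariadb-pv.yaml
kubectl get pv mariadb-pv0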