I ran an experiment:
1. First, create a PV with a capacity of 1Gi
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv01
  namespace: test
  labels:
    pv: nfs-pv01
spec:
  persistentVolumeReclaimPolicy: Recycle
  capacity:
    storage: 1Gi   # PV size
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.2.53
    path: /nfs_data/test
root@k8s-master1:~/AllYamls/pv_pvc# kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
nfs-pv01   1Gi        RWX            Recycle          Available                                   5s
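For reference, the PV was created by applying the manifest above; the file name below is an assumption about how it was saved:

kubectl apply -f nfs-pv01.yaml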
2. Create two PVCs, each requesting 100Mi
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc02   # the only field that differs between the two PVCs
  namespace: test
spec:
  resources:
    requests:
      storage: 100Mi
    limits:
      storage: 100Mi
  accessModes:
    - ReadWriteMany
  selector:
    matchLabels:
      pv: nfs-pv01
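Since the two claims differ only in metadata.name, the second one can be generated from the first. This is just a sketch of one way to do it; the file name nfs-pvc01.yaml is an assumption:

kubectl apply -f nfs-pvc01.yaml
sed 's/nfs-pvc01/nfs-pvc02/' nfs-pvc01.yaml | kubectl apply -f -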
The result: only one of the two PVCs got bound to the PV.
root@k8s-master1:~/AllYamls/pv_pvc# kubectl get pvc -n test
NAME        STATUS    VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-pvc01   Bound     nfs-pv01   1Gi        RWX                           4m24s
nfs-pvc02   Pending                                                       4m
root@k8s-master1:~/AllYamls/pv_pvc# kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM            STORAGECLASS   REASON   AGE
nfs-pv01   1Gi        RWX            Recycle          Bound    test/nfs-pvc01                           5m15s
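To dig into why nfs-pvc02 stays Pending, the Events section of kubectl describe should show the reason recorded by the PV controller (output not captured here):

kubectl describe pvc nfs-pvc02 -n test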
Looking at the PVC output above: even though the claim only requested 100Mi, the CAPACITY column for nfs-pvc01 shows 1Gi, so the 100Mi request does not seem to have taken effect.
Does that mean nfs-pvc01 effectively takes the full 1Gi, and that is why the second PVC cannot bind?
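A quick way to compare what a claim asked for with what it was actually bound to is to print spec.resources.requests.storage next to status.capacity.storage; the jsonpath below is just one way to do that:

kubectl get pvc nfs-pvc01 -n test -o jsonpath='{.spec.resources.requests.storage} -> {.status.capacity.storage}{"\n"}'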