Containers have ephemeral storage by default. The lifetime of this ephemeral storage does not extend beyond the life of the individual pod, and this ephemeral storage cannot be shared across pods. When a container is deleted, all the files and data inside it are also deleted. To preserve the files, containers use persistent storage volumes.
Because OpenShift Container Platform uses the Kubernetes persistent volume (PV) framework, cluster administrators can provision persistent storage for a cluster. Developers can use persistent volume claims (PVCs) to request PV resources without specific knowledge of the underlying storage infrastructure.
Two ways exist to provision storage for the cluster: static and dynamic. Static provisioning requires the cluster administrator to create persistent volumes manually. Dynamic provisioning uses storage classes to create the persistent volumes on demand.
Administrators can use storage classes to provide persistent storage. Storage classes describe types of storage for the cluster. Cluster administrators create storage classes to manage storage services or storage tiers of a service. Rather than specifying provisioned storage, PVCs instead refer to a storage class.
Developers use PVCs to add persistent volumes to their applications. Developers need not know details of the storage infrastructure. With static provisioning, developers use previously created PVs, or ask a cluster administrator to manually create persistent volumes for their applications. With dynamic provisioning, developers declare the storage requirements of the application, and the cluster creates a PV to fill the request.
Not all storage is equal. Storage types vary in cost, performance, and reliability. Multiple storage types are usually available for each Kubernetes cluster.
The following list of commonly used storage volume types and their use cases is not exhaustive.
- configMap
  The configMap volume externalizes the application configuration data. This use of the configMap resource ensures that the application configuration is portable across environments and can be version-controlled.
- emptyDir
  An emptyDir volume provides a per-pod directory for scratch data. The directory is usually empty after provisioning. emptyDir volumes are often required for ephemeral storage (see the example pod after this list).
- hostPath
  A hostPath volume mounts a file or directory from the host node into your pod. To use a hostPath volume, the cluster administrator must configure pods to run as privileged. This configuration grants access to other pods in the same node.
  Red Hat does not recommend the use of hostPath volumes in production. Instead, Red Hat supports hostPath mounting for development and testing on a single-node cluster. Although most pods do not need a hostPath volume, it does offer a quick option for testing if an application requires it.
- iSCSI
  Internet Small Computer System Interface (iSCSI) is an IP-based standard that provides block-level access to storage devices. With iSCSI volumes, Kubernetes workloads can consume persistent storage from iSCSI targets.
- local
  You can use local persistent volumes to access local storage devices, such as a disk or partition, by using the standard PVC interface. Local volumes are subject to the availability of the underlying node, and are not suitable for all applications.
- NFS
  An NFS (Network File System) volume can be accessed from multiple pods at the same time, and thus provides shared data between pods. The NFS volume type is commonly used because of its ability to share data safely. Red Hat recommends using NFS only for non-production systems.
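The following pod is a minimal sketch of how the emptyDir and configMap volume types are declared. The pod name, the container image, and the example-config ConfigMap name are illustrative assumptions rather than objects from this course.

apiVersion: v1
kind: Pod
metadata:
  name: volume-demo                  # illustrative pod name
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi-minimal
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: scratch                  # per-pod scratch space, deleted with the pod
      mountPath: /tmp/work
    - name: app-config               # configuration data mounted read-only
      mountPath: /etc/app
      readOnly: true
  volumes:
  - name: scratch
    emptyDir: {}
  - name: app-config
    configMap:
      name: example-config           # illustrative ConfigMap name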
Persistent volume providers vary in capabilities. A volume uses access modes to specify how it can be mounted. For example, NFS can support multiple read/write clients, but a specific NFS PV might be exported on the server as read-only. OpenShift defines the access modes that the following table summarizes:
Table 5.1. Volume Access Modes
| Access mode | Abbreviation | Description |
|---|---|---|
| ReadWriteOnce | RWO | A single node mounts the volume as read/write. |
| ReadOnlyMany | ROX | Many nodes mount the volume as read-only. |
| ReadWriteMany | RWX | Many nodes mount the volume as read/write. |
Developers must select a volume type that supports the access level that the application requires. The following table shows example access mode support for common volume types:
Table 5.2. Access Mode Support
| Volume type | RWO | ROX | RWX |
|---|---|---|---|
| configMap | Yes | No | No |
| emptyDir | Yes | No | No |
| hostPath | Yes | No | No |
| iSCSI | Yes | Yes | No |
| local | Yes | No | No |
| NFS | Yes | Yes | Yes |
Kubernetes supports two volume modes for persistent volumes: Filesystem and Block.
If the volume mode is not defined for a volume, then Kubernetes assigns the default volume mode, Filesystem, to the volume.
OpenShift Container Platform can provision raw block volumes.
These volumes do not have a file system, and can provide performance benefits for applications that either write to the disk directly or that implement their own storage service.
Raw block volumes are provisioned by specifying volumeMode: Block in the PV and PVC specification.
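For the claim side, a PVC that requests raw block storage might look like the following sketch; the claim name and the requested size are illustrative.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-pvc                    # illustrative claim name
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Block                  # request a raw block device instead of a file system
  resources:
    requests:
      storage: 10Gi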
The following table provides examples of storage options with block volume support:
Table 5.3. Block Volume Support
| Volume plug-in | Manually provisioned | Dynamically provisioned |
|---|---|---|
| AWS EBS | Yes | Yes |
| Azure disk | Yes | Yes |
| Cinder | Yes | Yes |
| Fibre Channel | Yes | No |
| GCP | Yes | Yes |
| iSCSI | Yes | No |
| local | Yes | No |
| Red Hat OpenShift Data Foundation | Yes | Yes |
| VMware vSphere | Yes | Yes |
Use a PersistentVolume manifest file to manually create a persistent volume.
The following example creates a persistent volume from a Fibre Channel storage device that uses block mode.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: block-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  volumeMode: Block
  persistentVolumeReclaimPolicy: Retain
  fc:
    targetWWNs: ["50060e801049cfd1"]
    lun: 0
    readOnly: false
- name: Provide a name for the PV, which subsequent claims use to access the PV.
- storage: Specify the amount of storage that is allocated to this volume.
- accessModes: The storage device must support the access mode that the PV specifies.
- volumeMode: The Block volume mode exposes the volume as a raw block device without a file system.
- persistentVolumeReclaimPolicy: The Retain policy keeps the volume and its data after the claim that uses it is deleted.
- fc: The remaining attributes are specific to the storage type. In this example, the fc attributes identify the Fibre Channel target WWNs and LUN.
If the previous manifest is in a file named my-fc-volume.yaml, then the following command can create the PV resource on RHOCP:
[user@host ~]$ oc create -f my-fc-volume.yaml
To create a persistent volume from the web console, click the → menu.
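Whichever method you use, the oc get pv command confirms that the new PV is registered and still unbound. The following output is abbreviated and illustrative.

[user@host ~]$ oc get pv block-pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      ...
block-pv   10Gi       RWO            Retain           Available   ...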
A persistent volume claim (PVC) resource represents a request from an application for storage. A PVC specifies the minimal storage characteristics, such as capacity and access mode. A PVC does not specify a storage technology, such as iSCSI or NFS.
The lifecycle of a PVC is not tied to a pod, but to a namespace. Multiple pods from the same namespace but with potentially different workload controllers can connect to the same PVC. You can also sequentially connect storage to and detach storage from different application pods, to initialize, convert, migrate, or back up data.
Kubernetes matches each PVC to a persistent volume (PV) resource that can satisfy the requirements of the claim. It is not an exact match. A PVC might be bound to a PV with a larger disk size than is requested. A PVC that specifies single access might be bound to a PV that is shareable for multiple concurrent accesses. Rather than enforcing policy, PVCs declare what an application needs, which Kubernetes provides on a best-effort basis.
A PVC belongs to a specific project. To create a PVC, you must specify the access mode and size, among other options. A PVC cannot be shared between projects. Developers use a PVC to access a persistent volume (PV). Persistent volumes are not exclusive to projects, and are accessible across the entire OpenShift cluster. When a PV binds to a PVC, the PV cannot be bound to another PVC.
To add a volume to an application deployment, create a PersistentVolumeClaim resource, and add it to the application as a volume.
Create the PVC by using either a Kubernetes manifest or the oc set volumes command.
In addition to either creating a PVC or using an existing PVC, the oc set volumes command can modify a deployment to mount the PVC as a volume within the pod.
To add a volume to an application deployment, use the oc set volumes command:
[user@host ~]$ oc set volumes deployment/example-application \
  --add \
--name example-pv-storage \
--type persistentVolumeClaim \
--claim-mode rwo \
--claim-size 15Gi \
--mount-path /var/lib/example-app \
--claim-name example-pv-claim
- deployment/example-application: Specify the name of the deployment that requires the PVC resource.
- --add: The --add option adds a volume and volume mount to the deployment.
- --name: The --name option sets the name of the volume that is added to the pod specification.
- --type: The --type option selects the volume source; the persistentVolumeClaim type creates a PVC and mounts it as a volume.
- --claim-mode: The --claim-mode option sets the access mode of the claim; rwo requests the ReadWriteOnce access mode.
- --claim-size: Create a claim with the given size in bytes, if specified along with the persistent volume type. The size must use SI notation, for example 15, 15G, or 15Gi.
- --mount-path: The --mount-path option sets the path inside the container where the volume is mounted.
- --claim-name: The --claim-name option sets the name of the PVC resource.
The command creates a PVC resource and adds it to the application as a volume within the pod.
The command updates the deployment for the application with volumeMounts and volumes specifications.
apiVersion: apps/v1
kind: Deployment
metadata:
...output omitted...
  namespace: storage-volumes
...output omitted...
spec:
...output omitted...
  template:
...output omitted...
    spec:
...output omitted...
        volumeMounts:
        - mountPath: /var/lib/example-app
          name: example-pv-storage
...output omitted...
      volumes:
      - name: example-pv-storage
        persistentVolumeClaim:
          claimName: example-pv-claim
...output omitted...
- namespace: The deployment must be in the same namespace as the PVC.
- mountPath: The mount path in the container.
- name: The volume name, which is used to specify the volume that is associated with the mount.
- claimName: The claim name that is bound to the volume.
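To review the volumes that a deployment already defines, run the oc set volumes command against the deployment without the --add option. The command lists each volume and its mount path, which is a quick way to confirm that the PVC is attached as expected.

[user@host ~]$ oc set volumes deployment/example-application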
The following example specifies a PVC by using a YAML manifest to create a PersistentVolumeClaim API object:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pv-claim
  labels:
    app: example-application
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 15Gi
- name: Use this name in the claimName field of the deployment volume specification.
- accessModes: Specify the access mode that this PVC requests. The storage class provisioner must provide this access mode. If persistent volumes are created statically, then an eligible persistent volume must provide this access mode.
- storage: The storage class creates a persistent volume that matches this size request. If persistent volumes are created statically, then an eligible persistent volume must be at least the requested size.
Use the oc create command to create the PVC from the manifest file.
[user@host ~]$ oc create -f pvc_file_name.yaml
Use oc get pvc to view the available PVCs in the current namespace.
[user@host ~]$ oc get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS ...
db-pod-pvc   Bound    pvc-13...ca45   1Gi   RWO   nfs-storage   ...
To create a persistent volume claim from the web console, click the → menu.
Click and complete the form by adding the name, the storage class, the size, the access mode, and the volume mode.
PVs are defined by a PersistentVolume API object, which represents a piece of existing storage in the cluster.
The cluster administrator must statically provision some storage types.
Alternatively, the Kubernetes persistent volume framework can use a StorageClass object to dynamically provision PVs.
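The following manifest is a sketch of such a StorageClass object for an NFS-backed provisioner. The provisioner name matches the nfs-storage class that appears later in this section; the reclaimPolicy and volumeBindingMode values are illustrative defaults.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # marks this class as the cluster default
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
reclaimPolicy: Delete              # delete the backing storage when the PV is released
volumeBindingMode: Immediate       # provision and bind as soon as the PVC is created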
When you create a PVC, you specify a storage amount, the required access mode, and a storage class to describe and classify the storage. A control loop in the RHOCP control plane watches for new PVCs, and binds each new PVC to an appropriate PV. If an appropriate PV does not exist, then a provisioner for the storage class creates one.
Claims remain unbound indefinitely if a matching volume does not exist or if a volume cannot be created with any available provisioner that services a storage class. Claims are bound when matching volumes become available. For example, a cluster with many manually provisioned 50 Gi volumes would not match a PVC that requests 100 Gi. The PVC can be bound when a 100 Gi PV is added to the cluster.
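When a claim stays in the Pending state, the oc describe pvc command shows the binding events and usually indicates which requirement cannot be satisfied. The claim name in the following command is illustrative.

[user@host ~]$ oc describe pvc example-pv-claim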
Use oc get storageclass to view the storage classes that the cluster provides.
[user@host ~]$ oc get storageclass
NAME PROVISIONER ...
nfs-storage (default) k8s-sigs.io/nfs-subdir-external-provisioner ...
lvms-vg1              topolvm.io                                     ...
In the example, the nfs-storage storage class is marked as the default storage class.
When a default storage class is configured, a PVC must explicitly name any other storage class that it wants to use, or it can set the storageClassName attribute to "" to be bound to a PV without a storage class.
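The following manifests sketch both cases. The first claim names the lvms-vg1 storage class from the earlier listing; the second sets storageClassName to an empty string so that it binds only to statically provisioned PVs without a storage class. The claim names and sizes are illustrative.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-claim                # illustrative name
spec:
  storageClassName: lvms-vg1         # request a specific storage class
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: static-claim                 # illustrative name
spec:
  storageClassName: ""               # bind only to PVs that have no storage class
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi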
The following oc set volumes command example uses the --claim-class option to specify the storage class for a dynamically provisioned PV.
[user@host ~]$ oc set volumes deployment/example-application \
--add --name example-pv-storage --type pvc --claim-class nfs-storage \
--claim-mode rwo --claim-size 15Gi --mount-path /var/lib/example-app \
--claim-name example-pv-claim
Note
Because a cluster administrator can change the default storage class, Red Hat recommends that you always specify the storage class when you create a PVC.
When you create a PVC, you request a specific amount of storage, access mode, and storage class. Kubernetes binds the PVC to an appropriate PV. If an appropriate PV does not exist, then a provisioner for the storage class creates one.
PVs follow a lifecycle based on their relationship to the PVC.
- Available
After a PV is created, it becomes available for any PVC to use in the cluster in any namespace.
- Bound
A PV that is bound to a PVC is also bound to the same namespace as the PVC, and no other PVC can use it.
- In Use
You can delete a PVC if no pods actively use it. The Storage Object in Use Protection feature prevents the removal of bound PVs and PVCs that pods are actively using. Such removal can result in data loss. Storage Object in Use Protection is enabled by default.
If a user deletes a PVC that a pod actively uses, then the PVC is not removed immediately. PVC removal is postponed until no pods actively use the PVC. Also, if a cluster administrator deletes a PV that is bound to a PVC, then the PV is not removed immediately. PV removal is postponed until the PV is no longer bound to a PVC.
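Storage Object in Use Protection is implemented with finalizers. The following command, which assumes the example-pv-claim PVC from the earlier examples, prints the finalizers on the claim; the output includes the kubernetes.io/pvc-protection finalizer while protection is in effect.

[user@host ~]$ oc get pvc example-pv-claim -o jsonpath='{.metadata.finalizers}'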
- Released
After the developer deletes the PVC that is bound to a PV, the PV is released, and the storage that the PV used can be reclaimed.
- Reclaimed
The reclaim policy of a persistent volume tells the cluster what to do with the volume after it is released. A volume's reclaim policy can be Retain or Delete.
Table 5.4. Volume Reclaim Policy
| Policy | Description |
|---|---|
| Retain | Enables manual reclamation of the resource for those volume plug-ins that support it. |
| Delete | Deletes both the PersistentVolume object from OpenShift Container Platform and the associated storage asset in external infrastructure. |
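Because the Delete policy also removes the storage asset in the external infrastructure, a cluster administrator sometimes switches an existing PV to the Retain policy before its claim is deleted. A minimal sketch, assuming a hypothetical PV named example-pv:

[user@host ~]$ oc patch pv example-pv \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'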
To delete a volume, use the oc delete command to delete the PVC.
The storage class reclaims the volume after removing the PVC.
[user@host ~]$ oc delete pvc/example-pvc-storage
References
For more information, refer to the Understanding Ephemeral Storage chapter in the Red Hat OpenShift Container Platform 4.14 Storage documentation at https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html-single/storage/index#storage-ephemeral-storage-overview_understanding-ephemeral-storage
For more information, refer to the Understanding Persistent Storage chapter in the Red Hat OpenShift Container Platform 4.14 Storage documentation at https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html-single/storage/index#pvcprotection_understanding-persistent-storage
https://loft.sh/blog/kubernetes-persistent-volumes-examples-and-best-practices/