Outcomes
In this exercise, you deploy a web server whose replicas share a persistent volume, and a database server from a stateful set with a dedicated persistent volume for each instance.
Deploy a web server with persistent storage.
Add data to the persistent storage.
Scale the web server deployment and observe the data that is shared with the replicas.
Create a database server with a stateful set by using a YAML manifest file.
Verify that each instance from the stateful set has a persistent volume claim.
As the student user on the workstation machine, use the lab command to prepare your system for this exercise.
This command ensures that all resources are available for this exercise.
[student@workstation ~]$ lab start storage-statefulsets
Instructions
Create a web server deployment named web-server. Use the registry.ocp4.example.com:8443/redhattraining/hello-world-nginx:latest container image.

Log in to the OpenShift cluster as the developer user with the developer password.

[student@workstation ~]$ oc login -u developer -p developer \
  https://api.ocp4.example.com:6443
...output omitted...

Change to the storage-statefulsets project.

[student@workstation ~]$ oc project storage-statefulsets
Now using project "storage-statefulsets" on server ...output omitted...

Create the web-server deployment.

[student@workstation ~]$ oc create deployment web-server \
  --image registry.ocp4.example.com:8443/redhattraining/hello-world-nginx:latest
deployment.apps/web-server created

Verify the deployment status.

[student@workstation ~]$ oc get pods -l app=web-server
NAME                          READY   STATUS    RESTARTS   AGE
web-server-7d7cb4cdc7-t7hx8   1/1     Running   0          4s
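If the new pod is still starting, you can optionally wait for the rollout to complete before you continue. This extra check is not part of the exercise steps, and the output shown is illustrative:

[student@workstation ~]$ oc rollout status deployment/web-server
deployment "web-server" successfully rolled out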
Add the web-pv persistent volume to the web-server deployment. Use the default storage class and the following information to create the persistent volume:

Field        Value
Name         web-pv
Type         persistentVolumeClaim
Claim mode   rwo
Claim size   5Gi
Mount path   /usr/share/nginx/html
Claim name   web-pv-claim

Add the web-pv persistent volume to the web-server deployment.

[student@workstation ~]$ oc set volumes deployment/web-server \
  --add --name web-pv --type persistentVolumeClaim --claim-mode rwo \
  --claim-size 5Gi --mount-path /usr/share/nginx/html --claim-name web-pv-claim
deployment.apps/web-server volume updated

Because a storage class was not specified with the --claim-class option, the command uses the default storage class to create the persistent volume.

Verify the deployment status. Notice that a new pod is created.

[student@workstation ~]$ oc get pods -l app=web-server
NAME                          READY   STATUS    RESTARTS   AGE
web-server-64689877c6-mdr6f   1/1     Running   0          5s

Verify the persistent volume status.

[student@workstation ~]$ oc get pvc
NAME           STATUS   VOLUME          CAPACITY   ACCESS MODES   STORAGECLASS   AGE
web-pv-claim   Bound    pvc-42...63ab   5Gi        RWO            nfs-storage    29s

The default storage class, nfs-storage, provisioned the persistent volume.
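To see which storage class the cluster treats as the default, you can optionally list the storage classes. The (default) marker identifies the class that the previous command used; the remaining columns are elided here because their values depend on the cluster configuration:

[student@workstation ~]$ oc get storageclass
NAME                    PROVISIONER   ...
lvms-vg1                ...           ...
nfs-storage (default)   ...           ...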
Add data to the PV by using the exec command.

List pods to retrieve the web-server pod name.

[student@workstation ~]$ oc get pods
NAME                          READY   STATUS    RESTARTS   AGE
web-server-64689877c6-mdr6f   1/1     Running   0          17m

The pod name might differ in your output.

Use the exec command to add the pod name that you retrieved from the previous step to the /usr/share/nginx/html/index.html file on the pod. Then, retrieve the contents of the /usr/share/nginx/html/index.html file to confirm that the pod name is in the file.

[student@workstation ~]$ oc exec -it pod/web-server-64689877c6-mdr6f \
  -- /bin/bash -c \
  'echo "Hello, World from ${HOSTNAME}" > /usr/share/nginx/html/index.html'

[student@workstation ~]$ oc exec -it pod/web-server-64689877c6-mdr6f \
  -- cat /usr/share/nginx/html/index.html
Hello, World from web-server-64689877c6-mdr6f
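You can also confirm the mount point without opening a shell in the pod. Running oc set volumes against the deployment with no other options lists its volumes; the exact layout of the output below is illustrative:

[student@workstation ~]$ oc set volumes deployment/web-server
deployment/web-server
  pvc/web-pv-claim (allocated 5GiB) as web-pv
    mounted at /usr/share/nginx/html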
Scale the web-server deployment to two replicas and confirm that an additional pod is created.

Scale the web-server deployment to two replicas.

[student@workstation ~]$ oc scale deployment web-server --replicas 2
deployment.apps/web-server scaled

Verify the replica status and retrieve the pod names.

[student@workstation ~]$ oc get pods
NAME                          READY   STATUS    RESTARTS   AGE
web-server-64689877c6-mbj6g   1/1     Running   0          2s
web-server-64689877c6-mdr6f   1/1     Running   0          17m

The pod names might differ in your output.
Retrieve the content of the /usr/share/nginx/html/index.html file on the web-server pods by using the oc exec command to verify that the file is the same in both pods.

Verify that the /usr/share/nginx/html/index.html file is the same in both pods.

[student@workstation ~]$ oc exec -it pod/web-server-64689877c6-mbj6g \
  -- cat /usr/share/nginx/html/index.html
Hello, World from web-server-64689877c6-mdr6f

[student@workstation ~]$ oc exec -it pod/web-server-64689877c6-mdr6f \
  -- cat /usr/share/nginx/html/index.html
Hello, World from web-server-64689877c6-mdr6f

Notice that both files show the name of the first instance, because they share the persistent volume.
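This sharing works because the rwo claim mode maps to the ReadWriteOnce access mode, which restricts the volume to a single node rather than to a single pod. Both replicas can mount the same claim as long as they are scheduled to the same node, which is normally the case in this classroom environment. You can optionally check the placement with the wide output; the node name shown here is only a placeholder:

[student@workstation ~]$ oc get pods -l app=web-server -o wide
NAME                          READY   STATUS    ...   NODE       ...
web-server-64689877c6-mbj6g   1/1     Running   ...   master01   ...
web-server-64689877c6-mdr6f   1/1     Running   ...   master01   ...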
Create a database server with a stateful set by using the statefulset-db.yml file in the /home/student/DO180/labs/storage-statefulsets directory. Update the file with the following information:

Field                                                  Value
metadata.name                                          dbserver
spec.selector.matchLabels.app                          database
spec.template.metadata.labels.app                      database
spec.template.spec.containers.name                     dbserver
spec.template.spec.containers.volumeMounts.name        data
spec.template.spec.containers.volumeMounts.mountPath   /var/lib/mysql
spec.volumeClaimTemplates.metadata.name                data
spec.volumeClaimTemplates.spec.storageClassName        lvms-vg1

Open the /home/student/DO180/labs/storage-statefulsets/statefulset-db.yml file in an editor. Replace the <CHANGE_ME> objects with values from the previous table:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dbserver
spec:
  selector:
    matchLabels:
      app: database
  replicas: 2
  template:
    metadata:
      labels:
        app: database
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: dbserver
        image: registry.ocp4.example.com:8443/redhattraining/mysql-app:v1
        ports:
        - name: database
          containerPort: 3306
        env:
        - name: MYSQL_USER
          value: "redhat"
        - name: MYSQL_PASSWORD
          value: "redhat123"
        - name: MYSQL_DATABASE
          value: "sakila"
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "lvms-vg1"
      resources:
        requests:
          storage: 1Gi
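Optionally, before creating the resource, you can render the edited manifest locally to verify that the file parses and to review the substituted values. The --dry-run=client option prints the parsed object without sending a create request to the cluster; this check is not part of the exercise steps:

[student@workstation ~]$ oc create -f \
  /home/student/DO180/labs/storage-statefulsets/statefulset-db.yml \
  --dry-run=client -o yaml | head
apiVersion: apps/v1
kind: StatefulSet
...output omitted...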
Create the database server by using the oc create -f /home/student/DO180/labs/storage-statefulsets/statefulset-db.yml command.

[student@workstation ~]$ oc create -f \
  /home/student/DO180/labs/storage-statefulsets/statefulset-db.yml
statefulset.apps/dbserver created

Wait a few moments and then verify the status of the stateful set and its instances.

[student@workstation ~]$ oc get statefulset
NAME       READY   AGE
dbserver   2/2     10s

[student@workstation ~]$ oc get pods -l app=database
NAME         READY   STATUS    ...
dbserver-0   1/1     Running   ...
dbserver-1   1/1     Running   ...
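If the stateful set is not ready yet, you can optionally watch the pods start. With the default ordered pod management policy, the stateful set creates dbserver-0 first and starts dbserver-1 only after dbserver-0 is running and ready. Press Ctrl+C to stop watching; the output below is only a sketch of what you might see:

[student@workstation ~]$ oc get pods -l app=database --watch
NAME         READY   STATUS              RESTARTS   AGE
dbserver-0   0/1     ContainerCreating   0          2s
dbserver-0   1/1     Running             0          8s
dbserver-1   0/1     ContainerCreating   0          0s
dbserver-1   1/1     Running             0          7s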
Use the exec command to add data to each of the stateful set pods.

[student@workstation ~]$ oc exec -it pod/dbserver-0 -- /bin/bash -c \
  "mysql -uredhat -predhat123 sakila -e 'create table items (count INT);'"
mysql: [Warning] Using a password on the command line interface can be insecure.

[student@workstation ~]$ oc exec -it pod/dbserver-1 -- /bin/bash -c \
  "mysql -uredhat -predhat123 sakila -e 'create table inventory (count INT);'"
mysql: [Warning] Using a password on the command line interface can be insecure.
Confirm that each instance from the dbserver stateful set has a persistent volume claim. Then, verify that each persistent volume claim contains unique data.

Confirm that the persistent volume claims have a Bound status.

[student@workstation ~]$ oc get pvc -l app=database
NAME              STATUS   ...   CAPACITY   ACCESS MODE   ...
data-dbserver-0   Bound    ...   1Gi        RWO           ...
data-dbserver-1   Bound    ...   1Gi        RWO           ...
Verify that each instance from the dbserver stateful set has its own persistent volume claim by using the oc get pod pod-name -o json | jq .spec.volumes[0].persistentVolumeClaim.claimName command.

[student@workstation ~]$ oc get pod dbserver-0 -o json | \
  jq .spec.volumes[0].persistentVolumeClaim.claimName
"data-dbserver-0"

[student@workstation ~]$ oc get pod dbserver-1 -o json | \
  jq .spec.volumes[0].persistentVolumeClaim.claimName
"data-dbserver-1"
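If jq is not available, the same field can be read with the built-in JSONPath output format, which is an equivalent alternative to the previous commands:

[student@workstation ~]$ oc get pod dbserver-0 \
  -o jsonpath='{.spec.volumes[0].persistentVolumeClaim.claimName}{"\n"}'
data-dbserver-0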
Application-level clustering is not enabled for the dbserver stateful set. Verify that each instance of the dbserver stateful set has unique data.

[student@workstation ~]$ oc exec -it pod/dbserver-0 -- /bin/bash -c \
  "mysql -uredhat -predhat123 sakila -e 'show tables;'"
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------------+
| Tables_in_sakila |
+------------------+
| items            |
+------------------+

[student@workstation ~]$ oc exec -it pod/dbserver-1 -- /bin/bash -c \
  "mysql -uredhat -predhat123 sakila -e 'show tables;'"
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------------+
| Tables_in_sakila |
+------------------+
| inventory        |
+------------------+
Delete a pod in the dbserver stateful set. Confirm that a new pod is created and that the pod uses the PVC from the previous pod. Verify that the previously added table exists in the sakila database.

Delete the dbserver-0 pod in the dbserver stateful set. Confirm that a new pod is generated for the stateful set. Then, confirm that the data-dbserver-0 PVC still exists.

[student@workstation ~]$ oc delete pod dbserver-0
pod "dbserver-0" deleted

[student@workstation ~]$ oc get pods -l app=database
NAME         READY   STATUS    RESTARTS   AGE
dbserver-0   1/1     Running   0          4s
dbserver-1   1/1     Running   0          5m

[student@workstation ~]$ oc get pvc -l app=database
NAME              STATUS   ...   CAPACITY   ACCESS MODE   ...
data-dbserver-0   Bound    ...   1Gi        RWO           ...
data-dbserver-1   Bound    ...   1Gi        RWO           ...

Use the exec command to verify that the new dbserver-0 pod has the items table in the sakila database.

[student@workstation ~]$ oc exec -it pod/dbserver-0 -- /bin/bash -c \
  "mysql -uredhat -predhat123 sakila -e 'show tables;'"
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------------+
| Tables_in_sakila |
+------------------+
| items            |
+------------------+
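As an optional cross-check, you can confirm that the replacement dbserver-0 pod reattached the original claim rather than a new one by repeating the claim name query from the earlier step:

[student@workstation ~]$ oc get pod dbserver-0 -o json | \
  jq .spec.volumes[0].persistentVolumeClaim.claimName
"data-dbserver-0"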