Outcomes
Create a pod with a single container, and identify the pod and its container within the container engine of an OpenShift node.
View the logs of a running container.
Retrieve information inside a container, such as the operating system (OS) release and running processes.
Identify the process ID (PID) and namespaces for a container.
Identify the User ID (UID) and supplemental group ID (GID) ranges of a project.
Compare the namespaces of containers in one pod versus in another pod.
Inspect a pod with multiple containers, and identify the purpose of each container.
As the student user on the workstation machine, use the lab command to prepare your system for this exercise.
This command ensures that all resources are available for this exercise.
[student@workstation ~]$ lab start pods-containers
Instructions
Log in to the OpenShift cluster and create the pods-containers project. Determine the UID and GID ranges for pods in the pods-containers project.

Log in to the OpenShift cluster as the developer user with the oc command.

[student@workstation ~]$ oc login -u developer -p developer \
  https://api.ocp4.example.com:6443
Login successful
...output omitted...

Create the pods-containers project.

[student@workstation ~]$ oc new-project pods-containers
Now using project "pods-containers" on server "https://api.ocp4.example.com:6443".
...output omitted...

Identify the UID and GID ranges for pods in the pods-containers project.

[student@workstation ~]$ oc describe project pods-containers
Name:            pods-containers
Created:         28 seconds ago
Labels:          kubernetes.io/metadata.name=pods-containers
                 pod-security.kubernetes.io/audit=restricted
                 pod-security.kubernetes.io/audit-version=v1.24
                 pod-security.kubernetes.io/warn=restricted
                 pod-security.kubernetes.io/warn-version=v1.24
Annotations:     openshift.io/description=
                 openshift.io/display-name=
                 openshift.io/requester=developer
                 openshift.io/sa.scc.mcs=s0:c28,c22
                 openshift.io/sa.scc.supplemental-groups=1000800000/10000
                 openshift.io/sa.scc.uid-range=1000800000/10000
Display Name:    <none>
Description:     <none>
Status:          Active
Node Selector:   <none>
Quota:           <none>
Resource limits: <none>

Your UID and GID range values might differ from the previous output.
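The sa.scc.uid-range and sa.scc.supplemental-groups annotations use a start/size format. As a minimal sketch, the boundaries of the range can be computed from the example value above (your project's value may differ) with shell parameter expansion:

```shell
# Parse a project's openshift.io/sa.scc.uid-range annotation (start/size format).
# The value below is copied from the example output; your range may differ.
RANGE="1000800000/10000"
START=${RANGE%/*}              # first UID in the block
SIZE=${RANGE#*/}               # number of UIDs in the block
END=$((START + SIZE - 1))      # last UID in the block
echo "Pods in this project run with UIDs ${START}-${END}"
```

Containers that a regular user creates run with the first UID in this range by default, which matches the 1000800000 value that appears later in this exercise.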
As the developer user, create a pod called ubi9-user from a UBI9 base container image. The image is available in the registry.ocp4.example.com:8443/ubi9/ubi container registry. Set the restart policy to Never and start an interactive session. Configure the pod to execute the whoami and id commands to determine the UIDs, supplemental groups, and GIDs of the container user in the pod. Delete the pod afterward.

After the ubi9-user pod is deleted, log in as the admin user and then re-create the ubi9-user pod. Retrieve the UIDs and GIDs of the container user. Compare the values to the values of the ubi9-user pod that the developer user created. Afterward, delete the ubi9-user pod.

Use the oc run command to create the ubi9-user pod. Configure the pod to execute the whoami and id commands through an interactive bash shell session.

[student@workstation ~]$ oc run -it ubi9-user --restart 'Never' \
  --image registry.ocp4.example.com:8443/ubi9/ubi \
  -- /bin/bash -c "whoami && id"
1000800000
uid=1000800000(1000800000) gid=0(root) groups=0(root),1000800000

Your values might differ from the previous output.

Notice that the user in the container has the same UID that is identified in the pods-containers project. However, the GID of the user in the container is 0, which means that the user belongs to the root group. Any files and directories that the container processes write to must be readable and writable by GID 0 and must be owned by the root group.

Although the user in the container belongs to the root group, a UID value over 1000 means that the user is an unprivileged account. When a regular OpenShift user, such as the developer user, creates a pod, the containers within the pod run as unprivileged accounts.

Delete the pod.
[student@workstation ~]$ oc delete pod ubi9-user
pod "ubi9-user" deleted

Log in as the admin user with the redhatocp password.

[student@workstation ~]$ oc login -u admin -p redhatocp
Login successful.

You have access to 71 projects, the list has been suppressed. You can list all projects with 'oc projects'

Using project "pods-containers".

Re-create the ubi9-user pod as the admin user. Configure the pod to execute the whoami and id commands through an interactive bash shell session. Compare the values of the UID and GID for the container user to the values of the ubi9-user pod that the developer user created.

Note
It is safe to ignore pod security warnings when using a cluster-admin user that creates unmanaged pods. The admin user can create privileged pods that the Security Context Constraints controller does not manage.

[student@workstation ~]$ oc run -it ubi9-user --restart 'Never' \
  --image registry.ocp4.example.com:8443/ubi9/ubi \
  -- /bin/bash -c "whoami && id"
Warning: would violate PodSecurity "restricted:v1.24": allowPrivilegeEscalation != false (container "ubi9-user" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "ubi9-user" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "ubi9-user" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "ubi9-user" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
root
uid=0(root) gid=0(root) groups=0(root)

Notice that the value of the UID is 0, which differs from the UID range of the pods-containers project. The user in the container is the privileged root account and belongs to the root group. When a cluster administrator creates a pod, the containers within the pod run as a privileged account by default.

Delete the ubi9-user pod.

[student@workstation ~]$ oc delete pod ubi9-user
pod "ubi9-user" deleted
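The PodSecurity warning shown earlier lists exactly which securityContext fields the restricted profile expects. As an illustrative sketch only (this file is not part of the exercise; the field names are taken directly from the warning text), a pod definition that satisfies those checks might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ubi9-user
spec:
  restartPolicy: Never
  containers:
  - name: ubi9-user
    image: registry.ocp4.example.com:8443/ubi9/ubi
    command: ["/bin/bash", "-c", "whoami && id"]
    securityContext:
      allowPrivilegeEscalation: false   # from the warning: must be false
      runAsNonRoot: true                # from the warning: must be true
      capabilities:
        drop: ["ALL"]                   # from the warning: drop all capabilities
      seccompProfile:
        type: RuntimeDefault            # from the warning: RuntimeDefault or Localhost
```

With these fields set, the restricted:v1.24 profile has nothing to warn about, even for pods that an administrator creates.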
As the developer user, use the oc run command to create a ubi9-date pod from a UBI9 base container image. The image is available in the registry.ocp4.example.com:8443/ubi9/ubi container registry. Set the restart policy to Never, and configure the pod to execute the date command. Retrieve the logs of the ubi9-date pod to confirm that the date command executed. Delete the pod afterward.

Log in as the developer user with the developer password.

[student@workstation ~]$ oc login -u developer -p developer
Login successful.

You have one project on this server: "pods-containers"

Using project "pods-containers".

Create a pod called ubi9-date that executes the date command.

[student@workstation ~]$ oc run ubi9-date --restart 'Never' \
  --image registry.ocp4.example.com:8443/ubi9/ubi -- date
pod/ubi9-date created

Wait a few moments for the creation of the pod. Then, retrieve the logs of the ubi9-date pod.

[student@workstation ~]$ oc logs ubi9-date
Mon Nov 28 15:02:55 UTC 2022

Delete the ubi9-date pod.

[student@workstation ~]$ oc delete pod ubi9-date
pod "ubi9-date" deleted
Use the oc run ubi9-command -it command to create a ubi9-command pod with the registry.ocp4.example.com:8443/ubi9/ubi container image. Add /bin/bash to the oc run command to start an interactive shell. Exit the pod and view the logs for the ubi9-command pod with the oc logs command. Then, connect to the ubi9-command pod with the oc attach command, and issue the following command:

while true; do echo $(date); sleep 2; done

This command executes the date and sleep commands to generate output to the console every two seconds. Use the oc logs command to retrieve the logs of the ubi9-command pod, and confirm that the logs display the executed date and sleep commands.

Create a pod called ubi9-command and start an interactive shell.

[student@workstation ~]$ oc run ubi9-command -it \
  --image registry.ocp4.example.com:8443/ubi9/ubi -- /bin/bash
If you don't see a command prompt, try pressing enter.
bash-5.1$

Exit the shell session.

bash-5.1$ exit
exit
Session ended, resume using 'oc attach ubi9-command -c ubi9-command -i -t' command when the pod is running

Use the oc logs command to view the logs of the ubi9-command pod.

[student@workstation ~]$ oc logs ubi9-command
bash-5.1$
[student@workstation ~]$

The pod's command prompt is returned. The oc logs command displays the pod's current stdout and stderr output in the console. Because you disconnected from the interactive session, the pod's current stdout is the command prompt, and not the commands that you executed previously.

Use the oc attach command to connect to the ubi9-command pod again. In the shell, execute the while true; do echo $(date); sleep 2; done command to continuously generate stdout output.

[student@workstation ~]$ oc attach ubi9-command -it
If you don't see a command prompt, try pressing enter.
bash-5.1$ while true; do echo $(date); sleep 2; done
Mon Nov 28 15:15:16 UTC 2022
Mon Nov 28 15:15:18 UTC 2022
Mon Nov 28 15:15:20 UTC 2022
Mon Nov 28 15:15:22 UTC 2022
...output omitted...

Open another terminal window and view the logs for the ubi9-command pod with the oc logs command. Limit the log output to the last 10 entries with the --tail option. Confirm that the logs display the results of the command that you executed in the container.

[student@workstation ~]$ oc logs ubi9-command --tail=10
Mon Nov 28 15:15:16 UTC 2022
Mon Nov 28 15:15:18 UTC 2022
Mon Nov 28 15:15:20 UTC 2022
Mon Nov 28 15:15:22 UTC 2022
Mon Nov 28 15:15:24 UTC 2022
Mon Nov 28 15:15:26 UTC 2022
Mon Nov 28 15:15:28 UTC 2022
Mon Nov 28 15:15:30 UTC 2022
Mon Nov 28 15:15:32 UTC 2022
Mon Nov 28 15:15:34 UTC 2022
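The infinite loop in this step is just date plus sleep. A bounded stand-in (three iterations, so it terminates; this variant is for local illustration only, not part of the exercise) produces the same output pattern that oc logs later displays:

```shell
# Bounded stand-in for the exercise's infinite loop:
# three timestamps instead of running forever.
for i in 1 2 3; do
  echo "$(date)"
  sleep 0.1
done
```

Because the loop writes to stdout, each line it prints becomes a log entry that oc logs can retrieve, which is why the --tail output in the next command shows the timestamps.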
Identify the name for the container in the ubi9-command pod. Identify the process ID (PID) for the container in the ubi9-command pod by using a debug pod for the pod's host node. Use the crictl command to identify the PID of the container in the ubi9-command pod. Then, retrieve the PID of the container in the debug pod.

Identify the container name in the ubi9-command pod with the oc get command. Specify the JSON format for the command output. Parse the JSON output with the jq command to retrieve the value of the .status.containerStatuses[].name object.

[student@workstation ~]$ oc get pod ubi9-command -o json | \
  jq .status.containerStatuses[].name
"ubi9-command"

The ubi9-command pod has a single container of the same name.

Find the host node for the ubi9-command pod. Start a debug pod for the host with the oc debug command.

[student@workstation ~]$ oc get pods ubi9-command -o wide
NAME           READY   STATUS    RESTARTS      AGE   IP          NODE       NOMINATED NODE   READINESS GATES
ubi9-command   1/1     Running   2 (16m ago)   27m   10.8.0.26   master01   <none>           <none>

[student@workstation ~]$ oc debug node/master01
Error from server (Forbidden): nodes "master01" is forbidden: User "developer" cannot get resource "nodes" in API group "" at the cluster scope

The debug pod fails because the developer user does not have the required permission to debug a host node.

Log in as the admin user with the redhatocp password. Start a debug pod for the host with the oc debug command. After connecting to the debug pod, run the chroot /host command to use host binaries, such as the crictl command-line tool.

[student@workstation ~]$ oc login -u admin -p redhatocp
Login successful.
...output omitted...

[student@workstation ~]$ oc debug node/master01
Starting pod/master01-debug ...
To use host binaries, run `chroot /host`
Pod IP: 192.168.50.10
If you don't see a command prompt, try pressing enter
sh-4.4# chroot /host

Use the crictl ps command to retrieve the ubi9-command container ID. Specify the ubi9-command container with the --name option and use the JSON output format. Parse the JSON output with the jq -r command to get raw, unquoted output. Export the container ID as the $CID environment variable.

Note
When using jq without the -r flag, the container ID is wrapped in double quotes, which does not work with crictl commands. If the -r flag is not used, then you can add | tr -d '"' to the end of the command to trim the double quotes.

sh-5.1# crictl ps --name ubi9-command -o json | jq -r .containers[0].id
81adbc6222d79ed9ba195af4e9d36309c18bb71bc04b2e8b5612be632220e0d6

sh-5.1# CID=$(crictl ps --name ubi9-command -o json | jq -r .containers[0].id)
sh-5.1# echo $CID
81adbc6222d79ed9ba195af4e9d36309c18bb71bc04b2e8b5612be632220e0d6

Your container ID value might differ from the previous output.
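The quote-trimming fallback that the note describes can be seen with a plain string. The ID below is a shortened, made-up example value, not a real container ID:

```shell
# jq without -r leaves JSON string quotes in place; tr -d '"' strips them.
# The value below is a made-up example container ID.
QUOTED='"81adbc6222d7"'
echo "$QUOTED" | tr -d '"'   # prints 81adbc6222d7
```

Either form produces the bare ID string that crictl commands expect.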
Use the crictl inspect command to find the PID of the ubi9-command container. The PID value is in the .info.pid object in the crictl inspect output. Export the ubi9-command container PID as the $PID environment variable.

sh-5.1# crictl inspect $CID | grep pid
    "pid": 365297,
    "pids": {
      "type": "pid"
...output omitted...

sh-5.1# PID=365297

Your PID values might differ from the previous output.
Use the lsns command to list the system namespaces of the ubi9-command container. Confirm that the running processes in the container are isolated to different system namespaces.

View the system namespaces of the ubi9-command container with the lsns command. Specify the PID with the -p option and use the $PID environment variable. In the resulting table, the NS column contains the namespace values for the container.

sh-5.1# lsns -p $PID
        NS TYPE   NPROCS    PID USER       COMMAND
4026531835 cgroup    540      1 root       /usr/lib/systemd/systemd --switched-root --system --deserialize 16
4026531837 user      540      1 root       /usr/lib/systemd/systemd --switched-root --system --deserialize 16
4026536117 uts         1 153168 1000800000 /bin/bash
4026536118 ipc         1 153168 1000800000 /bin/bash
4026536120 net         1 153168 1000800000 /bin/bash
4026537680 mnt         1 153168 1000800000 /bin/bash
4026537823 pid         1 153168 1000800000 /bin/bash

Your namespace values might differ from the previous output.
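The lsns command reads the same per-process namespace data that the kernel exposes under /proc/<PID>/ns. As a quick local illustration (run against the current shell rather than a container), you can list those entries directly:

```shell
# Each entry under /proc/<PID>/ns names one namespace that the process belongs to.
# $$ is the current shell's PID; a containerized process has its own set of these links.
ls /proc/$$/ns
```

Two processes share a namespace when the corresponding links resolve to the same inode, which is how lsns groups processes in its output.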
Use the host debug pod to retrieve and compare the operating system (OS) and the GCC support library (libgcc) package version of the ubi9-command container and the host node.

Retrieve the OS for the host node with the cat /etc/redhat-release command.

sh-5.1# cat /etc/redhat-release
Red Hat Enterprise Linux CoreOS release 4.14

Use the crictl exec command and the $CID container ID variable to retrieve the OS of the ubi9-command container. Use the -it options to create an interactive terminal to execute the cat /etc/redhat-release command.

sh-5.1# crictl exec -it $CID cat /etc/redhat-release
Red Hat Enterprise Linux release 9.1 (Plow)

The ubi9-command container has a different OS from the host node.

Use the rpm -qi libgcc command to retrieve the libgcc package version of the host node.

sh-5.1# rpm -qi libgcc
Name    : libgcc
Version : 11.3.1
...output omitted...

Use the crictl exec command and the $CID container ID variable to retrieve the libgcc package version of the ubi9-command container. Use the -it options to create an interactive terminal to execute the rpm -qi libgcc command.

sh-5.1# crictl exec -it $CID rpm -qi libgcc
Name    : libgcc
Version : 11.4.1
...output omitted...

The ubi9-command container has a different version of the libgcc package from its host.
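Version strings such as the two libgcc versions above can be compared with sort -V (version sort), which orders release numbers component by component rather than alphabetically:

```shell
# Version-sort the host (11.3.1) and container (11.4.1) libgcc versions
# from the previous outputs; the newest version prints last.
printf '11.3.1\n11.4.1\n' | sort -V | tail -n 1   # prints 11.4.1
```

This confirms that the container ships a newer libgcc than the host, which is expected: the container image carries its own user-space packages, independent of the node.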
Exit the master01-debug pod and the ubi9-command pod.

Exit the master01-debug pod. You must issue the exit command to end the host binary access. Execute the exit command again to exit and remove the master01-debug pod.

sh-5.1# exit
exit
sh-4.4# exit
exit
Removing debug pod ...
Temporary namespace openshift-debug-bg7kn was removed.

Return to the terminal window that is connected to the ubi9-command pod. Press Ctrl+C and then execute the exit command. Confirm that the pod is still running.

...output omitted...
^C
bash-5.1$ exit
exit
Session ended, resume using 'oc attach ubi9-command -c ubi9-command -i -t' command when the pod is running

[student@workstation ~]$ oc get pods
NAME           READY   STATUS    RESTARTS     AGE
ubi9-command   1/1     Running   2 (6s ago)   35m