- Create Linux Containers and Kubernetes Pods
- Guided Exercise: Create Linux Containers and Kubernetes Pods
- Find and Inspect Container Images
- Guided Exercise: Find and Inspect Container Images
- Troubleshoot Containers and Pods
- Guided Exercise: Troubleshoot Containers and Pods
- Lab: Run Applications as Containers and Pods
- Quiz: Run Applications as Containers and Pods
- Summary
Lab: Run Applications as Containers and Pods

Abstract

Goal: Run containers inside pods and identify the host OS processes and namespaces that the containers use.
Kubernetes and OpenShift offer many ways to create containers in pods.
You can use one such way, the run command, with the kubectl or oc CLI to create and deploy an application in a pod from a container image.
A container image contains immutable data that defines an application and its libraries.
Note
Container images are discussed in more detail elsewhere in the course.
The run command uses the following syntax:
oc run RESOURCE/NAME --image IMAGE [options]

For example, the following command deploys an Apache HTTPD application in a pod named web-server that uses the registry.access.redhat.com/ubi8/httpd-24 container image.
[user@host ~]$ kubectl run web-server --image registry.access.redhat.com/ubi8/httpd-24

You can use several options and flags with the run command.
The --command option executes a custom command and its arguments in a container, rather than the default command that is defined in the container image.
You must follow the --command option with a double dash (--) to separate the custom command and its arguments from the run command options.
The following syntax is used with the --command option:
oc run RESOURCE/NAME --image IMAGE --command -- cmd arg1 ... argN

You can also use the double dash option to provide custom arguments to a default command in the container image.
kubectl run RESOURCE/NAME --image IMAGE -- arg1 arg2 ... argN

To start an interactive session with a container in a pod, include the -it options before the pod name.
The -i option tells Kubernetes to keep open the standard input (stdin) on the container in the pod.
The -t option tells Kubernetes to open a TTY session for the container in the pod.
You can use the -it options to start an interactive, remote shell in a container.
From the remote shell, you can then execute additional commands in the container.
The following example starts an interactive remote shell, /bin/bash, in the default container in the my-app pod.
[user@host ~]$ oc run -it my-app --image registry.access.redhat.com/ubi9/ubi \
--command -- /bin/bash
If you don't see a command prompt, try pressing enter.
bash-5.1$

Note
Unless you include the --namespace or -n options, the run command creates containers in pods in the currently selected project.
You can also define a restart policy for containers in a pod by including the --restart option.
A pod restart policy determines how the cluster should respond when containers in that pod exit.
The --restart option has the following accepted values: Always, OnFailure, and Never.
- Always: If the restart policy is set to Always, then the cluster continuously tries to restart a successfully exited container, for up to five minutes. The default pod restart policy is Always. If the --restart option is omitted, then the pod is configured with the Always policy.

- OnFailure: Setting the pod restart policy to OnFailure tells the cluster to restart only failed containers in the pod, for up to five minutes.

- Never: If the restart policy is set to Never, then the cluster does not try to restart exited or failed containers in a pod. Instead, the pods immediately fail and exit.
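The --restart option corresponds to the restartPolicy field in the pod specification. As a sketch, the pod that the earlier oc run command with --restart Never creates is roughly equivalent to the following definition (names and image are taken from the examples in this section):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  restartPolicy: Never          # Accepted values: Always | OnFailure | Never
  containers:
  - name: my-app
    image: registry.access.redhat.com/ubi9/ubi
    command: ["date"]           # Equivalent to --command -- date
```

Defining the pod declaratively in a file lets you version and reapply the same configuration, instead of repeating run command options.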
The following example command executes the date command in the container of the pod named my-app, redirects the date command output to the terminal, and defines Never as the pod restart policy.
[user@host ~]$ oc run -it my-app \
--image registry.access.redhat.com/ubi9/ubi \
--restart Never --command -- date
Mon Feb 20 22:36:55 UTC 2023

To automatically delete a pod after it exits, include the --rm option with the run command.
[user@host ~]$ kubectl run -it my-app --rm \
--image registry.access.redhat.com/ubi9/ubi \
--restart Never --command -- date
Mon Feb 20 22:38:50 UTC 2023
pod "my-app" deleted

For some containerized applications, you might need to specify environment variables for the application to work.
To specify an environment variable and its value, include the --env= option with the run command.
[user@host ~]$ oc run mysql \
--image registry.redhat.io/rhel9/mysql-80 \
--env MYSQL_ROOT_PASSWORD=myP@$$123
pod/mysql created

When a project is created, OpenShift adds annotations to the project that determine the user ID (UID) range and the supplemental group ID (GID) range for pods and their containers in the project.
You can retrieve the annotations with the oc describe project project-name command.

[user@host ~]$ oc describe project my-app
Name:         my-app
...output omitted...
Annotations:  openshift.io/description=
              openshift.io/display-name=
              openshift.io/requester=developer
              openshift.io/sa.scc.mcs=s0:c27,c4
              openshift.io/sa.scc.supplemental-groups=1000710000/10000
              openshift.io/sa.scc.uid-range=1000710000/10000
...output omitted...
With OpenShift default security policies, regular cluster users cannot choose the USER or UIDs for their containers.
When a regular cluster user creates a pod, OpenShift ignores the USER instruction in the container image.
Instead, OpenShift assigns to the user in the container a UID and a supplemental GID from the identified range in the project annotations.
The GID of the user is always 0, which means that the user belongs to the root group.
Any files and directories to which the container processes write must be readable and writable by GID 0 and must have the root group as the owner.
Although the user in the container belongs to the root group, the user is an unprivileged account.
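To meet this requirement, image authors commonly grant the root group the same permissions as the image owner on writable paths. A sketch of such instructions in a Containerfile (the directory path is illustrative):

```dockerfile
# Give the root group (GID 0) the same permissions as the file owner,
# so that the arbitrary, OpenShift-assigned UID can write to this path.
RUN chgrp -R 0 /var/www && \
    chmod -R g=u /var/www
```

With these permissions in place, the container runs correctly regardless of which UID from the project range OpenShift assigns.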
In contrast, when a cluster administrator creates a pod, the USER instruction in the container image is processed.
For example, if the USER instruction for the container image is set to 0, then the user in the container is the root privileged account, with a 0 value for the UID.
Executing a container as a privileged account is a security risk.
A privileged account in a container has unrestricted access to the container's host system.
Unrestricted access means that the container could modify or delete system files, install software, or otherwise compromise its host.
Red Hat therefore recommends that you run containers as rootless, or as an unprivileged user with only the necessary privileges for the container to run.
Red Hat also recommends that you run containers from different applications with unique user IDs. Running containers from different applications with the same UID, even an unprivileged one, is a security risk. If the UID for two containers is the same, then the processes in one container could access the resources and files of the other container. By assigning a distinct range of UIDs and GIDs for each project, OpenShift ensures that applications in different projects do not run as the same UID or GID.
The Kubernetes Pod Security Admission controller issues a warning when a pod is created without a defined security context. Security contexts grant or deny OS-level privileges to pods. OpenShift uses the Security Context Constraints controller to provide safe defaults for pod security. You can ignore pod security warnings in these course exercises. Security Context Constraints (SCC) are discussed in more detail in course DO280: Red Hat OpenShift Administration II: Operating a Production Kubernetes Cluster.
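As a sketch, a container security context that satisfies the restricted defaults, and so avoids the admission warning, might look like the following (the pod and container names are illustrative; the fields follow the Kubernetes Pod Security Standards):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: registry.access.redhat.com/ubi9/ubi
    securityContext:
      runAsNonRoot: true              # Refuse to start if the image resolves to UID 0
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]                 # Drop all Linux capabilities
      seccompProfile:
        type: RuntimeDefault
```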
To execute a command in a running container in a pod, you can use the exec command with the kubectl or oc CLI.
The exec command uses the following syntax:
oc exec RESOURCE/NAME -- COMMAND [args...] [options]

The output of the executed command is sent to your terminal.
In the following example, the exec command executes the date command in the my-app pod.
[user@host ~]$ oc exec my-app -- date
Tue Feb 21 20:43:53 UTC 2023

The specified command is executed in the first container of a pod.
For multicontainer pods, include the -c or --container= options to specify which container is used to execute the command.
The following example executes the date command in a container named ruby-container in the my-app pod.
[user@host ~]$ kubectl exec my-app -c ruby-container -- date
Tue Feb 21 20:46:50 UTC 2023

The exec command also accepts the -i and -t options to create an interactive session with a container in a pod.
In the following example, Kubernetes sends stdin to the bash shell in the ruby-container container from the my-app pod, and sends stdout and stderr from the bash shell back to the terminal.
[user@host ~]$ oc exec my-app -c ruby-container -it -- bash -il
[1000780000@ruby-container /]$

In the previous example, a raw terminal is opened in the ruby-container container.
From this interactive session, you can execute additional commands in the container.
To terminate the interactive session, you must execute the exit command in the raw terminal.
[user@host ~]$ kubectl exec my-app -c ruby-container -it -- bash -il
[1000780000@ruby-container /]$ date
Tue Feb 21 21:16:00 UTC 2023
[1000780000@ruby-container /]$ exit
Container logs are the standard output (stdout) and standard error (stderr) output of a container.
You can retrieve logs with the logs pod-name command that the kubectl and oc CLIs provide.
The command includes the following options:
- -l or --selector='': Filter objects based on the specified key:value label constraint.

- --tail=: Specify the number of lines of recent log files to display; the default value is -1 with no selectors, which displays all log lines.

- -c or --container=: Print the logs of a particular container in a multicontainer pod.

- -f or --follow: Follow, or stream, the logs for a container.

- -p or --previous=true: Print the logs of a previous container instance in the pod, if it exists. This option is helpful for troubleshooting a pod that failed to start, because it prints the logs of the last attempt.
The following example restricts oc logs command output to the 10 most recent log lines:
[user@host ~]$ oc logs postgresql-1-jw89j --tail=10
done
server stopped
Starting server...
2023-01-04 22:00:16.945 UTC [1] LOG: starting PostgreSQL 12.11 on x86_64-redhat-linux-gnu, compiled by gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-10), 64-bit
2023-01-04 22:00:16.946 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2023-01-04 22:00:16.946 UTC [1] LOG: listening on IPv6 address "::", port 5432
2023-01-04 22:00:16.953 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2023-01-04 22:00:16.960 UTC [1] LOG: listening on Unix socket "/tmp/.s.PGSQL.5432"
2023-01-04 22:00:16.968 UTC [1] LOG: redirecting log output to logging collector process
2023-01-04 22:00:16.968 UTC [1] HINT:  Future log output will appear in directory "log".

You can also use the attach command to connect to and start an interactive session on a running container in a pod.
The attach command uses the syntax oc attach pod-name -c container-name -it. The -c container-name option is required for multicontainer pods.
If the container name is omitted, then Kubernetes uses the kubectl.kubernetes.io/default-container annotation on the pod to select the container.
Otherwise, the first container in the pod is chosen.
You can use the interactive session to retrieve application log files and to troubleshoot application issues.
[user@host ~]$ oc attach my-app -it
If you don't see a command prompt, try pressing enter.
bash-4.4$

You can also retrieve logs from the web console by clicking the Logs tab of any pod.
If you have more than one container, then you can change between them to list the logs of each one.
You can delete Kubernetes resources, such as pod resources, with the delete command.
The delete command can delete resources by resource type and name, resource type and label, standard input (stdin), and with JSON- or YAML-formatted files.
The command accepts only one argument type at a time.
For example, you can supply the resource type and name as a command argument.
[user@host ~]$ oc delete pod php-app

You can also delete pods from the web console by selecting the delete action in the pod's options menu.
To select resources based on labels, you can include the -l option and the key:value label as a command argument.
[user@host ~]$ kubectl delete pod -l app=my-app
pod "php-app" deleted
pod "mysql-db" deleted

You can also provide the resource type and a JSON- or YAML-formatted file that specifies the name of the resource.
To use a file, you must include the -f option and provide the full path to the JSON- or YAML-formatted file.
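For deletion, the file only needs enough information to identify the resource. A minimal sketch of a php-app.json file for the pod used in these examples (only the apiVersion, kind, and metadata.name fields are required to identify the pod):

```json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "php-app"
  }
}
```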
[user@host ~]$ oc delete pod -f ~/php-app.json
pod "php-app" deleted

You can also use stdin and a JSON- or YAML-formatted file that includes the resource type and resource name with the delete command.
[user@host ~]$ cat ~/php-app.json | kubectl delete -f -
pod "php-app" deleted

Pods support graceful termination, which means that pods try to terminate their processes before Kubernetes forcibly terminates the pods.
To change the time period before a pod is forcibly terminated, you can include the --grace-period flag and a time period in seconds in your delete command.
For example, to change the grace period to 10 seconds, use the following command:
[user@host ~]$ oc delete pod php-app --grace-period=10

To shut down the pod immediately, set the grace period to 1 second.
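The grace period can also be set declaratively in the pod specification with the terminationGracePeriodSeconds field, which defaults to 30 seconds. A sketch, reusing the php-app pod from these examples:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: php-app
spec:
  terminationGracePeriodSeconds: 10   # Seconds to wait before forcible termination
  containers:
  - name: php-app
    image: registry.access.redhat.com/ubi8/httpd-24
```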
You can also use the --now flag to set the grace period to 1 second.
[user@host ~]$ oc delete pod php-app --now

You can also forcibly delete a pod with the --force option.
If you forcibly delete a pod, Kubernetes does not wait for a confirmation that the pod's processes ended, which can leave the pod's processes running until its node detects the deletion.
Therefore, forcibly deleting a pod could result in inconsistency or data loss.
Forcibly delete pods only if you are sure that the pod's processes are terminated.
[user@host ~]$ kubectl delete pod php-app --force

To delete all pods in a project, you can include the --all option.
[user@host ~]$ kubectl delete pods --all
pod "php-app" deleted
pod "mysql-db" deleted

Likewise, you can delete a project and its resources with the oc delete project project-name command.
[user@host ~]$ oc delete project my-app
project.project.openshift.io "my-app" deleted

A container engine is required to run containers.
Worker and control plane nodes in an OpenShift Container Platform cluster use the CRI-O container engine to run containers.
Unlike tools such as Podman or Docker, the CRI-O container engine is a runtime that is designed and optimized specifically for running containers in a Kubernetes cluster.
Because CRI-O meets the Kubernetes Container Runtime Interface (CRI) standards, the container engine can integrate with other Kubernetes and OpenShift tools, such as networking and storage plug-ins.
Note
For more information about the Kubernetes Container Runtime Interface (CRI) standards, refer to the CRI-API repository at https://github.com/kubernetes/cri-api.
CRI-O provides a command-line interface to manage containers with the crictl command.
The crictl command includes several subcommands to help you to manage containers.
The following subcommands are commonly used with the crictl command:
- crictl pods: Lists all pods on a node.

- crictl image: Lists all images on a node.

- crictl inspect: Retrieves the status of one or more containers.

- crictl exec: Runs a command in a running container.

- crictl logs: Retrieves the logs of a container.

- crictl ps: Lists running containers on a node.
To manage containers with the crictl command, you must first identify the node that is hosting your containers.
[user@host ~]$ kubectl get pods -o wide
NAME                  READY   STATUS      RESTARTS   AGE   IP          NODE
postgresql-1-8lzf2    1/1     Running     0          20m   10.8.0.64   master01
postgresql-1-deploy   0/1     Completed   0          21m   10.8.0.63   master01
[user@host ~]$ oc get pod postgresql-1-8lzf2 -o jsonpath='{.spec.nodeName}{"\n"}'
master01

Next, you must connect to the identified node as a cluster administrator. Cluster administrators can use SSH to connect to a node or create a debug pod for the node. Regular users cannot connect to or create debug pods for cluster nodes.
As a cluster administrator, you can create a debug pod for a node with the oc debug node/node-name command.
OpenShift creates the pod/node-name-debug pod in your currently selected project and automatically connects you to the pod.
You must then enable access to host binaries, such as the crictl command, with the chroot /host command.
The debug pod mounts the host's root file system in the /host directory; by changing the root directory to /host, you can run binaries in the host's executable path.
[user@host ~]$ oc debug node/master01
Starting pod/master01-debug ...
To use host binaries, run `chroot /host`
Pod IP: 192.168.50.10
If you don't see a command prompt, try pressing enter.
sh-4.4# chroot /host
After enabling host binaries, you can use the crictl command to manage the containers on the node.
For example, you can use the crictl ps and crictl inspect commands to retrieve the process ID (PID) of a running container.
You can then use the PID to retrieve or enter the namespaces within a container, which is useful for troubleshooting application issues.
To find the PID of a running container, you must first determine the container's ID.
You can use the crictl ps command with the --name option to filter the command output to a specific container.
sh-5.1# crictl ps --name postgresql
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
27943ae4f3024   image...7104   5...ago   Running   postgresql   0   5768...f015   postgresql-1...

The default output of the crictl ps command is a table.
You can find the short container ID under the CONTAINER column.
You can also use the -o or --output options to specify the format of the crictl ps command as JSON or YAML and then parse the output.
The parsed output displays the full container ID.
sh-5.1# crictl ps --name postgresql -o json | jq .containers[0].id
"2794...29a4"

After identifying the container ID, you can use the crictl inspect command and the container ID to retrieve the PID of the running container.
By default, the crictl inspect command displays verbose output.
You can use the -o or --output options to format the command output as JSON, YAML, a table, or as a Go template.
If you specify the JSON format, you can then parse the output with the jq command.
Likewise, you can use the grep command to limit the command output.
sh-5.1# crictl inspect -o json 27943ae4f3024 | jq .info.pid
43453
sh-5.1# crictl inspect 27943ae4f3024 | grep pid
    "pid": 43453,
...output omitted...
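The jq filters above assume that jq is available on the node. Where it is not, a POSIX sed expression can extract the PID from the JSON output. The following sketch runs against a stand-in sample file (the file path, JSON shape, and PID value are illustrative, standing in for real crictl inspect output):

```shell
# Stand-in for `crictl inspect -o json CONTAINER_ID` output.
cat > /tmp/crictl-inspect.json <<'EOF'
{"info":{"pid":43453}}
EOF

# Extract the numeric "pid" value without jq.
sed -n 's/.*"pid":[[:space:]]*\([0-9][0-9]*\).*/\1/p' /tmp/crictl-inspect.json
```

This prints only the PID, which you can then pass to commands such as lsns or nsenter.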
After determining the PID of a running container, you can use the lsns -p PID command to list the system namespaces of a container.
sh-5.1# lsns -p 43453
NS TYPE NPROCS PID USER COMMAND
4026531835 cgroup 530 1 root /usr/lib/systemd/systemd --switched-root --system --deserialize 17
4026531837 user 530 1 root /usr/lib/systemd/systemd --switched-root --system --deserialize 17
4026537853 uts 8 43453 1000690000 postgres
4026537854 ipc 8 43453 1000690000 postgres
4026537856 net 8 43453 1000690000 postgres
4026538013 mnt 8 43453 1000690000 postgres
4026538014 pid 8 43453 1000690000 postgres

You can also use the PID of a running container with the nsenter command to enter a specific namespace of a running container.
For example, you can use the nsenter command to execute a command within a specified namespace on a running container.
The following example executes the ps -ef command within the process namespace of a running container.
sh-5.1# nsenter -t 43453 -p -r ps -ef
UID PID PPID C STIME TTY TIME CMD
1000690+ 1 0 0 18:49 ? 00:00:00 postgres
1000690+ 58 1 0 18:49 ? 00:00:00 postgres: logger
1000690+ 60 1 0 18:49 ? 00:00:00 postgres: checkpointer
1000690+ 61 1 0 18:49 ? 00:00:00 postgres: background writer
1000690+ 62 1 0 18:49 ? 00:00:00 postgres: walwriter
1000690+ 63 1 0 18:49 ? 00:00:00 postgres: autovacuum launcher
1000690+ 64 1 0 18:49 ? 00:00:00 postgres: stats collector
1000690+ 65 1 0 18:49 ? 00:00:00 postgres: logical replication launcher
root 7414 0 0 20:14 ? 00:00:00 ps -ef

The -t option specifies the PID of the running container as the target PID for the nsenter command.
The -p option directs the nsenter command to enter the process or pid namespace.
The -r option sets the root directory to that of the target process, enabling commands to execute in the context of the container's file system.
You can also use the -a option to execute a command in all of the container's namespaces.
sh-5.1# nsenter -t 43453 -a ps -ef
UID PID PPID C STIME TTY TIME CMD
1000690+ 1 0 0 18:49 ? 00:00:00 postgres
1000690+ 58 1 0 18:49 ? 00:00:00 postgres: logger
1000690+ 60 1 0 18:49 ? 00:00:00 postgres: checkpointer
1000690+ 61 1 0 18:49 ? 00:00:00 postgres: background writer
1000690+ 62 1 0 18:49 ? 00:00:00 postgres: walwriter
1000690+ 63 1 0 18:49 ? 00:00:00 postgres: autovacuum launcher
1000690+ 64 1 0 18:49 ? 00:00:00 postgres: stats collector
1000690+ 65 1 0 18:49 ? 00:00:00 postgres: logical replication launcher
root 10058 0 0 20:45 ? 00:00:00 ps -ef

References
Container Runtime Interface (CRI) CLI
For more information about resource log files, refer to the Viewing Logs for a Resource chapter in the Red Hat OpenShift Container Platform 4.14 Logging documentation at https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html-single/logging/index