Guided Exercise: Monitor an OpenShift Cluster

Assess the overall status of an OpenShift cluster by using the web console, and identify the projects and pods that run the core architectural components of Kubernetes and OpenShift.

Outcomes

  • Explore and describe the monitoring features and components.

  • Explore the Overview page to inspect the cluster status.

  • Use a terminal connection to the master01 node to view the crio and kubelet services.

  • Explore the Monitoring page, alert rule configurations, and the etcd service dashboard.

  • Explore the Events page, and filter events by resource name, type, and message.

As the student user on the workstation machine, use the lab command to prepare your system for this exercise.

This command ensures that the cluster is prepared for the exercise.

[student@workstation ~]$ lab start intro-monitor

Instructions

  1. As the developer user, locate and then navigate to the Red Hat OpenShift web console.

    1. Use the terminal to log in to the OpenShift cluster as the developer user with the developer password.

      [student@workstation ~]$ oc login -u developer -p developer \
        https://api.ocp4.example.com:6443
      ...output omitted...
    2. Identify the URL for the OpenShift web console.

      [student@workstation ~]$ oc whoami --show-console
      https://console-openshift-console.apps.ocp4.example.com
    3. Open a web browser and navigate to https://console-openshift-console.apps.ocp4.example.com. Either type the URL in the web browser, or right-click the URL in the terminal and select Open Link.

  2. Log in to the OpenShift web console as the admin user.

    1. Click Red Hat Identity Management and log in as the admin user with the redhatocp password.

  3. View the cluster health and overall status.

    1. Review the Cluster Overview page.

      If you do not see this page after a successful login, then use the navigation panel on the left side of the OpenShift web console. If you do not see the left panel, then click the main menu icon at the upper left of the web console. Navigate to Home → Overview to view general cluster information.

      The Overview section contains links to helpful documentation and an initial cluster configuration walkthrough.

    2. Scroll down to view the Status section, which provides a summary of cluster performance and health.

      Many of the headings are links to sections with more detailed cluster information.

    3. Continue scrolling to view the Cluster utilization section, which contains metrics and graphs that show resource consumption.

    4. Continue scrolling to view the Details section, including information such as the cluster API address, cluster ID, and Red Hat OpenShift version.

    5. Scroll to the Cluster Inventory section, which contains links to the Nodes, Pods, StorageClasses, and PersistentVolumeClaims pages.

    6. The last part of the page contains the Activity section, which lists ongoing activities and recent events for the cluster.
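
      Note

      A similar summary is available from the command line. The following commands are a minimal sketch, and assume that you are logged in to the cluster as a user with cluster administrator privileges; the developer user cannot list these cluster-scoped resources.

      [student@workstation ~]$ oc get clusterversion
      ...output omitted...
      [student@workstation ~]$ oc get clusteroperators
      ...output omitted...
      [student@workstation ~]$ oc get nodes
      ...output omitted...

      The clusterversion and clusteroperators resources report the version and operator health that the Status section summarizes, and the oc get nodes command reports the readiness of the nodes that the Cluster Inventory section counts.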

  4. Use the OpenShift web console to access the terminal of a cluster node. From the terminal, determine the status of the kubelet node agent service and the CRI-O container runtime interface service.

    1. Navigate to Compute → Nodes to view the machine that provides the cluster resources.

      Note

      The classroom cluster runs on a single node named master01, which serves as the control and data planes for the cluster, and is intended for training purposes. A production cluster uses multiple nodes to ensure stability and to provide a highly available architecture.

    2. Click the master01 link to view the details of the cluster node.

    3. Click the Terminal tab to connect to a shell on the master01 node.

      With the interactive shell on this page, you can run commands directly on the cluster node.

    4. Run the chroot /host command so that you can run the host binaries on the node.

    5. View the status of the kubelet node agent service by running the systemctl status kubelet command.

      Press q to exit the command and to return to the terminal prompt.

    6. View the status of the CRI-O container runtime interface service by running the systemctl status crio command.

      Press q to exit the command and to return to the terminal prompt.
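
      Note

      You can perform the same checks from the workstation without the web console terminal. The following sketch assumes that you are logged in as a cluster administrator; the shell prompt in the debug pod might differ from the one shown.

      [student@workstation ~]$ oc debug node/master01
      ...output omitted...
      sh-4.4# chroot /host
      sh-4.4# systemctl status kubelet
      ...output omitted...
      sh-4.4# systemctl status crio
      ...output omitted...

      The oc debug node command starts a temporary pod with host access on the node, which is equivalent to using the Terminal tab in the web console. Exit the shell twice to end the session and remove the temporary pod.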

  5. Inspect the cluster monitoring and alert rule configurations.

    1. From the OpenShift web console menu, navigate to Observe → Alerting to view cluster alert information.

    2. Select the Alerting rules tab to view the various alert definitions.

    3. Filter the alerting rules by name and search for the storageClasses term.

    4. Select the Warning alert that is labeled MultipleDefaultStorageClasses to view the details of the alerting rule. Inspect the Description and Expression definition for the rule.
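
      Note

      Alerting rules are stored in the cluster as PrometheusRule resources, so you can also list them from the command line. This is a sketch that assumes a cluster administrator login; the namespace that defines a particular rule can vary.

      [student@workstation ~]$ oc get prometheusrules -n openshift-monitoring
      ...output omitted...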

  6. Inspect cluster metrics and execute an example query.

    1. Navigate to Observe → Metrics to open the cluster metrics utility.

    2. Click Insert example query to populate the metrics graph with sample data.

    3. From the graph, hover over any point on the timeline to view the detailed data points.
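
      You can also type your own PromQL expressions into the query field. The following expression is only an illustrative assumption, and is not necessarily the query that the Insert example query button generates; it charts the available memory that each node reports.

      sum(node_memory_MemAvailable_bytes) by (instance)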

  7. View the cluster events log from the web console.

    1. Navigate to Home → Events to open the cluster events log.

      Note

      The event log updates every 15 minutes and can require additional time to populate entries.

    2. Scroll down to view a chronologically ordered stream that contains cluster events.

      Note

      Select an event to open the Details page of the related resource.
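
      The event stream is also available from the command line. As a sketch, assuming a cluster administrator login, the following command lists events from all namespaces, sorted by the time of their last occurrence:

      [student@workstation ~]$ oc get events -A --sort-by='.lastTimestamp'
      ...output omitted...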

  8. Filter the events by resource name, type, or message.

    1. From the Resources drop-down, use the search bar to filter for the pod term, and select the box labeled Pod to display events that relate to that resource.

    2. Continue to refine the filter by selecting Normal from the types drop-down.

    3. Filter the results by using the Message text field. Enter the started container text to retrieve the matching events.
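
      Note

      Similar filtering is possible from the command line with field selectors. The following command is a sketch that assumes a cluster administrator login, and approximates the message filter with grep.

      [student@workstation ~]$ oc get events -A \
        --field-selector type=Normal,involvedObject.kind=Pod | grep -i 'started container'
      ...output omitted...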

Finish

On the workstation machine, use the lab command to complete this exercise. This step is important to ensure that resources from previous exercises do not impact upcoming exercises.

[student@workstation ~]$ lab finish intro-monitor