Guided Exercise: Introduction to Linux Containers and Kubernetes Pods

Run a base OS container in a pod and compare the environment inside the container with its host node.

Outcomes

  • Create a pod with a single container, and identify the pod and its container within the container engine of an OpenShift node.

  • View the logs of a running container.

  • Retrieve information inside a container, such as the operating system (OS) release and running processes.

  • Identify the process ID (PID) and namespaces for a container.

  • Identify the User ID (UID) and supplemental group ID (GID) ranges of a project.

  • Compare the namespaces of containers in one pod versus in another pod.

  • Inspect a pod with multiple containers, and identify the purpose of each container.

As the student user on the workstation machine, use the lab command to prepare your system for this exercise.

This command ensures that all resources are available for this exercise.

[student@workstation ~]$ lab start pods-containers

Procedure 3.1. Instructions

  1. Log in to the OpenShift cluster and create the pods-containers project. Determine the UID and GID ranges for pods in the pods-containers project.

    1. Log in to the OpenShift cluster as the developer user with the oc command.

       [student@workstation ~]$ oc login -u developer -p developer \
         https://api.ocp4.example.com:6443
       Login successful.
       ...output omitted...
      
    2. Create the pods-containers project.

       [student@workstation ~]$ oc new-project pods-containers
       Now using project "pods-containers" on server "https://api.ocp4.example.com:6443".
       ...output omitted...
      
    3. Identify the UID and GID ranges for pods in the pods-containers project.

       [student@workstation ~]$ oc describe project pods-containers
       Name:             pods-containers
       Created:          28 seconds ago
       Labels:           kubernetes.io/metadata.name=pods-containers
                         pod-security.kubernetes.io/audit=restricted
                         pod-security.kubernetes.io/audit-version=v1.24
                         pod-security.kubernetes.io/warn=restricted
                         pod-security.kubernetes.io/warn-version=v1.24
       Annotations:      openshift.io/description=
                         openshift.io/display-name=
                         openshift.io/requester=developer
                         openshift.io/sa.scc.mcs=s0:c28,c22
                         openshift.io/sa.scc.supplemental-groups=1000800000/10000
                         openshift.io/sa.scc.uid-range=1000800000/10000
       Display Name:     <none>
       Description:      <none>
       Status:           Active
       Node Selector:    <none>
       Quota:            <none>
       Resource limits:  <none>
      

      Your UID and GID range values might differ from the previous output.
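
      If you prefer a one-line query, the UID range is also readable directly from the project annotation shown in the output above. A minimal sketch with the built-in jsonpath output format (the escaped dots are required for annotation keys):

       [student@workstation ~]$ oc get project pods-containers \
         -o jsonpath='{.metadata.annotations.openshift\.io/sa\.scc\.uid-range}'
       1000800000/10000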

  2. As the developer user, create a pod called ubi9-user from a UBI9 base container image. The image is available in the registry.ocp4.example.com:8443/ubi9/ubi container registry. Set the restart policy to Never and start an interactive session. Configure the pod to execute the whoami and id commands to determine the UIDs, supplemental groups, and GIDs of the container user in the pod. Delete the pod afterward.

    After the ubi9-user pod is deleted, log in as the admin user and then re-create the ubi9-user pod. Retrieve the UIDs and GIDs of the container user. Compare the values to the values of the ubi9-user pod that the developer user created.

    Afterward, delete the ubi9-user pod.

    1. Use the oc run command to create the ubi9-user pod. Configure the pod to execute the whoami and id commands through an interactive bash shell session.

       [student@workstation ~]$ oc run -it ubi9-user --restart 'Never' \
         --image registry.ocp4.example.com:8443/ubi9/ubi \
         -- /bin/bash -c "whoami && id"
       1000800000
       uid=1000800000(1000800000) gid=0(root) groups=0(root),1000800000
      

      Your values might differ from the previous output.

      Notice that the user in the container has the same UID that is identified in the pods-containers project. However, the GID of the user in the container is 0, which means that the user belongs to the root group. Any files and directories that the container processes write to must therefore be readable and writable by GID=0 and owned by the root group.

      Although the user in the container belongs to the root group, a UID value over 1000 means that the user is an unprivileged account. When a regular OpenShift user, such as the developer user, creates a pod, the containers within the pod run as unprivileged accounts.

    2. Delete the pod.

       [student@workstation ~]$ oc delete pod ubi9-user
       pod "ubi9-user" deleted
      
    3. Log in as the admin user with the redhatocp password.

       [student@workstation ~]$ oc login -u admin -p redhatocp
       Login successful.
      
       You have access to 71 projects, the list has been suppressed. You can list all projects with 'oc projects'
      
       Using project "pods-containers".
      
    4. Re-create the ubi9-user pod as the admin user. Configure the pod to execute the whoami and id commands through an interactive bash shell session. Compare the values of the UID and GID for the container user to the values of the ubi9-user pod that the developer user created.

      NOTE

      It is safe to ignore pod security warnings for exercises in this course. OpenShift uses the Security Context Constraints controller to provide safe defaults for pod security.

       [student@workstation ~]$ oc run -it ubi9-user --restart 'Never' \
         --image registry.ocp4.example.com:8443/ubi9/ubi \
         -- /bin/bash -c "whoami && id"
       Warning: would violate PodSecurity "restricted:v1.24": allowPrivilegeEscalation != false (container "ubi9-user" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "ubi9-user" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "ubi9-user" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "ubi9-user" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
       root
       uid=0(root) gid=0(root) groups=0(root)
      

      Notice that the value of the UID is 0, which is outside the UID range of the pods-containers project. The user in the container is the privileged root user and belongs to the root group. When a cluster administrator creates a pod, the containers within the pod run as a privileged account by default.
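
      The warning text itself lists the securityContext fields that the restricted profile expects. As an illustration only, a minimal pod manifest sketch that sets those fields explicitly; the field names and values are taken directly from the warning, and the pod name and image follow this exercise:

       apiVersion: v1
       kind: Pod
       metadata:
         name: ubi9-user
       spec:
         restartPolicy: Never
         containers:
         - name: ubi9-user
           image: registry.ocp4.example.com:8443/ubi9/ubi
           command: ["/bin/bash", "-c", "whoami && id"]
           securityContext:
             runAsNonRoot: true                # from the warning
             allowPrivilegeEscalation: false   # from the warning
             capabilities:
               drop: ["ALL"]                   # from the warning
             seccompProfile:
               type: RuntimeDefault            # from the warning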

    5. Delete the ubi9-user pod.

       [student@workstation ~]$ oc delete pod ubi9-user
       pod "ubi9-user" deleted
      
  3. As the developer user, use the oc run command to create a pod named ubi9-date from a UBI9 base container image. The image is available in the registry.ocp4.example.com:8443/ubi9/ubi container registry. Set the restart policy to Never, and configure the pod to execute the date command. Retrieve the logs of the ubi9-date pod to confirm that the date command executed. Delete the pod afterward.

    1. Log in as the developer user with the developer password.

       [student@workstation ~]$ oc login -u developer -p developer
       Login successful.
      
       You have one project on this server: "pods-containers"
      
       Using project "pods-containers".
      
    2. Create a pod called ubi9-date that executes the date command.

       [student@workstation ~]$ oc run ubi9-date --restart 'Never' \
         --image registry.ocp4.example.com:8443/ubi9/ubi -- date
       pod/ubi9-date created
      
    3. Wait a few moments for the creation of the pod. Then, retrieve the logs of the ubi9-date pod.

       [student@workstation ~]$ oc logs ubi9-date
       Mon Nov 28 15:02:55 UTC 2022
      
    4. Delete the ubi9-date pod.

       [student@workstation ~]$ oc delete pod ubi9-date
       pod "ubi9-date" deleted
      
  4. Use the oc run command to create a ubi9-command pod with the registry.ocp4.example.com:8443/ubi9/ubi container image. Start an interactive shell to access the container. Use the shell to execute commands from within the container.

    1. Create a pod called ubi9-command and start an interactive shell.

       [student@workstation ~]$ oc run ubi9-command -it \
         --image registry.ocp4.example.com:8443/ubi9/ubi -- /bin/bash
       If you don't see a command prompt, try pressing enter.
       bash-5.1$
      
    2. Execute the date command.

       bash-5.1$ date
       Mon Nov 28 15:05:47 UTC 2022
      
    3. Exit the shell session.

       bash-5.1$ exit
       exit
       Session ended, resume using 'oc attach ubi9-command -c ubi9-command -i -t' command when the pod is running
      
  5. View the logs for the ubi9-command pod with the oc logs command. Then, connect to the ubi9-command pod and issue the following command:

     while true; do echo $(date); sleep 2; done
    

    This command executes the date and sleep commands to generate output to the console every two seconds. Retrieve the logs of the ubi9-command pod again to confirm that the logs display the output of the executed command.

    1. Use the oc logs command to view the logs of the ubi9-command pod.

       [student@workstation ~]$ oc logs ubi9-command
       bash-5.1$ [student@workstation ~]$
      

      The pod's command prompt is returned. The oc logs command displays the pod's current stdout and stderr output in the console. Because you disconnected from the interactive session, the pod's current stdout is the command prompt, and not the commands that you executed previously.
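
      To stream output as the container writes it, rather than taking a one-time snapshot, you can add the -f (follow) option and press Ctrl+C to stop:

       [student@workstation ~]$ oc logs -f ubi9-command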

    2. Use the oc attach command to connect to the ubi9-command pod again. In the shell, execute the while true; do echo $(date); sleep 2; done command to continuously generate stdout output.

       [student@workstation ~]$ oc attach ubi9-command -it
       If you don't see a command prompt, try pressing enter.
      
       bash-5.1$ while true; do echo $(date); sleep 2; done
       Mon Nov 28 15:15:16 UTC 2022
       Mon Nov 28 15:15:18 UTC 2022
       Mon Nov 28 15:15:20 UTC 2022
       Mon Nov 28 15:15:22 UTC 2022
       ...output omitted...
      
    3. Open another terminal window and view the logs for the ubi9-command pod with the oc logs command. Limit the log output to the last 10 entries with the --tail option. Confirm that the logs display the results of the command that you executed in the container.

       [student@workstation ~]$ oc logs ubi9-command --tail=10
       Mon Nov 28 15:15:16 UTC 2022
       Mon Nov 28 15:15:18 UTC 2022
       Mon Nov 28 15:15:20 UTC 2022
       Mon Nov 28 15:15:22 UTC 2022
       Mon Nov 28 15:15:24 UTC 2022
       Mon Nov 28 15:15:26 UTC 2022
       Mon Nov 28 15:15:28 UTC 2022
       Mon Nov 28 15:15:30 UTC 2022
       Mon Nov 28 15:15:32 UTC 2022
       Mon Nov 28 15:15:34 UTC 2022
      
  6. Identify the name for the container in the ubi9-command pod. Identify the process ID (PID) for the container in the ubi9-command pod by using a debug pod for the pod's host node. Use the crictl command to identify the PID of the container in the ubi9-command pod. Then, retrieve the PID of the container in the debug pod.

    1. Identify the container name in the ubi9-command pod with the oc get command. Specify the JSON format for the command output. Parse the JSON output with the jq command to retrieve the value of the .status.containerStatuses[].name object.

       [student@workstation ~]$ oc get pod ubi9-command -o json | \
         jq .status.containerStatuses[].name
       "ubi9-command"
      

      The ubi9-command pod has a single container of the same name.
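
      If jq is not available, a jsonpath query returns the same value with oc alone:

       [student@workstation ~]$ oc get pod ubi9-command \
         -o jsonpath='{.status.containerStatuses[*].name}'
       ubi9-command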

    2. Find the host node for the ubi9-command pod. Start a debug pod for the host with the oc debug command.

       [student@workstation ~]$ oc get pods ubi9-command -o wide
       NAME           READY   STATUS    RESTARTS      AGE   IP          NODE       NOMINATED NODE   READINESS GATES
       ubi9-command   1/1     Running   2 (16m ago)   27m   10.8.0.26   master01   <none>           <none>
      
       [student@workstation ~]$ oc debug node/master01
       Error from server (Forbidden): nodes "master01" is forbidden: User "developer" cannot get resource "nodes" in API group "" at the cluster scope
      

      The debug pod fails because the developer user does not have the required permission to debug a host node.
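
      You can verify the missing permission beforehand with the oc auth can-i command, which reports whether the current user can perform an action:

       [student@workstation ~]$ oc auth can-i get nodes
       no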

    3. Log in as the admin user with the redhatocp password. Start a debug pod for the host with the oc debug command. After connecting to the debug pod, run the chroot /host command to use host binaries, such as the crictl command-line tool.

       [student@workstation ~]$ oc login -u admin -p redhatocp
       Login successful.
       ...output omitted...
      
       [student@workstation ~]$ oc debug node/master01
       Starting pod/master01-debug ...
       To use host binaries, run `chroot /host`
       Pod IP: 192.168.50.10
       If you don't see a command prompt, try pressing enter
      
       sh-4.4# chroot /host
      
    4. Use the crictl ps command to retrieve the ubi9-command container ID. Specify the ubi9-command container with the --name option and use the JSON output format. Parse the JSON output with the jq command, using the -r option to get the raw value without quotes. Export the container ID as the $CID environment variable.

      NOTE

      When using jq without the -r flag, the container ID is wrapped in double quotes, which does not work with crictl commands. If the -r flag is not used, then you can add | tr -d '"' to the end of the command to trim the double quotes.

       sh-4.4# crictl ps --name ubi9-command -o json | jq -r .containers[0].id
       81adbc6222d79ed9ba195af4e9d36309c18bb71bc04b2e8b5612be632220e0d6
      
       sh-4.4# CID=$(crictl ps --name ubi9-command -o json | jq -r .containers[0].id)
      
       sh-4.4# echo $CID
       81adbc6222d79ed9ba195af4e9d36309c18bb71bc04b2e8b5612be632220e0d6
      

      Your container ID value might differ from the previous output.

    5. The crictl ps command works with container IDs, container names, and pod IDs, but not with pod names. Execute the crictl pods command to retrieve the pod ID of the master01-debug pod. Next, use the crictl ps command and the pod ID to retrieve the master01-debug pod container name. Then, use the crictl ps command and the container name to retrieve the container ID. Save the debug container ID as the $DCID environment variable.

       sh-4.4# crictl pods --name master01-debug
       POD ID          CREATED          STATE   NAME             NAMESPACE               ATTEMPT   RUNTIME
       cb066ee76b598   34 minutes ago   Ready   master01-debug   openshift-debug-bh7kn   0         (default)
      
       sh-4.4# crictl ps -p cb066ee76b598 -o json | jq -r .containers[0].metadata.name
       container-00
      
       sh-4.4# crictl ps --name container-00 -o json | jq -r .containers[0].id
       094f93339adc7d4053ede708c78be4dc155959ea78ebabe9573365b04cfa12f2
      
       sh-4.4# DCID=$(crictl ps --name container-00 -o json | jq -r .containers[0].id)
      

      Your pod ID and container ID values might differ from the previous output.
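
      As an optional shortcut, the same lookups can be chained in one line, assuming that your crictl build supports the -q (quiet) option, which prints only IDs:

       sh-4.4# DCID=$(crictl ps -q --pod $(crictl pods -q --name master01-debug))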

    6. Use the crictl inspect command to find the PID of the ubi9-command container and the container-00 container. The PID value is in the .info.pid object in the crictl inspect output. Export the ubi9-command container PID as the $PID environment variable. Export the container-00 container PID as the $DPID environment variable.

       sh-4.4# crictl inspect $CID | grep pid
           "pid": 365297,
                 "pids": {
                   "type": "pid"
       ...output omitted...
                 }
       ...output omitted...
      
       sh-4.4# PID=365297
      
       sh-4.4# crictl inspect -o json $DCID | jq .info.pid
       151115
      
       sh-4.4# DPID=151115
      

      Your PID values might differ from the previous output.
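
      Rather than copying the numbers by hand, each PID can be captured directly into its variable with the same inspect and jq combination:

       sh-4.4# PID=$(crictl inspect -o json $CID | jq .info.pid)
       sh-4.4# DPID=$(crictl inspect -o json $DCID | jq .info.pid)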

  7. Use the lsns command to list the system namespaces of the ubi9-command container and the container-00 container. Confirm that the running processes in the containers are isolated to different system namespaces.

    1. View the system namespaces of the ubi9-command container with the lsns command. Specify the PID with the -p option and use the $PID environment variable. In the resulting table, the NS column contains the namespace values for the container.

       sh-4.4# lsns -p $PID
               NS TYPE   NPROCS    PID USER       COMMAND
       4026531835 cgroup    540      1 root       /usr/lib/systemd/systemd --switched-root --system --deserialize 16
       4026531837 user      540      1 root       /usr/lib/systemd/systemd --switched-root --system --deserialize 16
       4026536117 uts         1 153168 1000800000 /bin/bash
       4026536118 ipc         1 153168 1000800000 /bin/bash
       4026536120 net         1 153168 1000800000 /bin/bash
       4026537680 mnt         1 153168 1000800000 /bin/bash
       4026537823 pid         1 153168 1000800000 /bin/bash
      

      Your namespace values might differ from the previous output.

    2. View the system namespaces of the debug pod container with the lsns command. Specify the PID with the -p option and use the $DPID environment variable. Compare the namespace values of the debug pod container with the values of the ubi9-command container.

       sh-4.4# lsns -p $DPID
               NS TYPE   NPROCS    PID USER COMMAND
       4026531835 cgroup    540      1 root /usr/lib/systemd/systemd --switched-root --system --deserialize 16
       4026531836 pid       373      1 root /usr/lib/systemd/systemd --switched-root --system --deserialize 16
       4026531837 user      540      1 root /usr/lib/systemd/systemd --switched-root --system --deserialize 16
       4026537928 ipc       339      1 root /usr/lib/systemd/systemd --switched-root --system --deserialize 16
       4026531992 net       430      1 root /usr/lib/systemd/systemd --switched-root --system --deserialize 16
       4026537824 uts         3 151115 root /bin/sh
       4026537929 mnt         3 151115 root /bin/sh
      

      Your namespace values might differ from the previous output.

      Notice that the namespaces for the ubi9-command container and the container-00 container PIDs are unique for each container. Namespaces provide process isolation for containers. Also, the cgroup and user values for both containers are the same, because both pods are running on the same host.
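
      As an optional cross-check, the same namespace identifiers are exposed as symbolic links under /proc. This sketch reuses the PID variables from the previous steps; the bracketed values match the NS column of the lsns output above and will differ on your system:

       sh-4.4# readlink /proc/$PID/ns/pid /proc/$DPID/ns/pid
       pid:[4026537823]
       pid:[4026531836]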

  8. Use the host debug pod to retrieve and compare the operating system (OS) and the GNU C Library (glibc) package version of the ubi9-command container and the host node.

    1. Retrieve the OS for the host node with the cat /etc/redhat-release command.

       sh-4.4# cat /etc/redhat-release
       Red Hat Enterprise Linux CoreOS release 4.12
      
    2. Use the crictl exec command and the $CID container ID variable to retrieve the OS of the ubi9-command container. Use the -it options to create an interactive terminal to execute the cat /etc/redhat-release command.

       sh-4.4# crictl exec -it $CID cat /etc/redhat-release
       Red Hat Enterprise Linux release 9.1 (Plow)
      

      The ubi9-command container has a different OS from the host node.

    3. Use the ldd --version command to retrieve the glibc package version of the host node.

       sh-4.4# ldd --version
       ldd (GNU libc) 2.28
       Copyright (C) 2018 Free Software Foundation, Inc.
       ...output omitted...
      
    4. Use the crictl exec command and the $CID container ID variable to retrieve the glibc package version of the ubi9-command container. Use the -it options to create an interactive terminal to execute the ldd --version command.

       sh-4.4# crictl exec -it $CID ldd --version
       ldd (GNU libc) 2.34
       Copyright (C) 2021 Free Software Foundation, Inc.
       ...output omitted...
      

      The ubi9-command container has a different version of the glibc package from its host.
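
      To see both release strings side by side, you can compare them in a single command, assuming that the debug shell is bash and supports process substitution:

       sh-4.4# diff <(cat /etc/redhat-release) <(crictl exec $CID cat /etc/redhat-release)
       1c1
       < Red Hat Enterprise Linux CoreOS release 4.12
       ---
       > Red Hat Enterprise Linux release 9.1 (Plow)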

  9. Use the crictl pods command to view details about the pod in the openshift-dns-operator namespace. Next, use the crictl ps command to retrieve the list of containers in the pod. Then, use the crictl inspect command to find the PID of a container in the pod. Finally, use the lsns and crictl exec commands to view the running processes and their namespaces in the container.

    1. Identify any pods in the openshift-dns-operator namespace.

       sh-4.4# crictl pods --namespace openshift-dns-operator
       POD ID         CREATED      STATE  NAME             NAMESPACE              ATTEMPT ...
       64765ebd09281  6 hours ago  Ready  dns-operator-... openshift-dns-operator 0       ...
      

      One dns-operator pod exists in the openshift-dns-operator namespace. Your pod ID value might differ from the previous output.

    2. Use the crictl ps command and the pod ID to view a list of containers in the dns-operator pod.

       sh-4.4# crictl ps -p 64765ebd09281
       CONTAINER     IMAGE       CREATED     STATE   NAME            ATTEMPT POD ID      POD
       d3518d8fe99d1 ce6e...e93b 6 hours ago Running kube-rbac-proxy 8       6476...9281 dns-operator-...
       868339f8eb510 b85a...c79a 6 hours ago Running dns-operator    8       6476...9281 dns-operator-...
      

      Two containers are running in the dns-operator pod: the kube-rbac-proxy container and the dns-operator container. Note the ID for each container. Your container ID values might differ from the previous output.
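
      To list only the container names, the same jq parsing from the earlier steps can iterate over the whole containers array:

       sh-4.4# crictl ps -p 64765ebd09281 -o json | jq -r '.containers[].metadata.name'
       kube-rbac-proxy
       dns-operator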

    3. Use the crictl inspect command and the container ID to retrieve the PID of the dns-operator container.

       sh-4.4# crictl inspect -o json 868339f8eb510 | jq .info.pid
       14577
      

      Your PID value might differ from the previous output.

    4. Use the lsns and crictl exec commands to view the running processes and their namespaces in the dns-operator container.

       sh-4.4# lsns -p 14577
               NS TYPE   NPROCS   PID USER      COMMAND
       4026531835 cgroup    530     1 root      /usr/lib/systemd/systemd --switched-root --system --deserialize 17
       4026531837 user      530     1 root      /usr/lib/systemd/systemd --switched-root --system --deserialize 17
       4026532604 uts         2 14577 nfsnobody dns-operator
       4026532605 ipc         2 14577 nfsnobody dns-operator
       4026532607 net         2 14577 nfsnobody dns-operator
       4026537710 mnt         1 14577 nfsnobody dns-operator
       4026537711 pid         1 14577 nfsnobody dns-operator
      
       sh-4.4# crictl exec -it 868339f8eb510 ps -ef
       UID          PID    PPID  C STIME TTY    TIME     CMD
       nobody         1       0  0 14:24 ?      00:00:11 dns-operator
       nobody        20       0  0 20:17 pts/0  00:00:00 ps -ef
      

      Your namespace values might differ from the previous output.

      The running processes of the dns-operator container are the dns-operator and ps -ef commands.

    5. Use the crictl inspect command and container ID to retrieve the PID of the kube-rbac-proxy container.

       sh-4.4# crictl pods --namespace openshift-dns-operator
       POD ID        CREATED     STATE  NAME             NAMESPACE              ATTEMPT  RUNTIME
       64765ebd09281 6 hours ago Ready  dns-operator-... openshift-dns-operator 0        (default)
      
       sh-4.4# crictl ps -p 64765ebd09281
       CONTAINER     IMAGE       CREATED     STATE   NAME            ATTEMPT POD ID      POD
       d3518d8fe99d1 ce6e...e93b 6 hours ago Running kube-rbac-proxy 8       6476...9281 dns-operator-...
       868339f8eb510 b85a...c79a 6 hours ago Running dns-operator    8       6476...9281 dns-operator-...
      
       sh-4.4# crictl inspect -o json d3518d8fe99d1 | jq .info.pid
       16408
      

      Your pod ID, container ID, and PID values might differ from the previous output.

    6. Use the lsns and crictl exec commands to view the running processes and their namespaces in the kube-rbac-proxy container.

       sh-4.4# lsns -p 16408
               NS TYPE   NPROCS   PID USER      COMMAND
       4026531835 cgroup    530     1 root      /usr/lib/systemd/systemd --switched-root --system --deserialize 17
       4026531837 user      530     1 root      /usr/lib/systemd/systemd --switched-root --system --deserialize 17
       4026532604 uts         2 14577 nfsnobody dns-operator
       4026532605 ipc         2 14577 nfsnobody dns-operator
       4026532607 net         2 14577 nfsnobody dns-operator
       4026537756 mnt         1 16408 nfsnobody /usr/bin/kube-rbac-proxy --logtostderr --secure-listen-address=...
       4026537770 pid         1 16408 nfsnobody /usr/bin/kube-rbac-proxy --logtostderr --secure-listen-address=...
      
       sh-4.4# crictl exec -it d3518d8fe99d1 ps -ef
       UID          PID    PPID  C STIME TTY    TIME     CMD
       nobody         1       0  0 14:24 ?      00:00:02 /usr/bin/kube-rbac-proxy --logtostderr --secure-listen-..._WITH
       nobody        21       0  0 20:19 pts/0  00:00:00 ps -ef
      

      Your namespace values might differ from the previous output.

      Because the kube-rbac-proxy container and the dns-operator container are in the same pod, the containers share some namespaces, such as the uts, ipc, and net namespaces. However, the processes of each container run in their own mnt and pid namespaces.
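
      A compact way to compare the two containers side by side is to limit the lsns output to the namespace ID and type columns, reusing the PIDs from the previous steps:

       sh-4.4# for p in 14577 16408; do echo "== PID $p"; lsns -p $p -o NS,TYPE; done
       == PID 14577
               NS TYPE
       4026531835 cgroup
       4026531837 user
       4026532604 uts
       ...output omitted...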

  10. Exit the master01-debug pod and the ubi9-command pod.

    1. Exit the master01-debug pod. You must issue the exit command to end the host binary access. Execute the exit command again to exit and remove the master01-debug pod.

       sh-4.4# exit
       exit
      
       sh-4.4# exit
       exit
      
       Removing debug pod ...
       Temporary namespace openshift-debug-bg7kn was removed.
      
    2. Return to the terminal window that is connected to the ubi9-command pod. Press Ctrl+C and then execute the exit command. Confirm that the pod is still running.

       ...output omitted...
       ^C
       bash-5.1$ exit
       exit
       Session ended, resume using 'oc attach ubi9-command -c ubi9-command -i -t' command when the pod is running
      
       [student@workstation ~]$ oc get pods
       NAME           READY   STATUS    RESTARTS     AGE
       ubi9-command   1/1     Running   2 (6s ago)   35m
      

Finish

On the workstation machine, use the lab command to complete this exercise. This step is important to ensure that resources from previous exercises do not impact upcoming exercises.

[student@workstation ~]$ lab finish pods-containers