Guided Exercise: Find and Inspect Container Images
Use a supported MySQL container image to run server and client pods in Kubernetes; also test two community images and compare their runtime requirements.
Outcomes
Locate and run container images from a container registry.
Inspect remote container images and container logs.
Set environment variables and override entry points for a container.
Access files and directories within a container.
As the student user on the workstation machine, use the lab command to prepare your system for this exercise. This command ensures that all resources are available for this exercise.
[student@workstation ~]$ lab start pods-images
Procedure 3.2. Instructions
Log in to the OpenShift cluster and create the pods-images project.

Log in to the OpenShift cluster as the developer user with the oc command.

[student@workstation ~]$ oc login -u developer -p developer \
  https://api.ocp4.example.com:6443
...output omitted...

Create the pods-images project.

[student@workstation ~]$ oc new-project pods-images
...output omitted...
Authenticate to registry.ocp4.example.com:8443, which is the classroom container registry. This private registry hosts copies and tags of community images from Docker and Bitnami, as well as some supported images from Red Hat. Use skopeo to log in as the developer user, and then retrieve a list of available tags for the registry.ocp4.example.com:8443/redhattraining/docker-nginx container repository.

Use the skopeo login command to log in as the developer user with the developer password.

[student@workstation ~]$ skopeo login registry.ocp4.example.com:8443
Username: developer
Password: developer
Login Succeeded!
The classroom registry contains a copy and specific tags of the docker.io/library/nginx container repository. Use the skopeo list-tags command to retrieve a list of available tags for the registry.ocp4.example.com:8443/redhattraining/docker-nginx container repository.

[student@workstation ~]$ skopeo list-tags \
  docker://registry.ocp4.example.com:8443/redhattraining/docker-nginx
{
    "Repository": "registry.ocp4.example.com:8443/redhattraining/docker-nginx",
    "Tags": [
        "1.23",
        "1.23-alpine",
        "1.23-perl",
        "1.23-alpine-perl",
        "latest"
    ]
}
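If you want plain tag names rather than the full JSON document, you can pipe the skopeo list-tags output through jq, the same tool that this exercise uses later for pod IP addresses. This optional sketch uses a sample JSON string standing in for live registry output, so you can try the filter without registry access; in the classroom you would pipe the skopeo command itself into jq instead.

```shell
# Sketch: extract bare tag names from skopeo list-tags JSON with jq.
# Live form (assumed classroom registry):
#   skopeo list-tags docker://registry.ocp4.example.com:8443/redhattraining/docker-nginx | jq -r '.Tags[]'
tags_json='{"Repository": "registry.ocp4.example.com:8443/redhattraining/docker-nginx",
            "Tags": ["1.23", "1.23-alpine", "1.23-perl", "1.23-alpine-perl", "latest"]}'
echo "$tags_json" | jq -r '.Tags[]'   # prints one tag per line
```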
Create a docker-nginx pod from the registry.ocp4.example.com:8443/redhattraining/docker-nginx:1.23 container image. Investigate any pod failures.

Use the oc run command to create the docker-nginx pod.

[student@workstation ~]$ oc run docker-nginx \
  --image registry.ocp4.example.com:8443/redhattraining/docker-nginx:1.23
pod/docker-nginx created
After a few moments, verify the status of the docker-nginx pod.

[student@workstation ~]$ oc get pods
NAME           READY   STATUS   RESTARTS   AGE
docker-nginx   0/1     Error    0          4s

[student@workstation ~]$ oc get pods
NAME           READY   STATUS             RESTARTS      AGE
docker-nginx   0/1     CrashLoopBackOff   2 (17s ago)   38s
The docker-nginx pod failed to start.

Investigate the pod failure. Retrieve the logs of the docker-nginx pod to identify a possible cause of the failure.

[student@workstation ~]$ oc logs docker-nginx
...output omitted...
/docker-entrypoint.sh: Configuration complete; ready for start up
2022/12/02 18:51:45 [warn] 1#1: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:2
nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:2
2022/12/02 18:51:45 [emerg] 1#1: mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied)
nginx: [emerg] mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied)
The pod failed to start because of permission issues with the nginx directories.

Create a debug pod for the docker-nginx pod.

[student@workstation ~]$ oc debug pod/docker-nginx
Starting pod/docker-nginx-debug ...
Pod IP: 10.8.0.72
If you don't see a command prompt, try pressing enter.
$
From the debug pod, verify the permissions of the /etc/nginx and /var/cache/nginx directories.

$ ls -la /etc/ | grep nginx
drwxr-xr-x. 3 root root 132 Nov 15 13:14 nginx

$ ls -la /var/cache | grep nginx
drwxr-xr-x. 2 root root 6 Oct 19 09:32 nginx
Only the root user has write permission to the nginx directories. The pod must therefore run as the privileged root user to work.

Retrieve the user ID (UID) of the container user in the docker-nginx pod to determine whether the user is a privileged or unprivileged account. Then, exit the debug pod.

$ whoami
1000820000

$ exit
Removing debug pod ...
Your UID value might differ from the previous output.
A UID greater than 0 means that the container's user is a non-root account. Recall that OpenShift default security policies prevent regular user accounts, such as the developer user, from running pods and their containers as privileged accounts.

Confirm that the docker-nginx:1.23 image requires the privileged root account. Use the skopeo inspect --config command to view the configuration for the image.

[student@workstation ~]$ skopeo inspect --config \
  docker://registry.ocp4.example.com:8443/redhattraining/docker-nginx:1.23
...output omitted...
    "config": {
        "ExposedPorts": {
            "80/tcp": {}
        },
        "Env": [
            "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
            "NGINX_VERSION=1.23.3",
            "NJS_VERSION=0.7.9",
            "PKG_RELEASE=1~bullseye"
        ],
        "Entrypoint": [
            "/docker-entrypoint.sh"
        ],
        "Cmd": [
            "nginx",
            "-g",
            "daemon off;"
        ],
        "Labels": {
            "maintainer": "NGINX Docker Maintainers \u003cdocker-maint@nginx.com\u003e"
        },
        "StopSignal": "SIGQUIT"
    },
...output omitted...
The image configuration does not define USER metadata, which confirms that the image must run as the privileged root user.

The docker-nginx:1.23 container image must run as the privileged root user. OpenShift security policies prevent regular cluster users, such as the developer user, from running containers as the root user. Delete the docker-nginx pod.

[student@workstation ~]$ oc delete pod docker-nginx
pod "docker-nginx" deleted
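The manual check for USER metadata can also be scripted. This hedged sketch applies jq to image configuration JSON; the sample here-string mimics the docker-nginx configuration shown earlier (which has no USER entry), so the sketch runs without registry access. In the classroom you would pipe the skopeo inspect --config output instead, as shown in the comment.

```shell
# Sketch: report whether an image config defines a USER.
# An empty or missing value means the container defaults to root.
# Live form (assumed classroom registry):
#   skopeo inspect --config docker://registry.ocp4.example.com:8443/redhattraining/docker-nginx:1.23 \
#     | jq -r '.config.User // empty'
config_json='{"config": {"ExposedPorts": {"80/tcp": {}}, "Entrypoint": ["/docker-entrypoint.sh"]}}'
user=$(echo "$config_json" | jq -r '.config.User // empty')
if [ -z "$user" ]; then
  echo "no USER set: image runs as root by default"
else
  echo "image runs as user: $user"
fi
```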
Create a bitnami-mysql pod, which uses a copy of the Bitnami community MySQL image. The image is available in the registry.ocp4.example.com:8443/redhattraining/bitnami-mysql container repository.

A copy and specific tags of the docker.io/bitnami/mysql container repository are hosted in the classroom registry. Use the skopeo list-tags command to identify available tags for the Bitnami MySQL community image in the registry.ocp4.example.com:8443/redhattraining/bitnami-mysql container repository.

[student@workstation ~]$ skopeo list-tags \
  docker://registry.ocp4.example.com:8443/redhattraining/bitnami-mysql
{
    "Repository": "registry.ocp4.example.com:8443/redhattraining/bitnami-mysql",
    "Tags": [
        "8.0.31",
        "8.0.30",
        "8.0.29",
        "8.0.28",
        "latest"
    ]
}
Retrieve the configuration of the bitnami-mysql:8.0.31 container image. Determine whether the image requires a privileged account by inspecting the image configuration for USER metadata.

[student@workstation ~]$ skopeo inspect --config \
  docker://registry.ocp4.example.com:8443/redhattraining/bitnami-mysql:8.0.31
...output omitted...
    "config": {
        "User": "1001",
        "ExposedPorts": {
            "3306/tcp": {}
        },
...output omitted...
The image defines the 1001 UID, which means that the image does not require a privileged account.

Create the bitnami-mysql pod with the oc run command. Use the registry.ocp4.example.com:8443/redhattraining/bitnami-mysql:8.0.31 container image. Then, wait a few moments and retrieve the pod's status with the oc get command.

[student@workstation ~]$ oc run bitnami-mysql \
  --image registry.ocp4.example.com:8443/redhattraining/bitnami-mysql:8.0.31
pod/bitnami-mysql created

[student@workstation ~]$ oc get pods
NAME            READY   STATUS             RESTARTS      AGE
bitnami-mysql   0/1     CrashLoopBackOff   2 (19s ago)   23s
The pod failed to start.
Examine the logs of the bitnami-mysql pod to determine the cause of the failure.

[student@workstation ~]$ oc logs bitnami-mysql
mysql 16:18:00.40
mysql 16:18:00.40 Welcome to the Bitnami mysql container
mysql 16:18:00.40 Subscribe to project updates by watching https://github.com/bitnami/containers
mysql 16:18:00.40 Submit issues and feature requests at https://github.com/bitnami/containers/issues
mysql 16:18:00.40
mysql 16:18:00.41 INFO ==> Starting MySQL setup
mysql 16:18:00.42 INFO ==> Validating settings in MYSQL_*/MARIADB_* env vars
mysql 16:18:00.42 ERROR ==> The MYSQL_ROOT_PASSWORD environment variable is empty or not set. Set the environment variable ALLOW_EMPTY_PASSWORD=yes to allow the container to be started with blank passwords. This is recommended only for development.
The MYSQL_ROOT_PASSWORD environment variable must be set for the pod to start.

Delete and then re-create the bitnami-mysql pod. Specify redhat123 as the value for the MYSQL_ROOT_PASSWORD environment variable. After a few moments, verify the status of the pod.

[student@workstation ~]$ oc delete pod bitnami-mysql
pod "bitnami-mysql" deleted

[student@workstation ~]$ oc run bitnami-mysql \
  --image registry.ocp4.example.com:8443/redhattraining/bitnami-mysql:8.0.31 \
  --env MYSQL_ROOT_PASSWORD=redhat123
pod/bitnami-mysql created

[student@workstation ~]$ oc get pods
NAME            READY   STATUS    RESTARTS   AGE
bitnami-mysql   1/1     Running   0          20s
The bitnami-mysql pod successfully started.

Determine the UID of the container user in the bitnami-mysql pod. Compare this value to the UID in the container image and to the UID range of the pods-images project.

[student@workstation ~]$ oc exec -it bitnami-mysql -- /bin/bash -c "whoami && id"
1000820000
uid=1000820000(1000820000) gid=0(root) groups=0(root),1000820000

[student@workstation ~]$ oc describe project pods-images
Name:         pods-images
...output omitted...
Annotations:  openshift.io/description=
...output omitted...
              openshift.io/sa.scc.supplemental-groups=1000820000/10000
              openshift.io/sa.scc.uid-range=1000820000/10000
...output omitted...
Your values for the UID of the container and the UID range of the project might differ from the previous output.
The container user UID falls within the UID range that the project annotations specify. Notice that the container user UID does not match the 1001 UID of the container image. For a container to use the UID that the container image specifies, the pod must be created with a privileged OpenShift user account, such as the admin user.
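Instead of scanning the oc describe output by eye, you can read the project's UID range annotation directly. This is an optional sketch: the commented oc command is one assumed way to query the live annotation, and the jq line applies the same idea to sample project JSON so the sketch runs offline.

```shell
# Live form (assumed): read the SCC uid-range annotation from the project object.
#   oc get project pods-images -o jsonpath='{.metadata.annotations.openshift\.io/sa\.scc\.uid-range}'
# Offline sketch with sample JSON mirroring the annotation shown above:
project_json='{"metadata": {"annotations": {"openshift.io/sa.scc.uid-range": "1000820000/10000"}}}'
range=$(echo "$project_json" | jq -r '.metadata.annotations["openshift.io/sa.scc.uid-range"]')
start=${range%/*}   # first UID in the range
size=${range#*/}    # number of UIDs in the range
echo "containers in this project run with UIDs starting at $start ($size UIDs available)"
```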
The private classroom registry hosts a copy of a supported MySQL image from Red Hat. Retrieve the list of available tags for the registry.ocp4.example.com:8443/rhel9/mysql-80 container repository. Compare the rhel9/mysql-80 container image release version that is associated with each tag.

Use the skopeo list-tags command to list the available tags for the rhel9/mysql-80 container image.

[student@workstation ~]$ skopeo list-tags \
  docker://registry.ocp4.example.com:8443/rhel9/mysql-80
{
    "Repository": "registry.ocp4.example.com:8443/rhel9/mysql-80",
    "Tags": [
        "1-237",
        "1-228",
        "1-228-source",
        "1-224",
        "1-224-source",
        "1",
        "latest"
    ]
}
Several tags are available:

The latest and 1 tags are floating tags, which are aliases to other tags, such as the 1-237 tag.

The 1-228 and 1-224 tags are fixed tags, which point to a specific build of a container image.

The 1-228-source and 1-224-source tags are source containers, which provide the necessary sources and license terms to rebuild and distribute the images.
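To separate runnable image tags from source-container tags, a simple grep over the tag list is enough. This optional sketch uses a fixed tag list copied from the output above rather than a live registry call, so it runs anywhere.

```shell
# Sketch: drop the "-source" tags to keep only tags you can run as containers.
all_tags='1-237
1-228
1-228-source
1-224
1-224-source
1
latest'
printf '%s\n' "$all_tags" | grep -v -- '-source$'
# prints: 1-237, 1-228, 1-224, 1, latest (one per line)
```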
Use the skopeo inspect command to compare the rhel9/mysql-80 container image release versions and SHA IDs that are associated with the identified tags.

NOTE
To improve readability, the instructions truncate the SHA-256 strings. On your system, the commands return the full SHA-256 strings.

[student@workstation ~]$ skopeo inspect \
  docker://registry.ocp4.example.com:8443/rhel9/mysql-80:latest
...output omitted...
    "Name": "registry.ocp4.example.com:8443/rhel9/mysql-80",
    "Digest": "sha256:d282...f38f",
...output omitted...
    "Labels":
...output omitted...
        "name": "rhel9/mysql-80",
        "release": "237",
...output omitted...
You can also format the output of the skopeo inspect command with a Go template. Append \n to the template objects to add new lines between the results.

[student@workstation ~]$ skopeo inspect --format \
  "Name: {{.Name}}\n Digest: {{.Digest}}\n Release: {{.Labels.release}}" \
  docker://registry.ocp4.example.com:8443/rhel9/mysql-80:latest
Name: registry.ocp4.example.com:8443/rhel9/mysql-80
 Digest: sha256:d282...f38f
 Release: 237

[student@workstation ~]$ skopeo inspect --format \
  "Name: {{.Name}}\n Digest: {{.Digest}}\n Release: {{.Labels.release}}" \
  docker://registry.ocp4.example.com:8443/rhel9/mysql-80:1
Name: registry.ocp4.example.com:8443/rhel9/mysql-80
 Digest: sha256:d282...f38f
 Release: 237

[student@workstation ~]$ skopeo inspect --format \
  "Name: {{.Name}}\n Digest: {{.Digest}}\n Release: {{.Labels.release}}" \
  docker://registry.ocp4.example.com:8443/rhel9/mysql-80:1-237
Name: registry.ocp4.example.com:8443/rhel9/mysql-80
 Digest: sha256:d282...f38f
 Release: 237
The latest, 1, and 1-237 tags resolve to the same release version and SHA ID. The latest and 1 tags are floating tags for the 1-237 fixed tag.
The classroom registry hosts a copy and certain tags of the registry.redhat.io/rhel9/mysql-80 container repository. Use the oc run command to create a rhel9-mysql pod from the registry.ocp4.example.com:8443/rhel9/mysql-80:1-237 container image. Verify the status of the pod and then inspect the container logs for any errors.

Create a rhel9-mysql pod with the registry.ocp4.example.com:8443/rhel9/mysql-80:1-237 container image.

[student@workstation ~]$ oc run rhel9-mysql \
  --image registry.ocp4.example.com:8443/rhel9/mysql-80:1-237
pod/rhel9-mysql created
After a few moments, retrieve the pod's status with the oc get command.

[student@workstation ~]$ oc get pods
NAME            READY   STATUS             RESTARTS      AGE
bitnami-mysql   1/1     Running            0             5m16s
rhel9-mysql     0/1     CrashLoopBackOff   2 (29s ago)   49s
The pod failed to start.
Retrieve the logs of the rhel9-mysql pod to determine why the pod failed.

[student@workstation ~]$ oc logs rhel9-mysql
=> sourcing 20-validate-variables.sh ...
You must either specify the following environment variables:
  MYSQL_USER (regex: '^[a-zA-Z0-9_]+$')
  MYSQL_PASSWORD (regex: '^[a-zA-Z0-9_~!@#$%^&*()-=<>,.?;:|]+$')
  MYSQL_DATABASE (regex: '^[a-zA-Z0-9_]+$')
Or the following environment variable:
  MYSQL_ROOT_PASSWORD (regex: '^[a-zA-Z0-9_~!@#$%^&*()-=<>,.?;:|]+$')
Or both.
Optional Settings:
  MYSQL_LOWER_CASE_TABLE_NAMES (default: 0)
...output omitted...
The pod failed because the required environment variables were not set for the container.
Delete the rhel9-mysql pod. Create another rhel9-mysql pod and specify the necessary environment variables. Retrieve the status of the pod and inspect the container logs to confirm that the new pod is working.

Delete the rhel9-mysql pod with the oc delete command. Wait for the pod to be deleted before continuing to the next step.

[student@workstation ~]$ oc delete pod rhel9-mysql
pod "rhel9-mysql" deleted
Create another rhel9-mysql pod from the registry.ocp4.example.com:8443/rhel9/mysql-80:1-237 container image. Use the oc run command with the --env option to specify the following environment variables and their values:

| Variable       | Value     |
| -------------- | --------- |
| MYSQL_USER     | redhat    |
| MYSQL_PASSWORD | redhat123 |
| MYSQL_DATABASE | worldx    |

[student@workstation ~]$ oc run rhel9-mysql \
  --image registry.ocp4.example.com:8443/rhel9/mysql-80:1-237 \
  --env MYSQL_USER=redhat \
  --env MYSQL_PASSWORD=redhat123 \
  --env MYSQL_DATABASE=worldx
pod/rhel9-mysql created
After a few moments, retrieve the status of the rhel9-mysql pod with the oc get command. View the container logs to confirm that the database on the rhel9-mysql pod is ready to accept connections.

[student@workstation ~]$ oc get pods
NAME            READY   STATUS    RESTARTS   AGE
bitnami-mysql   1/1     Running   0          10m
rhel9-mysql     1/1     Running   0          20s

[student@workstation ~]$ oc logs rhel9-mysql
...output omitted...
2022-11-02T20:14:14.333599Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/lib/mysql/mysqlx.sock
2022-11-02T20:14:14.333641Z 0 [System] [MY-010931] [Server] /usr/libexec/mysqld: ready for connections. Version: '8.0.30' socket: '/var/lib/mysql/mysql.sock' port: 3306 Source distribution.
The rhel9-mysql pod is ready to accept connections.

Determine the location of the MySQL database files for the rhel9-mysql pod. Confirm that the directory contains the worldx database.

Use the oc image info command to inspect the rhel9/mysql-80:1-237 image in the registry.ocp4.example.com:8443 classroom registry.

[student@workstation ~]$ oc image info \
  registry.ocp4.example.com:8443/rhel9/mysql-80:1-237
Name:          registry.ocp4.example.com:8443/rhel9/mysql-80:1-237
...output omitted...
Command:       run-mysqld
Working Dir:   /opt/app-root/src
User:          27
Exposes Ports: 3306/tcp
Environment:   container=oci
               STI_SCRIPTS_URL=image:///usr/libexec/s2i
               STI_SCRIPTS_PATH=/usr/libexec/s2i
               APP_ROOT=/opt/app-root
               PATH=/opt/app-root/src/bin:/opt/app-root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
               PLATFORM=el9
               MYSQL_VERSION=8.0
               APP_DATA=/opt/app-root/src
               HOME=/var/lib/mysql
The container manifest sets the HOME environment variable for the container user to the /var/lib/mysql directory.

Use the oc exec command to list the contents of the /var/lib/mysql directory.

[student@workstation ~]$ oc exec -it rhel9-mysql -- ls -la /var/lib/mysql
total 12
drwxrwxr-x. 1 mysql root   102 Nov  2 20:41 .
drwxr-xr-x. 1 root  root    19 Oct 24 18:47 ..
drwxrwxr-x. 1 mysql root  4096 Nov  2 20:54 data
srwxrwxrwx. 1 mysql mysql    0 Nov  2 20:41 mysql.sock
-rw-------. 1 mysql mysql    2 Nov  2 20:41 mysql.sock.lock
srwxrwxrwx. 1 mysql mysql    0 Nov  2 20:41 mysqlx.sock
-rw-------. 1 mysql mysql    2 Nov  2 20:41 mysqlx.sock.lock
A data directory exists within the /var/lib/mysql directory.

Use the oc exec command again to list the contents of the /var/lib/mysql/data directory.

[student@workstation ~]$ oc exec -it rhel9-mysql \
  -- ls -la /var/lib/mysql/data | grep worldx
drwxr-x---. 2 mysql mysql 6 Nov  2 20:41 worldx
The /var/lib/mysql/data directory contains a worldx directory, which holds the worldx database.
Determine the IP address of the rhel9-mysql pod. Next, create another MySQL pod, named mysqlclient, to access the rhel9-mysql pod. Confirm that the mysqlclient pod can view the available databases on the rhel9-mysql pod with the mysqlshow command.

Identify the IP address of the rhel9-mysql pod.

[student@workstation ~]$ oc get pods rhel9-mysql -o json | jq .status.podIP
"10.8.0.109"
Note the IP address. Your IP address might differ from the previous output.
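The jq pipeline is one of several ways to pull a single field out of a pod object. This optional sketch shows the jq filter with the -r flag, which strips the surrounding quotes so the value can be stored in a shell variable; the sample pod JSON stands in for live oc output so the sketch runs offline, and the commented jsonpath form is an assumed alternative.

```shell
# Alternative live form (assumed): let oc extract the field itself.
#   oc get pod rhel9-mysql -o jsonpath='{.status.podIP}'
# Offline sketch: the exercise's jq filter, with -r to drop the JSON quotes.
pod_json='{"status": {"podIP": "10.8.0.109"}}'
mysql_ip=$(echo "$pod_json" | jq -r '.status.podIP')
echo "$mysql_ip"   # prints 10.8.0.109, unquoted
```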
Use the oc run command to create a pod named mysqlclient that uses the registry.ocp4.example.com:8443/rhel9/mysql-80:1-237 container image. Set the value of the MYSQL_ROOT_PASSWORD environment variable to redhat123, and then confirm that the pod is running.

[student@workstation ~]$ oc run mysqlclient \
  --image registry.ocp4.example.com:8443/rhel9/mysql-80:1-237 \
  --env MYSQL_ROOT_PASSWORD=redhat123
pod/mysqlclient created

[student@workstation ~]$ oc get pods
NAME            READY   STATUS    RESTARTS   AGE
bitnami-mysql   1/1     Running   0          15m
mysqlclient     1/1     Running   0          19s
rhel9-mysql     1/1     Running   0          5m
Use the oc exec command with the -it options to execute the mysqlshow command on the mysqlclient pod. Connect as the redhat user and specify the host as the IP address of the rhel9-mysql pod. When prompted, enter redhat123 as the password.

[student@workstation ~]$ oc exec -it mysqlclient \
  -- mysqlshow -u redhat -p -h 10.8.0.109
Enter password: redhat123
+--------------------+
|     Databases      |
+--------------------+
| information_schema |
| performance_schema |
| worldx             |
+--------------------+
The worldx database on the rhel9-mysql pod is accessible to the mysqlclient pod.
Finish
On the workstation machine, use the lab command to complete this exercise. This step is important to ensure that resources from previous exercises do not impact upcoming exercises.
[student@workstation ~]$ lab finish pods-images