Guided Exercise: Manage Long-lived and Short-lived Applications Using the Kubernetes Workload API



Use the Kubernetes command-line interface to deploy a batch application that a job resource manages and a database server that a deployment resource manages.


In this exercise, you deploy a database server and a batch application that are both managed by workload resources.

  • Create deployments.

  • Update environment variables on a pod template.

  • Create and run job resources.

  • Retrieve the logs and termination status of a job.

  • View the pod template of a job resource.

As the student user on the workstation machine, use the lab command to prepare your system for this exercise.

This command ensures that resources are available for the exercise.

[student@workstation ~]$ lab start deploy-workloads

Procedure 4.2. Instructions

  1. As the developer user, create a MySQL deployment in a new project.

    1. Log in as the developer user with the developer password.

       [student@workstation ~]$ oc login -u developer -p developer \
       ...output omitted...
    2. Create a project named deploy-workloads.

       [student@workstation ~]$ oc new-project deploy-workloads
       Now using project "deploy-workloads" on server "".
       ...output omitted...
    3. Create a deployment that runs an ephemeral MySQL server.

       [student@workstation ~]$ oc create deployment my-db \
       Warning: would violate PodSecurity "restricted:v1.24"
       ...output omitted...
       deployment.apps/my-db created


      It is safe to ignore pod security warnings for exercises in this course. OpenShift uses the Security Context Constraints controller to provide safe defaults for pod security.
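      The oc create deployment command generates a deployment manifest roughly like the following sketch. The image value is elided in the exercise command and is shown as a placeholder, and the container name is an assumption for illustration; the app=my-db label matches the label used later in this exercise.

      ```yaml
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: my-db
        labels:
          app: my-db
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: my-db
        template:
          metadata:
            labels:
              app: my-db
          spec:
            containers:
            - name: my-db          # container name assumed for illustration
              image: <image>       # the image value is elided in the exercise
      ```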

    4. Retrieve the status of the deployment.

       [student@workstation ~]$ oc get deployments
       NAME    READY   UP-TO-DATE   AVAILABLE   AGE
       my-db   0/1     1            0           67s

      The deployment never reports a ready replica.

    5. Retrieve the status of the created pod. Your pod name might differ from the output.

       [student@workstation ~]$ oc get pods
       NAME                     READY   STATUS             RESTARTS      AGE
       my-db-8567b478dd-d28f7   0/1     CrashLoopBackOff   4 (60s ago)   2m35s

      The pod fails to start and repeatedly crashes.

    6. Review the logs for the pod to determine why it fails to start.

       [student@workstation ~]$ oc logs deploy/my-db
       ...output omitted...
       You must either specify the following environment variables:
         MYSQL_USER (regex: '^$')
         MYSQL_PASSWORD (regex: '^[a-zA-Z0-9_~!@#$%^&*()-=<>,.?;:|]$')
         MYSQL_DATABASE (regex: '^$')
       Or the following environment variable:
         MYSQL_ROOT_PASSWORD (regex: '^[a-zA-Z0-9_~!@#$%^&*()-=<>,.?;:|]$')
       ...output omitted...

      Note that the container fails to start because required environment variables are not set.

  2. Fix the database deployment and verify that the server is running.

    1. Set the required MySQL environment variables on the deployment.

       [student@workstation ~]$ oc set env deployment/my-db \
         MYSQL_USER=developer \
         MYSQL_PASSWORD=developer \
       Warning: would violate PodSecurity "restricted:v1.24":
       ...output omitted...
       deployment.apps/my-db updated
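
      The oc set env command adds the variables to the deployment's pod template. The resulting fragment looks roughly like this sketch (the container name is an assumption for illustration):

      ```yaml
      spec:
        template:
          spec:
            containers:
            - name: my-db        # container name assumed for illustration
              env:
              - name: MYSQL_USER
                value: developer
              - name: MYSQL_PASSWORD
                value: developer
      ```

      Because the pod template changed, the deployment rolls out a new pod with the updated environment.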
    2. Retrieve the list of deployments and observe that the my-db deployment has a running pod.

       [student@workstation ~]$ oc get deployments
       NAME    READY   UP-TO-DATE   AVAILABLE   AGE
       my-db   1/1     1            1           4m50s
    3. Retrieve the internal IP address of the MySQL pod within the list of all pods.

       [student@workstation ~]$ oc get pods -o wide
       NAME                     READY   STATUS    RESTARTS   AGE   IP        ...
       my-db-748c97d478-g8xc9   1/1     Running   0          64s ...

      The -o wide option adds columns to the output, such as the pod IP address. Your IP address value might differ from the previous output.

    4. Verify that the database server is running by executing a query against it. Replace the host IP address with the value that you retrieved in the preceding step.

       [student@workstation ~]$ oc run -it db-test --restart=Never \
         --image \
         -- mysql sampledb -h -u developer --password=developer \
         -e "select 1;"
       ...output omitted...
       | 1 |
       | 1 |
  3. Delete the database server pod and observe that the deployment causes the pod to be re-created.

    1. Delete the existing MySQL pod by using the label that is associated with the deployment.

       [student@workstation ~]$ oc delete pod -l app=my-db
       pod "my-db-84c8995d5-2sssl" deleted
    2. Retrieve the information for the MySQL pod and observe that it is newly created. Your pod name might differ in your output.

       [student@workstation ~]$ oc get pod -l app=my-db
       NAME                    READY   STATUS    RESTARTS   AGE
       my-db-fbccb9447-p99jd   1/1     Running   0          6s
  4. Create and apply a job resource that prints the time and date repeatedly.

    1. Create a job resource called date-loop that runs a script. Ignore the warning.

       [student@workstation ~]$ oc create job date-loop \
         --image \
         -- /bin/bash -c "for i in {1..30}; do date; done"
       Warning: would violate PodSecurity "restricted:v1.24":
       ...output omitted...
       job.batch/date-loop created
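
      The script that the job executes is plain Bash, so you can preview it locally before creating the job:

      ```shell
      # The same loop that the job runs: print the current date and time 30 times.
      for i in {1..30}; do date; done
      ```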
    2. Retrieve the job resource to review the pod specification.

       [student@workstation ~]$ oc get job date-loop -o yaml
       ...output omitted...
             - command: 
               - /bin/bash
               - -c
               - for i in {1..30}; do date; done
               imagePullPolicy: Always
               name: date-loop
               resources: {}
               terminationMessagePath: /dev/termination-log
               terminationMessagePolicy: File
             dnsPolicy: ClusterFirst
             restartPolicy: Never 
             schedulerName: default-scheduler
             securityContext: {}
             terminationGracePeriodSeconds: 30
       ...output omitted...

      In this pod template, the command list specifies the script to execute within the pod, and the image field (elided in this output) sets the container image for the pod. The restartPolicy field is set to Never, so Kubernetes does not restart the job pod after the pod exits.
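
      Taken together, these fields mean that the oc create job command produced a manifest roughly equivalent to this sketch (the image value is elided in the exercise and shown as a placeholder):

      ```yaml
      apiVersion: batch/v1
      kind: Job
      metadata:
        name: date-loop
      spec:
        template:
          spec:
            containers:
            - name: date-loop
              image: <image>     # elided in the exercise command
              command:
              - /bin/bash
              - -c
              - for i in {1..30}; do date; done
            restartPolicy: Never   # do not restart the pod after it exits
      ```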

    3. List the jobs to see that the date-loop job completed successfully.

       [student@workstation ~]$ oc get jobs
       NAME        COMPLETIONS   DURATION   AGE
       date-loop   1/1           7s         8s

      You might need to wait for the script to finish and run the command again.

    4. Retrieve the logs for the associated pod. The log values might differ in your output.

       [student@workstation ~]$ oc logs job/date-loop
       Fri Nov 18 14:50:56 UTC 2022
       Fri Nov 18 14:50:59 UTC 2022
       ...output omitted...
  5. Delete the pod for the date-loop job and observe that the pod is not created again.

    1. Delete the associated pod.

       [student@workstation ~]$ oc delete pod -l job-name=date-loop
       pod "date-loop-wvn2q" deleted
    2. View the list of pods and observe that the pod is not re-created for the job.

       [student@workstation ~]$ oc get pod -l job-name=date-loop
       No resources found in deploy-workloads namespace.
    3. Verify that the job status is still listed as successfully completed.

       [student@workstation ~]$ oc get job -l job-name=date-loop
       NAME        COMPLETIONS   DURATION   AGE
       date-loop   1/1           7s         7m36s


On the workstation machine, use the lab command to complete this exercise. This step is important to ensure that resources from previous exercises do not impact upcoming exercises.

[student@workstation ~]$ lab finish deploy-workloads