Guided Exercise: Manage non-Shared Storage with Stateful Sets

Deploy a replicated web server by using a deployment, and verify that all web server pods share a single PV. Then, deploy a replicated MySQL database by using a stateful set, and verify that each database instance gets a dedicated PV.

Outcomes

In this exercise, you deploy a web server with a shared persistent volume between the replicas, and a database server from a stateful set with dedicated persistent volumes for each instance.

  • Deploy a web server with persistent storage.

  • Add data to the persistent storage.

  • Scale the web server deployment and observe the data that is shared with the replicas.

  • Create a database server with a stateful set by using a YAML manifest file.

  • Verify that each instance from the stateful set has a persistent volume claim.

As the student user on the workstation machine, use the lab command to prepare your system for this exercise.

This command ensures that all resources are available for this exercise.

[student@workstation ~]$ lab start storage-statefulsets

Procedure 5.4. Instructions

  1. Create a web server deployment named web-server. Use the registry.ocp4.example.com:8443/redhattraining/hello-world-nginx:latest container image.

    1. Log in to the OpenShift cluster as the developer user with the developer password.

       [student@workstation ~]$ oc login -u developer -p developer \
       https://api.ocp4.example.com:6443
       ...output omitted...
      
    2. Change to the storage-statefulsets project.

       [student@workstation ~]$ oc project storage-statefulsets
       Now using project "storage-statefulsets" on server ...output omitted...
      
    3. Create the web-server deployment. Ignore the warning message.

       [student@workstation ~]$ oc create deployment web-server \
         --image registry.ocp4.example.com:8443/redhattraining/hello-world-nginx:latest
       Warning: would violate PodSecurity "restricted:v1.24":
       ...output omitted...
       deployment.apps/web-server created
      
    4. Verify the deployment status.

       [student@workstation ~]$ oc get pods -l app=web-server
       NAME                          READY   STATUS    RESTARTS  AGE
       web-server-7d7cb4cdc7-t7hx8   1/1     Running   0         4s
      
  2. Add the web-pv persistent volume to the web-server deployment. Use the default storage class and the following information to create the persistent volume:

    | Field | Value |
    | --- | --- |
    | Name | web-pv |
    | Type | persistentVolumeClaim |
    | Claim mode | rwo |
    | Claim size | 5Gi |
    | Mount path | /var/www/html |
    | Claim name | web-pv-claim |

    1. Add the web-pv persistent volume to the web-server deployment. Ignore the warning message.

       [student@workstation ~]$ oc set volumes deployment/web-server \
         --add --name web-pv --type persistentVolumeClaim --claim-mode rwo \
         --claim-size 5Gi --mount-path /var/www/html --claim-name web-pv-claim
       Warning: would violate PodSecurity "restricted:v1.24":
       ...output omitted...
       deployment.apps/web-server volume updated
      

      Because a storage class was not specified with the --claim-class option, the command uses the default storage class to create the persistent volume.
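      As a supplementary check that is not part of the graded exercise, you can confirm which storage class is the default before relying on it:

```shell
# List the storage classes in the cluster; the default class is
# marked "(default)" next to its name in the NAME column.
oc get storageclass
```

      In this classroom environment, the default class is expected to be nfs-storage.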

    2. Verify the deployment status. Notice that a new pod is created.

       [student@workstation ~]$ oc get pods -l app=web-server
       NAME                          READY   STATUS   RESTARTS  AGE
       web-server-64689877c6-qgsvt   1/1     Running  0         5s
      
    3. Verify the persistent volume status.

       [student@workstation ~]$ oc get pvc
       NAME           STATUS   VOLUME          CAPACITY   ACCESS MODES   STORAGECLASS   AGE
       web-pv-claim   Bound    pvc-42...63ab   5Gi        RWO            nfs-storage    29s
      

      The default storage class, nfs-storage, provisioned the persistent volume.
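      Optionally, you can inspect the claim in more detail to see the storage class and provisioning events that the default class applied:

```shell
# Describe the claim to see its storage class, access mode,
# and the dynamic-provisioning events.
oc describe pvc web-pv-claim
```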

  3. Add data to the PV by using the exec command.

    1. List pods to retrieve the web-server pod name.

       [student@workstation ~]$ oc get pods
       NAME                          READY   STATUS    RESTARTS   AGE
       web-server-64689877c6-mdr6f   1/1     Running   0          17m
      

      The pod name might differ in your output.

    2. Use the oc exec command to write a message that contains the pod name to the /var/www/html/index.html file on the pod. Then, retrieve the contents of the /var/www/html/index.html file to confirm that the pod name is in the file.

       [student@workstation ~]$ oc exec -it pod/web-server-64689877c6-mdr6f \
         -- /bin/bash -c \
         'echo "Hello, World from ${HOSTNAME}" > /var/www/html/index.html'
      
       [student@workstation ~]$ oc exec -it pod/web-server-64689877c6-mdr6f \
         -- cat /var/www/html/index.html
       Hello, World from web-server-64689877c6-mdr6f
      
  4. Scale the web-server deployment to two replicas and confirm that an additional pod is created.

    1. Scale the web-server deployment to two replicas.

       [student@workstation ~]$ oc scale deployment web-server --replicas 2
       deployment.apps/web-server scaled
      
    2. Verify the replica status and retrieve the pod names.

       [student@workstation ~]$ oc get pods
       NAME                          READY   STATUS    RESTARTS   AGE
       web-server-64689877c6-mbj6g   1/1     Running   0          2s
       web-server-64689877c6-mdr6f   1/1     Running   0          17m
      

      The pod names might differ in your output.

  5. Retrieve the content of the /var/www/html/index.html file on the web-server pods by using the oc exec command to verify that the file is the same in both pods.

    1. Verify that the /var/www/html/index.html file is the same in both pods.

       [student@workstation ~]$ oc exec -it pod/web-server-64689877c6-mbj6g \
         -- cat /var/www/html/index.html
       Hello, World from web-server-64689877c6-mdr6f
      
       [student@workstation ~]$ oc exec -it pod/web-server-64689877c6-mdr6f \
         -- cat /var/www/html/index.html
       Hello, World from web-server-64689877c6-mdr6f
      

      Notice that both files show the name of the first instance, because they share the persistent volume.
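      To see the sharing in action, you can optionally append to the file from the second pod and read it back from the first. The pod names below are taken from the sample output above; substitute the names from your own oc get pods output:

```shell
# Append a line from the second pod, then read the file back from the
# first pod; both pods mount the same persistent volume.
oc exec -it pod/web-server-64689877c6-mbj6g -- /bin/bash -c \
  'echo "Hello again from ${HOSTNAME}" >> /var/www/html/index.html'
oc exec -it pod/web-server-64689877c6-mdr6f -- cat /var/www/html/index.html
```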

  6. Create a database server with a stateful set by using the statefulset-db.yml file in the /home/student/DO180/labs/storage-statefulsets directory. Update the file with the following information:

    | Field | Value |
    | --- | --- |
    | metadata.name | dbserver |
    | spec.selector.matchLabels.app | database |
    | spec.template.metadata.labels.app | database |
    | spec.template.spec.containers.name | dbserver |
    | spec.template.spec.containers.volumeMounts.name | data |
    | spec.template.spec.containers.volumeMounts.mountPath | /var/lib/mysql |
    | spec.volumeClaimTemplates.metadata.name | data |
    | spec.volumeClaimTemplates.spec.storageClassName | lvms-vg1 |

    1. Open the /home/student/DO180/labs/storage-statefulsets/statefulset-db.yml file in an editor. Replace the <CHANGE_ME> objects with values from the previous table:

       apiVersion: apps/v1
       kind: StatefulSet
       metadata:
         name: dbserver
       spec:
         selector:
           matchLabels:
             app: database
         replicas: 2
         template:
           metadata:
             labels:
               app: database
           spec:
             terminationGracePeriodSeconds: 10
             containers:
             - name: dbserver
               image: registry.ocp4.example.com:8443/redhattraining/mysql-app:v1
               ports:
               - name: database
                 containerPort: 3306
               env:
               - name: MYSQL_USER
                 value: "redhat"
               - name: MYSQL_PASSWORD
                 value: "redhat123"
               - name: MYSQL_DATABASE
                 value: "sakila"
               volumeMounts:
               - name: data
                 mountPath: /var/lib/mysql
         volumeClaimTemplates:
         - metadata:
             name: data
           spec:
             accessModes: [ "ReadWriteOnce" ]
             storageClassName: "lvms-vg1"
             resources:
               requests:
                 storage: 1Gi
      
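      Before creating the stateful set, you can optionally check the edited manifest for syntax errors. The --dry-run=client option validates the resource on the client without creating anything on the cluster:

```shell
# Client-side validation only; no resources are created.
oc create -f /home/student/DO180/labs/storage-statefulsets/statefulset-db.yml \
  --dry-run=client
```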
    2. Create the database server by using the oc create -f /home/student/DO180/labs/storage-statefulsets/statefulset-db.yml command. Ignore the warning message.

       [student@workstation ~]$ oc create -f \
         /home/student/DO180/labs/storage-statefulsets/statefulset-db.yml
       Warning: would violate PodSecurity "restricted:v1.24":
       ...output omitted...
       statefulset.apps/dbserver created
      
    3. Wait a few moments and then verify the status of the stateful set and its instances.

       [student@workstation ~]$ oc get statefulset
       NAME       READY   AGE
       dbserver   2/2     10s
      
       [student@workstation ~]$ oc get pods -l app=database
       NAME         READY   STATUS    ...
       dbserver-0   1/1     Running   ...
       dbserver-1   1/1     Running   ...
      
    4. Use the exec command to add data to each of the stateful set pods.

       [student@workstation ~]$ oc exec -it pod/dbserver-0 -- /bin/bash -c \
         "mysql -uredhat -predhat123 sakila -e 'create table items (count INT);'"
      
       mysql: [Warning] Using a password on the command line interface can be insecure.
      
       [student@workstation ~]$ oc exec -it pod/dbserver-1 -- /bin/bash -c \
         "mysql -uredhat -predhat123 sakila -e 'create table inventory (count INT);'"
      
       mysql: [Warning] Using a password on the command line interface can be insecure.
      
  7. Confirm that each instance from the dbserver stateful set has a persistent volume claim. Then, verify that each persistent volume claim contains unique data.

    1. Confirm that the persistent volume claims have a Bound status.

       [student@workstation ~]$ oc get pvc -l app=database
       NAME              STATUS   ...   CAPACITY   ACCESS MODES   ...
       data-dbserver-0   Bound    ...   1Gi        RWO            ...
       data-dbserver-1   Bound    ...   1Gi        RWO            ...
      
    2. Verify that each instance from the dbserver stateful set has its own persistent volume claim by using the oc get pod pod-name -o json | jq .spec.volumes[0].persistentVolumeClaim.claimName command.

       [student@workstation ~]$ oc get pod dbserver-0 -o json | \
         jq .spec.volumes[0].persistentVolumeClaim.claimName
       "data-dbserver-0"
      
       [student@workstation ~]$ oc get pod dbserver-1 -o json | \
         jq .spec.volumes[0].persistentVolumeClaim.claimName
       "data-dbserver-1"
      
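      If jq is not available, the same claim names can be retrieved with the built-in jsonpath output format:

```shell
# Print the claim name of the first volume of each pod, without jq.
oc get pod dbserver-0 \
  -o jsonpath='{.spec.volumes[0].persistentVolumeClaim.claimName}{"\n"}'
oc get pod dbserver-1 \
  -o jsonpath='{.spec.volumes[0].persistentVolumeClaim.claimName}{"\n"}'
```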
    3. Application-level clustering is not enabled for the dbserver stateful set. Verify that each instance of the dbserver stateful set has unique data.

       [student@workstation ~]$ oc exec -it pod/dbserver-0 -- /bin/bash -c \
         "mysql -uredhat -predhat123 sakila -e 'show tables;'"
       mysql: [Warning] Using a password on the command line interface can be insecure.
       +------------------+
       | Tables_in_sakila |
       +------------------+
       | items            |
       +------------------+
      
       [student@workstation ~]$ oc exec -it pod/dbserver-1 -- /bin/bash -c \
         "mysql -uredhat -predhat123 sakila -e 'show tables;'"
       mysql: [Warning] Using a password on the command line interface can be insecure.
       +------------------+
       | Tables_in_sakila |
       +------------------+
       | inventory        |
       +------------------+
      
  8. Delete a pod in the dbserver stateful set. Confirm that a new pod is created and that the pod uses the PVC from the previous pod. Verify that the previously added table exists in the sakila database.

    1. Delete the dbserver-0 pod in the dbserver stateful set. Confirm that a new pod is generated for the stateful set. Then, confirm that the data-dbserver-0 PVC still exists.

       [student@workstation ~]$ oc delete pod dbserver-0
       pod "dbserver-0" deleted
      
       [student@workstation ~]$ oc get pods -l app=database
       NAME         READY   STATUS    RESTARTS   AGE
       dbserver-0   1/1     Running   0          4s
       dbserver-1   1/1     Running   0          5m
      
       [student@workstation ~]$ oc get pvc -l app=database
       NAME              STATUS   ...   CAPACITY   ACCESS MODES   ...
       data-dbserver-0   Bound    ...   1Gi        RWO            ...
       data-dbserver-1   Bound    ...   1Gi        RWO            ...
      
    2. Use the exec command to verify that the new dbserver-0 pod has the items table in the sakila database.

       [student@workstation ~]$ oc exec -it pod/dbserver-0 -- /bin/bash -c \
         "mysql -uredhat -predhat123 sakila -e 'show tables;'"
       mysql: [Warning] Using a password on the command line interface can be insecure.
       +------------------+
       | Tables_in_sakila |
       +------------------+
       | items            |
       +------------------+
      
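      As an optional final check, you can confirm that each claim is still bound to the same underlying persistent volume. The custom-columns output format maps each PVC to the name of its bound PV:

```shell
# Show which persistent volume each claim is bound to.
oc get pvc data-dbserver-0 data-dbserver-1 \
  -o custom-columns=PVC:.metadata.name,PV:.spec.volumeName
```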

Finish

On the workstation machine, use the lab command to complete this exercise. This step is important to ensure that resources from previous exercises do not impact upcoming exercises.

[student@workstation ~]$ lab finish storage-statefulsets