Using PersistentVolumes in Kubernetes

PersistentVolumes provide a way to treat storage as a dynamic resource in Kubernetes. This lab will allow you to demonstrate your knowledge of PersistentVolumes. You will mount some persistent storage to a container using a PersistentVolume and a PersistentVolumeClaim.

Create a custom Storage Class by using vi localdisk.yml.

apiVersion: storage.k8s.io/v1 
kind: StorageClass 
metadata: 
  name: localdisk 
provisioner: kubernetes.io/no-provisioner
allowVolumeExpansion: true

Finish creating the Storage Class by using kubectl create -f localdisk.yml.
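
If you want to confirm the StorageClass was created, you can list it (a quick optional check):

kubectl get storageclass localdisk
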
Create the PersistentVolume by using vi host-pv.yml.

kind: PersistentVolume 
apiVersion: v1 
metadata: 
   name: host-pv 
spec: 
   storageClassName: localdisk
   persistentVolumeReclaimPolicy: Recycle 
   capacity: 
      storage: 1Gi 
   accessModes: 
      - ReadWriteOnce 
   hostPath: 
      path: /var/output

Finish creating the PersistentVolume by using kubectl create -f host-pv.yml.

Check the status of the PersistentVolume by using kubectl get pv.

Create a PersistentVolumeClaim

Start creating a PersistentVolumeClaim for the PersistentVolume to bind to by using vi host-pvc.yml.

apiVersion: v1 
kind: PersistentVolumeClaim 
metadata: 
   name: host-pvc 
spec: 
   storageClassName: localdisk 
   accessModes: 
      - ReadWriteOnce 
   resources: 
      requests: 
         storage: 100Mi

Finish creating the PersistentVolumeClaim by using kubectl create -f host-pvc.yml.

Check the status of the PersistentVolume and PersistentVolumeClaim to verify that they have been bound:

kubectl get pv
kubectl get pvc
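
Because the StorageClass was created with allowVolumeExpansion: true, the claim can later be grown by editing its requested storage. A minimal sketch (the 200Mi value is just an example; whether the resize actually completes depends on the volume type, and a simple hostPath volume like this one may not support expansion):

kubectl patch pvc host-pvc -p '{"spec":{"resources":{"requests":{"storage":"200Mi"}}}}'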

Create a Pod That Uses a PersistentVolume for Storage

Create a Pod that uses the PersistentVolumeClaim by using vi pv-pod.yml.

apiVersion: v1 
kind: Pod 
metadata: 
   name: pv-pod 
spec: 
   containers: 
      - name: busybox 
        image: busybox 
        command: ['sh', '-c', 'while true; do echo Success! > /output/success.txt; sleep 5; done'] 

Mount the PersistentVolume to the /output location by adding the following to the Pod spec, at the same indentation level as the containers spec:

volumes: 
 - name: pv-storage 
   persistentVolumeClaim: 
      claimName: host-pvc

In the containers spec, below the command, set the list of volume mounts by using:

volumeMounts: 
- name: pv-storage 
  mountPath: /output 

Finish creating the Pod by using kubectl create -f pv-pod.yml.

Check that the Pod is up and running by using kubectl get pods.

If you wish, you can log in to the worker node and verify the output data by using cat /var/output/success.txt.
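
If you prefer not to log in to the node, a quick alternative is to read the file through the Pod itself (assuming the Pod is named pv-pod as above):

kubectl exec pv-pod -- cat /output/success.txt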

Tutorial – Launching OCI Linux Instance

Steps:

  • Create a key pair.
  • Choose a compartment for your resources.
  • Create a cloud network.
  • Launch an instance.

Choosing a Compartment

  1. The first resource you create is the cloud network. Open the navigation menu, click Networking, and then click Virtual Cloud Networks.
  2. Select the Sandbox compartment (or the compartment designated by your administrator) from the list on the left. If the Sandbox compartment does not exist, you can create it as described in Creating a Compartment.

Create a cloud network.

  1. Open the navigation menu, click Networking, and then click Virtual Cloud Networks.
  2. Click Start VCN Wizard.
  3. Select Create VCN with Internet Connectivity, and then click Start VCN Wizard.
  4. Enter the values according to your requirements, then click Next.

Launch an instance.

  1. Open the navigation menu and click Compute. Under Compute, click Instances.
  2. Click Create instance.
  3. Enter a name for the instance, for example: <your initials>-Instance. Avoid entering confidential information.
  4. In the Placement section, accept the default Availability domain.
  5. In the Image and shape section, make the following selections:
  6. In the Image section, accept the default, Oracle Linux.
  7. In the Shape section, click Change shape, which allows you to choose the instance size.
  8. In the Networking section, for Primary network, leave Select existing virtual cloud network selected, and for Subnet, leave Select existing subnet selected.
  9. Select the Assign a public IPv4 address option. This creates a public IP address for the instance, which you need to access the instance. If you have trouble selecting this option, confirm that you selected the public subnet that was created with your VCN, not a private subnet.
  10. In the Add SSH keys section, generate an SSH key pair or upload your own public key.
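
If you generate the key pair yourself rather than in the console, a minimal sketch looks like the following (the file name oci-instance-key is just an example); you upload the .pub file as the public key, and once the instance is running you can connect using the private key and the instance's public IP (opc is the default user on Oracle Linux images):

ssh-keygen -t rsa -b 4096 -f ~/.ssh/oci-instance-key
ssh -i ~/.ssh/oci-instance-key opc@<public-ip>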

Enjoy
Osama

Managing Container Storage with Kubernetes Volumes

Kubernetes volumes offer a simple way to mount external storage to containers. This lab will test your knowledge of volumes as you provide storage to some containers according to a provided specification. This will allow you to practice what you know about using Kubernetes volumes.

Create a Pod That Outputs Data to the Host Using a Volume

  • Create a Pod that will interact with the host file system by using vi maintenance-pod.yml.
apiVersion: v1
kind: Pod
metadata:
    name: maintenance-pod
spec:
    containers:
    - name: busybox
      image: busybox
      command: ['sh', '-c', 'while true; do echo Success! >> /output/output.txt; sleep 5; done']
  • Under the basic YAML, begin creating volumes, which should be level with the containers spec:
volumes:
- name: output-vol
  hostPath:
      path: /var/data
  • In the containers spec of the basic YAML, add a line for volume mounts:
volumeMounts:
- name: output-vol
  mountPath: /output

The complete YAML will be:

apiVersion: v1
kind: Pod
metadata:
    name: maintenance-pod
spec:
  containers:
    - name: busybox
      image: busybox
      command: ['sh', '-c', 'while true; do echo Success! >> /output/output.txt; sleep 5; done']
      volumeMounts:
      - name: output-vol
        mountPath: /output
  volumes:
   - name: output-vol
     hostPath:
      path: /var/data
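
You can then create the Pod and, if you like, verify the data on the worker node where it was scheduled (the host path /var/data comes from the volume definition above):

kubectl create -f maintenance-pod.yml
kubectl get pod maintenance-pod -o wide
cat /var/data/output.txt    # run this on the worker node shown by the previous command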

Create a Multi-Container Pod That Shares Data Between Containers Using a Volume

  1. Create another YAML file for a shared-data multi-container Pod by using vi shared-data-pod.yml
  2. Start with the basic Pod definition and add multiple containers, where the first container will write the output.txt file and the second container will read the output.txt file:
apiVersion: v1
kind: Pod
metadata:
    name: shared-data-pod
spec:
    containers:
    - name: busybox1
      image: busybox
      command: ['sh', '-c', 'while true; do echo Success! >> /output/output.txt; sleep 5; done']
    - name: busybox2
      image: busybox
      command: ['sh', '-c', 'while true; do cat /input/output.txt; sleep 5; done']

Set up the volumes, again at the same level as containers, using an emptyDir volume, which exists only to share data between the two containers in a simple way:

volumes:
- name: shared-vol
  emptyDir: {}

Mount that volume between the two containers by adding the following lines under command for the busybox1 container:

volumeMounts:
- name: shared-vol
  mountPath: /output

For the busybox2 container, add the following lines under command to mount the same volume, completing the shared setup:

volumeMounts:
- name: shared-vol
  mountPath: /input

The complete file:

apiVersion: v1
kind: Pod
metadata:
    name: shared-data-pod
spec:
    containers:
    - name: busybox1
      image: busybox
      command: ['sh', '-c', 'while true; do echo Success! >> /output/output.txt; sleep 5; done']
      volumeMounts:
        - name: shared-vol
          mountPath: /output
    - name: busybox2
      image: busybox
      command: ['sh', '-c', 'while true; do cat /input/output.txt; sleep 5; done']
      volumeMounts:
        - name: shared-vol
          mountPath: /input
    volumes:
    - name: shared-vol
      emptyDir: {}

Finish creating the multi-container Pod by using kubectl create -f shared-data-pod.yml.
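
To confirm that the two containers really are sharing data through the emptyDir volume, read the logs of the second container; you should see the repeated Success! lines written by the first container:

kubectl logs shared-data-pod -c busybox2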

Cheers

Osama

Exposing Kubernetes Pods Using Services

Kubernetes Services are a great way to combine Kubernetes networking with the dynamic and often automated nature of Kubernetes applications. In this lab, you will use Services to expose existing Kubernetes Pods. This will allow you to practice your skills with Kubernetes Services.

Expose the Pods as an Internal Service

apiVersion: v1 
kind: Service 
metadata: 
  name: user-db-svc 
spec: 
  type: ClusterIP 
  selector: 
    app: user-db 
  ports: 
  - protocol: TCP 
    port: 80 
    targetPort: 80
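
Assuming the manifest is saved as user-db-svc.yml (the file name is just an example), create the Service and confirm it received a cluster IP:

kubectl create -f user-db-svc.yml
kubectl get svc user-db-svc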

Expose the Pods as an External Service

apiVersion: v1 
kind: Service 
metadata: 
  name: web-frontend-svc 
spec: 
  type: NodePort 
  selector: 
    app: web-frontend 
  ports: 
  - protocol: TCP 
    port: 80 
    targetPort: 80 
    nodePort: 30080
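
Similarly, assuming this manifest is saved as web-frontend-svc.yml, create it and test it from outside the cluster by hitting any node's IP on port 30080:

kubectl create -f web-frontend-svc.yml
kubectl get svc web-frontend-svc
curl http://<node-ip>:30080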

Create a Manifest for a Static Pod

Static pods are a great way to run a pod on a single node without the involvement of the Kubernetes control plane. In this lab, you will have a chance to exercise your knowledge of static pods by creating them in an existing cluster.

sudo vi /etc/kubernetes/manifests/example.yml

Anything under this path will be managed by the kubelet.

Add the following:

apiVersion: v1
kind: Pod
metadata:
  name: beebox-diagnostic
spec:
  containers:
  - name: nginx
    image: nginx:1.14
    ports:
    - containerPort: 80

Restart kubelet to start the static pod:

sudo systemctl restart kubelet

Note that if you try to delete the mirror Pod with kubectl, the kubelet will simply recreate it, because the Pod is managed directly by the kubelet rather than by the control plane.
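
You can see this for yourself (the mirror Pod's name is the static Pod's name with the node's hostname appended, so the exact name will differ on your cluster):

kubectl get pods
kubectl delete pod beebox-diagnostic-<node-hostname>
kubectl get pods    # the mirror Pod reappears almost immediately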

Cheers
Osama

DaemonSets Example

Configure the cluster to create a Pod on each worker node that periodically deletes the contents of the /etc/beebox/tmp directory on that node.

This is very simple to do by applying the following DaemonSet manifest:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: beebox-cleanup
spec:
  selector:
    matchLabels:
      app: beebox-cleanup
  template:
    metadata:
      labels:
        app: beebox-cleanup
    spec:
      containers:
      - name: busybox
        image: busybox:1.27
        command: ['sh', '-c', 'while true; do rm -rf /beebox-temp/*; sleep 60; done']
        volumeMounts:
        - name: beebox-tmp
          mountPath: /beebox-temp
      volumes:
      - name: beebox-tmp
        hostPath:
          path: /etc/beebox/tmp

Apply

kubectl apply -f daemonset.yml
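
To verify that one Pod was scheduled on each worker node:

kubectl get daemonset beebox-cleanup
kubectl get pods -o wide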

Cheers

Osama

K8s Types of Probes

A probe is a diagnostic performed periodically by the kubelet on a container. To perform a diagnostic, the kubelet either executes code within the container, or makes a network request.

Probe outcome

Each probe has one of three results:

  • Success: The container passed the diagnostic.
  • Failure: The container failed the diagnostic.
  • Unknown: The diagnostic itself failed (no action should be taken, and the kubelet will make further checks).

Types of probes

The kubelet can optionally perform and react to three kinds of probes on running containers (a combined example follows the list below):

  • livenessProbe

Indicates whether the container is running. If the liveness probe fails, the kubelet kills the container, and the container is subjected to its restart policy. If a container does not provide a liveness probe, the default state is Success.

  • readinessProbe

Indicates whether the container is ready to respond to requests. If the readiness probe fails, the endpoints controller removes the Pod’s IP address from the endpoints of all Services that match the Pod. The default state of readiness before the initial delay is Failure. If a container does not provide a readiness probe, the default state is Success.

  • startupProbe

Indicates whether the application within the container is started. All other probes are disabled if a startup probe is provided, until it succeeds. If the startup probe fails, the kubelet kills the container, and the container is subjected to its restart policy. If a container does not provide a startup probe, the default state is Success.
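
A minimal sketch of how the three probes can be combined in a single Pod spec (the Pod name, image, endpoint paths, and timing values here are only illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: probes-demo
spec:
  containers:
  - name: web
    image: nginx:1.19.1
    ports:
    - containerPort: 80
    startupProbe:            # all other probes are disabled until this succeeds
      httpGet:
        path: /
        port: 80
      failureThreshold: 30
      periodSeconds: 10
    livenessProbe:           # the container is restarted if this fails
      httpGet:
        path: /
        port: 80
      periodSeconds: 10
    readinessProbe:          # the Pod is removed from Service endpoints while this fails
      httpGet:
        path: /
        port: 80
      periodSeconds: 5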

When should you use a liveness probe

If the process in your container is able to crash on its own whenever it encounters an issue or becomes unhealthy, you do not necessarily need a liveness probe; the kubelet will automatically perform the correct action in accordance with the Pod’s restartPolicy.

If you’d like your container to be killed and restarted if a probe fails, then specify a liveness probe, and specify a restartPolicy of Always or OnFailure.

When should you use a readiness probe

If you’d like to start sending traffic to a Pod only when a probe succeeds, specify a readiness probe. In this case, the readiness probe might be the same as the liveness probe, but the existence of the readiness probe in the spec means that the Pod will start without receiving any traffic and only start receiving traffic after the probe starts succeeding.

If you want your container to be able to take itself down for maintenance, you can specify a readiness probe that checks an endpoint specific to readiness that is different from the liveness probe.

If your app has a strict dependency on back-end services, you can implement both a liveness and a readiness probe. The liveness probe passes when the app itself is healthy, but the readiness probe additionally checks that each required back-end service is available. This helps you avoid directing traffic to Pods that can only respond with error messages.

If your container needs to work on loading large data, configuration files, or migrations during startup, you can use a startup probe. However, if you want to detect the difference between an app that has failed and an app that is still processing its startup data, you might prefer a readiness probe.

When should you use a startup probe

Startup probes are useful for Pods that have containers that take a long time to come into service. Rather than set a long liveness interval, you can configure a separate configuration for probing the container as it starts up, allowing a time longer than the liveness interval would allow.

If your container usually starts in more than initialDelaySeconds + failureThreshold × periodSeconds, you should specify a startup probe that checks the same endpoint as the liveness probe. The default for periodSeconds is 10s. You should then set its failureThreshold high enough to allow the container to start, without changing the default values of the liveness probe. This helps to protect against deadlocks.
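
To make the arithmetic concrete: with failureThreshold: 30 and periodSeconds: 10, the startup probe gives the container up to 30 × 10 = 300 seconds to start, after which the liveness probe takes over with its normal settings. On the container spec, that looks roughly like this (the /healthz endpoint and port are assumed examples):

startupProbe:
  httpGet:
    path: /healthz
    port: 8080
  failureThreshold: 30      # up to 30 failed checks are tolerated during startup
  periodSeconds: 10         # checked every 10 seconds, so up to 300 seconds in total
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  periodSeconds: 10         # normal liveness checking once startup has succeeded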

Cheers
Osama

Passing Configuration Data to a Kubernetes Container

In this scenario, the application is a simple Nginx web server. The server is used as part of a secure backend application and should be configured to use HTTP basic authentication.

This will require an htpasswd file as well as a custom Nginx config file. In order to deploy this Nginx server to the cluster with good configuration practices, you will need to load the custom Nginx configuration from a ConfigMap (created below) and use a Secret to store the htpasswd data.

Create a Pod with a container running the nginx:1.19.1 image. Supply a custom Nginx configuration using a ConfigMap, and populate an htpasswd file using a Secret.

Generate an htpasswd file:

htpasswd -c .htpasswd user

View the file’s contents:

cat .htpasswd

Create a Secret containing the htpasswd data:

kubectl create secret generic nginx-htpasswd --from-file .htpasswd

Delete the .htpasswd file:

rm .htpasswd
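
You can optionally confirm that the Secret now holds the file contents (the data will be base64-encoded):

kubectl get secret nginx-htpasswd -o yaml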

Create the Nginx Pod

Create pod.yml:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.19.1
    ports:
    - containerPort: 80
    volumeMounts:
    - name: config-volume
      mountPath: /etc/nginx
    - name: htpasswd-volume
      mountPath: /etc/nginx/conf
  volumes:
  - name: config-volume
    configMap:
      name: nginx-config
  - name: htpasswd-volume
    secret:
      secretName: nginx-htpasswd

List the existing ConfigMaps in the cluster:

kubectl get cm

We need to create the ConfigMap for nginx-config:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  nginx.conf: |
    user  nginx;
    worker_processes  1;

    error_log  /var/log/nginx/error.log warn;
    pid        /var/run/nginx.pid;

    events {
      worker_connections  1024;
    }

    http {
      server {
        listen       80;
        listen  [::]:80;
        server_name  localhost;
        location / {
            root   /usr/share/nginx/html;
            index  index.html index.htm;
        }
        auth_basic "Secure Site";
        auth_basic_user_file conf/.htpasswd;
      }
    }

Get more info about nginx-config:

kubectl describe cm nginx-config

Apply the pod

kubectl apply -f pod.yml
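
To find the <NGINX_POD_IP> used in the commands below, check the Pod's IP once it is running:

kubectl get pod nginx -o wide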

Within the existing busybox pod, without using credentials, verify everything is working:

kubectl exec busybox -- curl <NGINX_POD_IP>

We’ll see HTML for the 401 Authorization Required page — but this is a good thing, as it means our setup is working as expected.

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   179  100   179    0     0  62174      0 --:--:-- --:--:-- --:--:-- 89500
<html>
<head><title>401 Authorization Required</title></head>
<body>
<center><h1>401 Authorization Required</h1></center>
<hr><center>nginx/1.19.1</center>
</body>
</html>

Run the same command again using credentials (including the password you created at the beginning of the lab) to verify everything is working:

kubectl exec busybox -- curl -u user:<PASSWORD> <NGINX_POD_IP>
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   612  100   612    0     0  48846      0 --:--:-- --:--:-- --:--:-- 51000
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Cheers

Osama

Role vs RoleBinding in Kubernetes

You need to know the difference between

  • Role.
  • RoleBinding.
  • ClusterRole.
  • ClusterRoleBinding.

Please refer to the Kubernetes RBAC documentation for full details.

A Role always sets permissions within a particular namespace; when you create a Role, you have to specify the namespace it belongs in.

ClusterRole, by contrast, is a non-namespaced resource. The resources have different names (Role and ClusterRole) because a Kubernetes object always has to be either namespaced or not namespaced; it can’t be both.

A RoleBinding is namespace-scoped, while a ClusterRoleBinding is cluster-scoped, i.e., it applies across all namespaces.

ClusterRoles and ClusterRoleBindings are useful in the following cases:

  1. Give permissions for non-namespaced resources like nodes
  2. Give permissions for resources in all the namespaces of a cluster
  3. Give permissions for non-resource endpoints like /healthz

A RoleBinding can also reference a ClusterRole to grant the permissions defined in that ClusterRole to resources inside the RoleBinding’s namespace. This kind of reference lets you define a set of common roles across your cluster, then reuse them within multiple namespaces.

Example

Create a Role for the dev User

  1. Create a Role spec file, role.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: beebox-mobile
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "watch", "list"]

2. Save and exit the file by pressing Escape followed by :wq.

3. Apply the role:

kubectl apply -f file-name.yml

Bind the Role to the dev User and Verify Your Setup Works

  1. Create the RoleBinding spec file:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader
  namespace: beebox-mobile
subjects:
- kind: User
  name: dev
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

2. Apply the RoleBinding by running:

kubectl apply -f file-name.yml
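
To verify the binding without switching kubeconfigs, you can use kubectl's impersonation flag (assuming your current user is allowed to impersonate the dev user):

kubectl auth can-i get pods -n beebox-mobile --as dev
kubectl get pods -n beebox-mobile --as dev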

Cheers

Osama

Backing up and Restoring Kubernetes Data in etcd

Backups are an important part of any resilient system, and Kubernetes is no exception. In this post, I will show you how to back up and restore the Kubernetes data stored in etcd.

Back Up the etcd Data

  1. Look up the value for the key cluster.name in the etcd cluster:
ETCDCTL_API=3 etcdctl get cluster.name \
  --endpoints=https://10.0.1.101:2379 \
  --cacert=/home/cloud_user/etcd-certs/etcd-ca.pem \
  --cert=/home/cloud_user/etcd-certs/etcd-server.crt \
  --key=/home/cloud_user/etcd-certs/etcd-server.key

2. Back up etcd using etcdctl and the provided etcd certificates:

ETCDCTL_API=3 etcdctl snapshot save /home/cloud_user/etcd_backup.db \
  --endpoints=https://10.0.1.101:2379 \
  --cacert=/home/cloud_user/etcd-certs/etcd-ca.pem \
  --cert=/home/cloud_user/etcd-certs/etcd-server.crt \
  --key=/home/cloud_user/etcd-certs/etcd-server.key
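
Optionally, check the snapshot file before going any further; this prints its hash, revision, and size:

ETCDCTL_API=3 etcdctl snapshot status /home/cloud_user/etcd_backup.db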

3. Reset etcd by removing all existing etcd data

Note: you don’t have to do this step in production; I am only doing it here to demonstrate how to restore the data.

sudo systemctl stop etcd
sudo rm -rf /var/lib/etcd

Restore the etcd Data from the Backup

  1. Restore the etcd data from the backup (this command spins up a temporary etcd cluster, saving the data from the backup file to a new data directory in the same location where the previous data directory was):
sudo ETCDCTL_API=3 etcdctl snapshot restore /home/cloud_user/etcd_backup.db \
  --initial-cluster etcd-restore=https://10.0.1.101:2380 \
  --initial-advertise-peer-urls https://10.0.1.101:2380 \
  --name etcd-restore \
  --data-dir /var/lib/etcd

2. Set ownership on the new data directory

sudo chown -R etcd:etcd /var/lib/etcd

3. Start etcd

sudo systemctl start etcd

4. Verify the restored data is present by looking up the value for the key cluster.name again:

ETCDCTL_API=3 etcdctl get cluster.name \
  --endpoints=https://10.0.1.101:2379 \
  --cacert=/home/cloud_user/etcd-certs/etcd-ca.pem \
  --cert=/home/cloud_user/etcd-certs/etcd-server.crt \
  --key=/home/cloud_user/etcd-certs/etcd-server.key

Cheers

Osama