Multi-Container Pod Example – K8s

Create a Multi-Container Pod

  1. Create a YAML file named multi.yml:
apiVersion: v1
kind: Pod
metadata:
  name: multi
  namespace: baz
spec:
  containers:
  - name: nginx
    image: nginx
  - name: redis
    image: redis
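
Assuming the baz namespace already exists, you can create the Pod and confirm that both containers start (READY should eventually show 2/2):

kubectl apply -f multi.yml
kubectl get pod multi -n baz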

Create a Complex Multi-Container Pod

apiVersion: v1
kind: Pod
metadata:
  name: logging-sidecar
  namespace: baz
spec:
  containers:
  - name: busybox1
    image: busybox
    command: ['sh', '-c', 'while true; do echo Logging data > /output/output.log; sleep 5; done']
    volumeMounts:
    - name: sharedvol
      mountPath: /output
  - name: sidecar
    image: busybox
    command: ['sh', '-c', 'tail -f /input/output.log']
    volumeMounts:
    - name: sharedvol
      mountPath: /input
  volumes:
  - name: sharedvol
    emptyDir: {}
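
To see the sidecar pattern in action, apply the manifest and stream the sidecar's logs; because both containers mount the same emptyDir volume, the sidecar reads what busybox1 writes (assuming the file is saved as logging-sidecar.yml):

kubectl apply -f logging-sidecar.yml
kubectl logs -f logging-sidecar -n baz -c sidecar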

K8s NetworkPolicy Example

Create a NetworkPolicy That Denies All Access to the Maintenance Pod

  1. Create a NetworkPolicy that denies all access to the maintenance Pod. Because policyTypes lists both Ingress and Egress but no ingress or egress rules are defined, all inbound and outbound traffic to the selected Pod is denied:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np-maintenance
  namespace: foo
spec:
  podSelector:
    matchLabels:
      app: maintenance
  policyTypes:
  - Ingress
  - Egress
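
Apply and inspect the policy (assuming the manifest is saved as np-maintenance.yml):

kubectl apply -f np-maintenance.yml
kubectl describe networkpolicy np-maintenance -n foo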

Create a NetworkPolicy That Allows All Pods in the users-backend Namespace to Communicate with Each Other Only on a Specific Port

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np-users-backend-80
  namespace: users-backend
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          app: users-backend
    ports:
    - protocol: TCP
      port: 80
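
Note that namespaceSelector matches labels on the namespace object itself, not on Pods, so this policy assumes the users-backend namespace carries the app: users-backend label. If it does not, add it first:

kubectl label namespace users-backend app=users-backend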

Cheers

Osama

FREE LEARNING ON UDEMY

The courses below are currently free on Udemy. I'm not sure how long they will stay free, so enjoy them while you can.

Free learning on Udemy: DevOps tutorials for absolute beginners.

  1. DevOps – The Introduction
    https://lnkd.in/dD79ZpJF
  2. CI CD pipeline – Devops Automation in 1 hr
    https://lnkd.in/dMQEGJBN
  3. DevOps Crash Course
    https://lnkd.in/dt5CmYSN
  4. DevOps 101
    https://lnkd.in/dhyzHVQh
  5. DevOps on AWS: Code, Build, and Test (Course 1 of 3)
    https://lnkd.in/dV6NbWRJ
  6. Free Devops Interview Questions and Answers
    https://lnkd.in/dsQu76qm
  7. DevOps Tools for Beginners: Ansible in 1 hour
    https://lnkd.in/dKgMap-r
  8. DevOps on AWS: Release and Deploy (Course 2 of 3)
    https://lnkd.in/dzQuM4Ht
  9. DevOps on AWS: Operate and Monitor (Course 3 of 3)
    https://lnkd.in/d_P9wUgg
  10. Introduction to DevOps, Habits and Practices
    https://lnkd.in/dsvQQcYj
  11. Amazon AWS Cloud IAM Hands-On
    https://lnkd.in/ddSBhiST
  12. DevOps : CI/CD with Jenkins
    https://lnkd.in/d3qvi-Az
  13. Introduction to YAML – A hands-on course
    https://lnkd.in/d4ypNfGF
  14. Kubernetes: Getting Started
    https://lnkd.in/d_JQi6wF
  15. Docker Tutorial for Beginners practical hands on -Devops
    https://lnkd.in/dbSJ-zfX
  16. Ansible for the Absolute Beginner – DevOps
    https://lnkd.in/dn_w3bsK
  17. Docker, Docker SWARM and Kubernetes crash course for DevOps
    https://lnkd.in/dFirktd3
  18. Understanding Docker in about an Hour
    https://lnkd.in/dNBvbgqJ
  19. Learn terraform by setting up Highly available wordpress
    https://lnkd.in/d-AaXDT2
  20. Use Ansible with Amazon Web Services
    https://lnkd.in/d6VfZi7d
  21. GIT Crash Course
    https://lnkd.in/ddzznGuV
  22. Maven Quick Start: A Fast Introduction to Maven by Example
    https://lnkd.in/dhVam3zC
  23. Master Amazon EC2 Basics with 10 Labs
    https://lnkd.in/d9jQ6cmN
  24. Amazon Web Services (AWS): CloudFormation
    https://lnkd.in/dAc65c-H
  25. Just enough Ansible to be dangerous
    https://lnkd.in/dXaWmX5d
  26. AZ-900 Microsoft Azure Fundamentals
    https://lnkd.in/dcdae_VZ
  27. Deploy Azure Virtual Desktop for beginners
    https://lnkd.in/dQsbzHes
  28. Apache Maven for Beginners
    https://lnkd.in/dWTK6dxn
  29. AWS Certified Solutions Architect Associate Introduction
    https://lnkd.in/d4eR5gsW
  30. Microsoft Azure fundamentals Az900 crash course
    https://lnkd.in/de5GBCEB
  31. Azure Real World Hand-on Training For Beginners.
    https://lnkd.in/dB3VM7f7
  32. Introduction to Linux Shell Scripting
    https://lnkd.in/dCb4BkvH
  33. Create a 3-Tier Application Using Azure Virtual Machines
    https://lnkd.in/dfMtuW8C
  34. AWS VPC and VPC Peering Demo
    https://lnkd.in/dxnraPHf
  35. Amazon Web Services (AWS) EC2: An Introduction
    https://lnkd.in/drUvNuFk
  36. Hosting your static website on Amazon AWS S3 service
    https://lnkd.in/dBw4RKs2
  37. Mobaxterm Powerful tools to access Linux and Unix
    https://lnkd.in/dzKTB4xw
  38. Getting started with Cloud Computing using Microsoft Azure
    https://lnkd.in/dViDqS2t
  39. Cloud Computing Fundamental
    https://lnkd.in/d9ZY_Kdq

Cheers
Osama

K8s RBAC Example

Create a Service Account

It's a super simple command:

kubectl create sa webautomation -n web

Create a ClusterRole That Provides Read Access to Pods

  1. Define the ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]

Bind the ClusterRole to the Service Account to Only Read Pods in the web Namespace

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rb-pod-reader
  namespace: web
subjects:
- kind: ServiceAccount
  name: webautomation
  namespace: web
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
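
You can confirm the binding works by impersonating the service account with kubectl's built-in authorization check; it should answer yes in the web namespace and no elsewhere:

kubectl auth can-i list pods -n web --as=system:serviceaccount:web:webautomation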

Cheers

Osama

Managing Container Storage with Kubernetes Volumes

Kubernetes volumes offer a simple way to mount external storage to containers. This lab tests your knowledge of volumes by having you provide storage to some containers according to a provided specification, giving you a chance to practice what you know about Kubernetes volumes.

Create a Pod That Outputs Data to the Host Using a Volume

  • Create a Pod that will interact with the host file system by using vi maintenance-pod.yml.
apiVersion: v1
kind: Pod
metadata:
    name: maintenance-pod
spec:
    containers:
    - name: busybox
      image: busybox
      command: ['sh', '-c', 'while true; do echo Success! >> /output/output.txt; sleep 5; done']
  • Under the basic YAML, begin creating volumes, which should be level with the containers spec:
volumes:
- name: output-vol
  hostPath:
    path: /var/data
  • In the containers spec of the basic YAML, add a line for volume mounts:
volumeMounts:
- name: output-vol
  mountPath: /output

The complete YAML will be:

apiVersion: v1
kind: Pod
metadata:
    name: maintenance-pod
spec:
    containers:
    - name: busybox
      image: busybox
      command: ['sh', '-c', 'while true; do echo Success! >> /output/output.txt; sleep 5; done']
      volumeMounts:
      - name: output-vol
        mountPath: /output
    volumes:
    - name: output-vol
      hostPath:
        path: /var/data
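
Create the Pod, then verify the output on the host (reading /var/data/output.txt requires shell access to the node where the Pod was scheduled):

kubectl apply -f maintenance-pod.yml

Then, on that node:

cat /var/data/output.txt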

Create a Multi-Container Pod That Shares Data Between Containers Using a Volume

  1. Create another YAML file for a shared-data multi-container Pod by using vi shared-data-pod.yml
  2. Start with the basic Pod definition and add multiple containers, where the first container will write the output.txt file and the second container will read the output.txt file:
apiVersion: v1
kind: Pod
metadata:
    name: shared-data-pod
spec:
    containers:
    - name: busybox1
      image: busybox
      command: ['sh', '-c', 'while true; do echo Success! >> /output/output.txt; sleep 5; done']
    - name: busybox2
      image: busybox
      command: ['sh', '-c', 'while true; do cat /input/output.txt; sleep 5; done']

Set up the volumes, again at the same level as containers. An emptyDir volume exists only for the life of the Pod and is a simple way to share data between two containers:

volumes:
- name: shared-vol
  emptyDir: {}

Mount that volume between the two containers by adding the following lines under command for the busybox1 container:

volumeMounts:
- name: shared-vol
  mountPath: /output

For the busybox2 container, add the following lines to mount the same volume under command to complete creating the shared file:

volumeMounts:
- name: shared-vol
  mountPath: /input

The complete file:

apiVersion: v1
kind: Pod
metadata:
    name: shared-data-pod
spec:
    containers:
    - name: busybox1
      image: busybox
      command: ['sh', '-c', 'while true; do echo Success! >> /output/output.txt; sleep 5; done']
      volumeMounts:
        - name: shared-vol
          mountPath: /output
    - name: busybox2
      image: busybox
      command: ['sh', '-c', 'while true; do cat /input/output.txt; sleep 5; done']
      volumeMounts:
        - name: shared-vol
          mountPath: /input
    volumes:
    - name: shared-vol
      emptyDir: {}

Finish creating the multi-container Pod using kubectl create -f shared-data-pod.yml.
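
Once the Pod is running, confirm the containers are sharing data by reading the second container's logs, which should show a stream of Success! lines:

kubectl logs shared-data-pod -c busybox2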

Cheers

Osama

K8s Types of Probes

A probe is a diagnostic performed periodically by the kubelet on a container. To perform a diagnostic, the kubelet either executes code within the container, or makes a network request.

Probe outcome

Each probe has one of three results:

  • Success: The container passed the diagnostic.
  • Failure: The container failed the diagnostic.
  • Unknown: The diagnostic failed (no action should be taken, and the kubelet will make further checks).

Types of Probes

The kubelet can optionally perform and react to three kinds of probes on running containers:

  • livenessProbe

Indicates whether the container is running. If the liveness probe fails, the kubelet kills the container, and the container is subjected to its restart policy. If a container does not provide a liveness probe, the default state is Success.

  • readinessProbe

Indicates whether the container is ready to respond to requests. If the readiness probe fails, the endpoints controller removes the Pod’s IP address from the endpoints of all Services that match the Pod. The default state of readiness before the initial delay is Failure. If a container does not provide a readiness probe, the default state is Success.

  • startupProbe

Indicates whether the application within the container is started. All other probes are disabled if a startup probe is provided, until it succeeds. If the startup probe fails, the kubelet kills the container, and the container is subjected to its restart policy. If a container does not provide a startup probe, the default state is Success.
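
As a sketch, here is a Pod spec that wires up all three probe types for a hypothetical HTTP application; the image name, paths, and port are assumptions for illustration only:

apiVersion: v1
kind: Pod
metadata:
  name: probes-demo
spec:
  containers:
  - name: web
    image: my-web-app:1.0            # hypothetical image
    ports:
    - containerPort: 8080
    startupProbe:                    # disables the other probes until it succeeds
      httpGet:
        path: /healthz
        port: 8080
      failureThreshold: 30           # allows up to 30 x 10 = 300s for startup
      periodSeconds: 10
    livenessProbe:                   # container is killed and restarted if this fails
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 10
    readinessProbe:                  # Pod receives Service traffic only while this passes
      httpGet:
        path: /ready
        port: 8080
      periodSeconds: 5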

When should you use a liveness probe?

If the process in your container is able to crash on its own whenever it encounters an issue or becomes unhealthy, you do not necessarily need a liveness probe; the kubelet will automatically perform the correct action in accordance with the Pod’s restartPolicy.

If you’d like your container to be killed and restarted if a probe fails, then specify a liveness probe, and specify a restartPolicy of Always or OnFailure.

When should you use a readiness probe?

If you’d like to start sending traffic to a Pod only when a probe succeeds, specify a readiness probe. In this case, the readiness probe might be the same as the liveness probe, but the existence of the readiness probe in the spec means that the Pod will start without receiving any traffic and only start receiving traffic after the probe starts succeeding.

If you want your container to be able to take itself down for maintenance, you can specify a readiness probe that checks an endpoint specific to readiness that is different from the liveness probe.

If your app has a strict dependency on back-end services, you can implement both a liveness and a readiness probe. The liveness probe passes when the app itself is healthy, but the readiness probe additionally checks that each required back-end service is available. This helps you avoid directing traffic to Pods that can only respond with error messages.

If your container needs to work on loading large data, configuration files, or migrations during startup, you can use a startup probe. However, if you want to detect the difference between an app that has failed and an app that is still processing its startup data, you might prefer a readiness probe.

When should you use a startup probe?

Startup probes are useful for Pods that have containers that take a long time to come into service. Rather than set a long liveness interval, you can configure a separate configuration for probing the container as it starts up, allowing a time longer than the liveness interval would allow.

If your container usually starts in more than initialDelaySeconds + failureThreshold × periodSeconds, you should specify a startup probe that checks the same endpoint as the liveness probe. The default for periodSeconds is 10s. You should then set its failureThreshold high enough to allow the container to start, without changing the default values of the liveness probe. This helps to protect against deadlocks.
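
For example, with the probe defaults (initialDelaySeconds: 0, failureThreshold: 3, periodSeconds: 10), a container gets only about 0 + 3 × 10 = 30 seconds to start before the liveness probe kills it; a startup probe with failureThreshold: 30 and periodSeconds: 10 stretches that window to 300 seconds without touching the liveness settings.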

Cheers
Osama

Passing Configuration Data to a Kubernetes Container

Suppose one of your applications is a simple Nginx web server. The server is used as part of a secure backend application, and you would like it to be configured to use HTTP basic authentication.

This will require an htpasswd file as well as a custom Nginx config file. In order to deploy this Nginx server to the cluster with good configuration practices, you will need to load the custom Nginx configuration from a ConfigMap and use a Secret to store the htpasswd data.

Create a Pod with a container running the nginx:1.19.1 image. Supply a custom Nginx configuration using a ConfigMap, and populate an htpasswd file using a Secret.

Generate an htpasswd file (the htpasswd tool is provided by the apache2-utils package on Debian/Ubuntu):

htpasswd -c .htpasswd user

View the file’s contents:

cat .htpasswd

Create a Secret containing the htpasswd data:

kubectl create secret generic nginx-htpasswd --from-file .htpasswd

Delete the .htpasswd file:

rm .htpasswd

Create the Nginx Pod

Create pod.yml:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.19.1
    ports:
    - containerPort: 80
    volumeMounts:
    - name: config-volume
      mountPath: /etc/nginx
    - name: htpasswd-volume
      mountPath: /etc/nginx/conf
  volumes:
  - name: config-volume
    configMap:
      name: nginx-config
  - name: htpasswd-volume
    secret:
      secretName: nginx-htpasswd

View the existing ConfigMap:

kubectl get cm

If the nginx-config ConfigMap does not already exist, create it:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  nginx.conf: |
    user  nginx;
    worker_processes  1;

    error_log  /var/log/nginx/error.log warn;
    pid        /var/run/nginx.pid;

    events {
      worker_connections  1024;
    }

    http {
      server {
        listen       80;
        listen  [::]:80;
        server_name  localhost;

        location / {
            root   /usr/share/nginx/html;
            index  index.html index.htm;
        }

        auth_basic "Secure Site";
        auth_basic_user_file conf/.htpasswd;
      }
    }

Get more info about nginx-config:

kubectl describe cm nginx-config

Apply the Pod:

kubectl apply -f pod.yml
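
Look up the Pod's IP address to use in the next step:

kubectl get pod nginx -o wide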

Within the existing busybox pod, without using credentials, verify everything is working:

kubectl exec busybox -- curl <NGINX_POD_IP>

We’ll see HTML for the 401 Authorization Required page — but this is a good thing, as it means our setup is working as expected.

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   179  100   179    0     0  62174      0 --:--:-- --:--:-- --:--:-- 89500
<html>
<head><title>401 Authorization Required</title></head>
<body>
<center><h1>401 Authorization Required</h1></center>
<hr><center>nginx/1.19.1</center>
</body>
</html>

Run the same command again using credentials (including the password you created at the beginning of the lab) to verify everything is working:

kubectl exec busybox -- curl -u user:<PASSWORD> <NGINX_POD_IP>
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   612  100   612    0     0  48846      0 --:--:-- --:--:-- --:--:-- 51000
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Cheers

Osama

Upgrade K8s Using kubeadm

First, upgrade the control plane node

Drain the control plane node.

kubectl drain master-node-name --ignore-daemonsets

Upgrade kubeadm (replace version below with the target package version, for example kubeadm=1.24.2-00).

sudo apt-get update && \
sudo apt-get install -y --allow-change-held-packages kubeadm=version

kubeadm version

Plan the upgrade.

sudo kubeadm upgrade plan vX.Y.Z (for example, v1.24.2)

Upgrade the control plane components.

sudo kubeadm upgrade apply v1.24.2

Upgrade kubelet and kubectl on the control plane node.

sudo apt-get update && \
sudo apt-get install -y --allow-change-held-packages kubelet=version kubectl=version 

Restart kubelet.

sudo systemctl daemon-reload
sudo systemctl restart kubelet

Uncordon the control plane node.

kubectl uncordon master-node-name

Verify that the control plane is working
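
A quick way to check is to list the nodes; the control plane node should report a Ready status and show the new version:

kubectl get nodes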

Note: you should not upgrade all worker nodes at the same time. Make sure enough nodes are available at any given time to provide uninterrupted service.

Worker nodes

Run the following on the control plane node to drain worker node 1:

kubectl drain worker1-node-name --ignore-daemonsets --force

Log in to the first worker node, then upgrade kubeadm.

sudo apt-get update && \
sudo apt-get install -y --allow-change-held-packages kubeadm=version

Upgrade the kubelet configuration on the worker node.

sudo kubeadm upgrade node

Upgrade kubelet and kubectl on the worker node.

sudo apt-get update && \
sudo apt-get install -y --allow-change-held-packages kubelet=version kubectl=version

Restart kubelet.

sudo systemctl daemon-reload 
sudo systemctl restart kubelet

From the control plane node, uncordon worker node 1.

kubectl uncordon worker1-node-name

Repeat the upgrade process for the remaining worker nodes.

Cheers

Osama