Scaling Pods in Kubernetes

This is a continuation of the previous post, Configure Kubernetes, on my blog.

This post discusses how to scale pods. I will assume Kubernetes is already installed; if not, go back to the post above.

If you have already done the steps below, you can skip them.

Initialize the cluster

kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.11.3

As mentioned, the command will print a few follow-up commands to run:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • Install Flannel

Flannel is an open-source virtual network project, originally developed by CoreOS, designed for Kubernetes. Each host in a flannel cluster runs an agent called flanneld. It assigns each host a subnet, which acts as the IP address pool for containers running on that host.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
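To confirm the network add-on came up, check that the flannel pods reach the Running state and that the nodes report Ready (this manifest deploys into the kube-system namespace):

kubectl get pods -n kube-system
kubectl get nodes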
  • Create deployment
vi deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd-deployment
  labels:
    app: httpd
spec:
  replicas: 3
  selector:
    matchLabels:
      app: httpd
  template:
    metadata:
      labels:
        app: httpd
    spec:
      containers:
      - name: httpd
        image: httpd:latest
        ports:
        - containerPort: 80
  • Spin up the deployment
kubectl create -f deployment.yml
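You can verify the rollout by listing the deployment and its pods, filtering on the app=httpd label from the manifest:

kubectl get deployments
kubectl get pods -l app=httpd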

  • Create the service
vim service.yml
kind: Service
apiVersion: v1
metadata:
  name: service-deployment
spec:
  selector:
    app: httpd
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: NodePort
kubectl create -f service.yml
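Because the service is of type NodePort, Kubernetes assigns it a port from the 30000-32767 range; you can read it back with:

kubectl get service service-deployment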
  • Scale the deployment up to 5 replicas.
vi deployment.yml

Change the number of replicas to 5:

spec:
  replicas: 5
  • Apply the changes:
kubectl apply -f deployment.yml
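Editing the manifest keeps the file as the source of truth, but for a quick test the same result can be achieved imperatively:

kubectl scale deployment httpd-deployment --replicas=5
kubectl get pods -l app=httpd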

Enjoy

Hope it’s useful

Osama

Setting up a Kubernetes Cluster with Docker – CentOS

Continuing the Docker container blog series, I chose to move on to Kubernetes and discuss it in more depth, starting with configuration and installation.

This configuration covers the on-premise side; to follow it you need at least 2 servers:

  • The master node: controls and manages a set of worker nodes (the workload runtime) and represents the cluster in Kubernetes. A master node has several components to help manage worker nodes, among them the Kube-Controller-Manager, which runs a set of controllers for the running cluster.
  • The worker node: a worker machine in Kubernetes, either a virtual or a physical machine depending on the cluster. Each node is managed by the master. A node can have multiple pods, and the Kubernetes master automatically handles scheduling the pods across the nodes in the cluster.

Configure The Kubernetes cluster

  • On all nodes, add the Kubernetes repo to /etc/yum.repos.d:
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF
  • Disable SELinux:
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

  • Install Kubernetes
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
  • Enable and start kubelet
sudo systemctl enable --now kubelet
  • From Node 1 (the master), initialize the control plane node, and set the pod network CIDR to 10.244.0.0/16, or adjust it to your IP range:
kubeadm init --pod-network-cidr=10.244.0.0/16
  • From Node 1 (Master), check the status of your cluster:
 docker ps -a

Repeat this step on the worker nodes to confirm they can see the cluster.

  • Once it finishes, the init command prints a few commands for you; you need to run them or you will have permission issues.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Copy the kubeadm join command, then paste and run it in your worker nodes' terminal windows.
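The join command printed by kubeadm init looks roughly like the following; the token and hash are generated per cluster, so the values here are placeholders:

kubeadm join <MASTER_IP>:6443 --token <TOKEN> --discovery-token-ca-cert-hash sha256:<HASH>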

  • From the worker nodes, verify that they can see the cluster
docker ps -a
  • From Node 1 (Master), check the status of the nodes
 kubectl get nodes

Kubernetes is now installed, but the cluster is empty. The next steps create a pod and a service for it; adapt them to your application type. This is just for testing, to show the reader how it goes.

  • Install flannel
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  • Create POD
vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod-demo
  labels:
    app: nginx-demo
spec:
  containers:
  - image: nginx:latest
    name: nginx-demo
    ports:
    - containerPort: 80
    imagePullPolicy: Always

  • Create the pod
 kubectl create -f pod.yml
  • Check the status of the pod
kubectl get pods
  • Create Services
vim service.yml
apiVersion: v1
kind: Service
metadata:
  name: service-demo
spec:
  selector:
    app: nginx-demo
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: NodePort
  • Create the service
kubectl apply -f service.yml
  • Run the following command to view the service
 kubectl get services

Take note of the service-demo port number.
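If you prefer the command line, the assigned NodePort can also be read with a jsonpath query:

kubectl get service service-demo -o jsonpath='{.spec.ports[0].nodePort}'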

In a web browser, navigate to the public IP address for a server in the cluster, and verify connectivity:

<PUBLIC_IP_ADDRESS>:<SERVICE_DEMO_PORT_NUMBER>

Enjoy the automation🤗

Osama

Using Grafana with Prometheus for Alerting and Monitoring

This post continues the previous one, which discussed "Monitor the Container using Prometheus". To use Grafana we need to do the following:

  • The first thing we need to do is create a daemon.json file for Docker. Once /etc/docker/daemon.json is open in the vi text editor, add the following:

{
  "metrics-addr": "0.0.0.0:9323",
  "experimental": true
}

  • Restart Docker:
systemctl restart docker

  • Update the firewall rules so the Prometheus server can reach the metrics endpoint:
firewall-cmd --zone=public --add-port=9323/tcp
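To verify that Docker is now exposing metrics, query the endpoint locally; the daemon serves Prometheus-format metrics under /metrics on the configured address:

curl -s http://localhost:9323/metrics | head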

Update Prometheus

  • Edit the Prometheus configuration from the previous post to look like the below (vi prometheus.yml):
scrape_configs:
  - job_name: prometheus
    scrape_interval: 5s
    static_configs:
    - targets:
      - prometheus:9090
      - node-exporter:9100
      - pushgateway:9091
      - cadvisor:8080

  - job_name: docker
    scrape_interval: 5s
    static_configs:
    - targets:
      - <PRIVATE_IP_ADDRESS>:9323
  • Edit docker-compose.yml, also from the previous post; the services below sit under its services: key.

vi ~/docker-compose.yml

  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    ports:
      - 9090:9090
    command:
      - --config.file=/etc/prometheus/prometheus.yml
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
    depends_on:
      - cadvisor

  cadvisor:
    image: google/cadvisor:latest
    container_name: cadvisor
    ports:
      - 8080:8080
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro

  pushgateway:
    image: prom/pushgateway
    container_name: pushgateway
    ports:
      - 9091:9091

  node-exporter:
    image: prom/node-exporter:latest
    container_name: node-exporter
    restart: unless-stopped
    expose:
      - 9100

  grafana:
    image: grafana/grafana
    container_name: grafana
    ports:
      - 3000:3000
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=password
    depends_on:
      - prometheus
      - cadvisor
  • Run the Docker Compose command:
docker-compose up -d
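A quick way to confirm all five containers came up:

docker-compose ps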

Check that Grafana is working by browsing to:

http://PUBLIC_IP_ADDRESS:3000

Once you have access, do the following to connect Grafana with Prometheus.

Adding DataSource

In the Grafana Home Dashboard, click the Add data source icon. For Name, type "Prometheus". Click into the Type field, and select Prometheus from the dropdown. Under URL, Grafana suggests http://localhost:9090, but we need to change it: copy the private IP address of your server, then replace "localhost" in the URL with it. (It should look like this: http://PRIVATE_IP_ADDRESS:9090.)

Add the Docker Dashboard to Grafana

Click the plus sign (+) on the left side of the Grafana interface, and click Import. Then open the JSON file uploaded to my GitHub here, and copy the contents of the file to your clipboard.

We now have our Grafana visualization. In the upper right corner, click on Refresh every 5m and select Last 5 minutes.

Final Results

Enjoy

Osama

Dockerize a Flask Application

The Flask application is uploaded to my GitHub Here.

I will dockerize the above application and show you the steps to do it.

Let’s Start 🤞

  • First, add the files we don't want in the image to a .dockerignore file:
vim .dockerignore

.dockerignore
Dockerfile
.gitignore
Pipfile.lock
migrations/
  • Write the Dockerfile, which is already included in the above repo (vim Dockerfile):

# Base image with Python 3
FROM python:3

# Install dependencies into a dedicated user base so they end up on PATH
ENV PYBASE /pybase
ENV PYTHONUSERBASE $PYBASE
ENV PATH $PYBASE/bin:$PATH
RUN pip install pipenv

# Resolve and install the dependencies declared in the Pipfile
WORKDIR /tmp
COPY Pipfile .
RUN pipenv lock
RUN PIP_USER=1 PIP_IGNORE_INSTALLED=1 pipenv install -d --system --ignore-pipfile

# Copy the application code and serve Flask on port 80
COPY . /app/notes
WORKDIR /app/notes
EXPOSE 80
CMD ["flask", "run", "--port=80", "--host=0.0.0.0"]
  • Build and Test
docker build -t notesapp:0.1 .

docker run --rm -it --network notes -v /home/Osama/notes/migrations:/app/notes/migrations notesapp:0.1 bash

The commands above build the image and run the container; once you are inside the container, configure the database:

  • Configure Database
flask db init
flask db migrate
flask db upgrade
  • Run and Upgrade
docker run --rm -it --network notes -p 80:80 notesapp:0.1
  1. In a web browser, navigate to the public IP address for the server, and log in to your account.
  2. Verify that you can create a new note.
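If you'd rather check from the command line first, a quick request from the server itself should get a response from the app (assuming port 80 is open):

curl -I http://localhost/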

Perfect, we are done now.

Enjoy the learning 👍

Osama

How to enable docker logging

Docker includes multiple logging mechanisms to help you get information from running containers and services. These mechanisms are called logging drivers. Each Docker daemon has a default logging driver, which each container uses unless you configure it to use a different logging driver, or “log-driver” for short.

Steps:

Configure Docker to use syslog

  • vim /etc/rsyslog.conf
In the file editor, uncomment the two lines under `Provides UDP syslog reception` by removing the `#`, so they read:

$ModLoad imudp
$UDPServerRun 514

Then

systemctl start rsyslog

  • Now that syslog is running, let's configure Docker to use syslog as the default logging driver. We'll do this by creating a file called daemon.json:
sudo mkdir /etc/docker

vi /etc/docker/daemon.json

{ "log-driver":

"syslog",

"log-opts": {

"syslog-address": "udp://<PRIVATE_IP>:514" }

}

Then

systemctl start docker

Time to use it with Docker.

For example, the first method:

docker container run -d --name syslog-logging httpd

Check by

docker logs syslog-logging

Or

tail /var/log/messages

The second way is to enable logging per container by choosing the log driver explicitly:

docker container run -d --name json-logging --log-driver json-file httpd

Check

docker logs json-logging
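If you stick with the json-file driver, it's worth capping log growth; max-size and max-file are standard json-file log-opts (a sketch for /etc/docker/daemon.json):

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}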

Docker power 👌

Enjoy

Osama

Storing Container Data in Azure Blob Storage

This time, how to store your data in Azure Blob Storage 👍

Let’s start

Configuration

  • Obtain the Azure login credentials
az login
  1. Copy the code provided by the command.
  2. Open a browser and navigate to https://microsoft.com/devicelogin.
  3. Enter the code copied in a previous step and click Next.
  4. Use your Azure login credentials to finish logging in.
  5. Switch back to the terminal and wait for the confirmation.

Storage

  • Find the name of the Storage account
 az storage account list | grep name | head -1

Copy the name of the Storage account to the clipboard.

  • Export the Storage account name
 export AZURE_STORAGE_ACCOUNT=<COPIED_STORAGE_ACCOUNT_NAME>
  • Retrieve the Storage access key
az storage account keys list --account-name=$AZURE_STORAGE_ACCOUNT

Copy the key1 “value” for later use.

  • Export the key value
export AZURE_STORAGE_ACCESS_KEY=<KEY1_VALUE>
  • Install blobfuse
sudo rpm -Uvh https://packages.microsoft.com/config/rhel/7/packages-microsoft-prod.rpm
sudo yum install blobfuse fuse -y
  • Modify the fuse.conf configuration file
sudo sed -ri 's/# user_allow_other/user_allow_other/' /etc/fuse.conf

Use Azure Blob container Storage

  • Create necessary directories
sudo mkdir -p /mnt/Osama /mnt/blobfusetmp
  • Change ownership of the directories
sudo chown cloud_user /mnt/Osama/ /mnt/blobfusetmp/
  • Mount the Blob Storage from Azure
blobfuse /mnt/Osama --container-name=website --tmp-path=/mnt/blobfusetmp -o allow_other
  • Copy the files you want into the Blob Storage container, for example website files:
 cp -r ~/web/* /mnt/Osama/
  • Verify the copy worked
ll /mnt/Osama/
  • Verify the files made it to Azure Blob Storage
az storage blob list -c website --output table
  • Finally, run a Docker container using the Azure Blob Storage mount:
docker run -d --name web1 -p 80:80 --mount type=bind,source=/mnt/Osama,target=/usr/local/apache2/htdocs,readonly httpd:2.4
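Since the mounted blob path is bound into Apache's document root, a request to the host should now serve the copied site (assuming port 80 is open):

curl http://localhost/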

Enjoy 🎉😁

Osama

Docker compose example

What is Docker Compose?

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.

Benefits of Docker Compose

  • Single host deployment – This means you can run everything on a single piece of hardware
  • Quick and easy configuration – Due to YAML scripts
  • High productivity – Docker Compose reduces the time it takes to perform tasks
  • Security – All the containers are isolated from each other, reducing the threat landscape

Just a quick post with an example docker-compose file, to show you how powerful this is compared to starting each container by hand with separate docker commands.

  1. Create a file called docker-compose.yml
version: '3'
services:
  ghost:
    image: ghost:1-alpine
    container_name: ghost-blog
    restart: always
    ports:
      - 80:2368
    environment:
      database__client: mysql
      database__connection__host: mysql
      database__connection__user: root
      database__connection__password: P4sSw0rd0!
      database__connection__database: ghost
    volumes:
      - ghost-volume:/var/lib/ghost
    depends_on:
      - mysql

  mysql:
    image: mysql:5.7
    container_name: ghost-db
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: P4sSw0rd0!
    volumes:
      - mysql-volume:/var/lib/mysql

volumes:
  ghost-volume:
  mysql-volume:

2. Run

docker-compose up -d
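Once it's up, Compose can also report status and tear the stack down (named volumes survive unless you add -v):

docker-compose ps
docker-compose down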

Finished

Osama

Inspect Docker layer

In this post, I will share one way to find out the size of an image as well as its layers.

To do that follow the steps

  • Create two shell variables, one for the layers and the other for the size, with the following:
export showLayers='{{ range .RootFS.Layers }}{{ println . }}{{end}}'

export showSize='{{ .Size }}'

  • Check the size like the below:
docker inspect -f "$showSize" <image-name:tag>

  • Check the layers like the below:
docker inspect -f "$showLayers" <image-name:tag>

Regards

Osama

Docker & kubernetes example – Full project for free

Okay, I love to post free examples/projects on my GitHub from time to time. I chose Docker and Kubernetes this time; the project idea is very nice and easy to implement.

What does this project do?


  • Create an application that connects to a database, reads some data, and returns this data upon HTTP request, This can be a simple web app that reads a ‘hello world’ string from the MySQL database.
  • Run a database app. Data volume should be persistent.
  • Application from step 1 needs to discover the database from step 2 using Kubernetes native features, Database credentials should NOT be hardcoded in application or helm chart code.
  • The application should be accessible from outside of Kubernetes.
  • Create a helm chart which implements all these steps

I chose Java as the programming language because the Spring Boot framework is already well established and easy to use.

Please follow the README file and everything will work fine without any issue. If you have any questions, comment below and I will answer.

GitHub Link HERE

Enjoy the free learning

Osama