
Monitoring Containers with Prometheus

Using Prometheus, you can monitor application metrics like throughput (TPS) and response times of the Kafka load generator (Kafka producer), the Kafka consumer, and the Cassandra client. Node Exporter can be used to monitor host hardware and kernel metrics.

Create a prometheus.yml File

  • In root’s home directory, create prometheus.yml
vi prometheus.yml

  • We’ve got to stick a few configuration lines in here. When we’re done, it should look like this
scrape_configs:
  - job_name: cadvisor
    scrape_interval: 5s
    static_configs:
      - targets:
          - cadvisor:8080
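Since the intro mentions Node Exporter for host metrics, you could also scrape it by appending a second job under scrape_configs. This is a sketch, assuming a node-exporter container is reachable on the same network at its default port 9100 (the service name here is illustrative):

```yaml
  # Hypothetical second scrape job, appended under scrape_configs:
  - job_name: node
    scrape_interval: 5s
    static_configs:
      - targets:
          - node-exporter:9100
```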
  • Create a docker-compose.yml file
version: '3'

services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    ports:
      - 9090:9090
    command:
      - --config.file=/etc/prometheus/prometheus.yml
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    depends_on:
      - cadvisor

  cadvisor:
    image: google/cadvisor:latest
    container_name: cadvisor
    ports:
      - 8080:8080
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker:/var/lib/docker:ro
  • In order to stand up the environment, we’ll run this
docker-compose up -d

And to see if everything stood up properly, let’s run a quick docker ps. The output should show four containers: prometheus, cadvisor, nginx, and redis.

Let’s check it in a web browser as well, browsing to the correct port number: http://<IP_ADDRESS>:9090/graph/
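Once the graph page loads, you can try a query against the metrics cAdvisor is exporting. For example, the per-container CPU usage rate (the value of the name label depends on which containers you have running):

```
rate(container_cpu_usage_seconds_total{name="cadvisor"}[1m])
```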

Investigating cAdvisor

In a browser, navigate to http://<IP_ADDRESS>:8080/containers/. Take a peek around, then change the URL to one of our container names (like nginx) so we’re at http://<IP_ADDRESS>:8080/docker/nginx/.

If we run docker stats, we get output that looks a lot like docker ps, but it stays open and continuously reports the various aspects (CPU and memory usage, etc.) of our containers.

docker stats --format "table {{.Name}} {{.ID}} {{.MemUsage}} {{.CPUPerc}}"
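If you just want a single snapshot instead of the live view, docker stats also supports a --no-stream flag (shown here with tab separators in the format string):

```shell
# Print one sample for each container and exit, instead of streaming
docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"
```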

Regards 🤞😁

Osama

Dockerize a Flask Application

The Flask application is uploaded to my GitHub here.

I will dockerize the application above and show you the steps to do that.

Let’s Start 🤞

  • First, we’ll add the files we don’t want in the image to a .dockerignore file
vim .dockerignore

.dockerignore
Dockerfile
.gitignore
Pipfile.lock
migrations/
  • Write the Dockerfile, which is already included in the repo above: vim Dockerfile

FROM python:3

ENV PYBASE /pybase
ENV PYTHONUSERBASE $PYBASE
ENV PATH $PYBASE/bin:$PATH

RUN pip install pipenv

WORKDIR /tmp
COPY Pipfile .
RUN pipenv lock
RUN PIP_USER=1 PIP_IGNORE_INSTALLED=1 pipenv install -d --system --ignore-pipfile

COPY . /app/notes
WORKDIR /app/notes

EXPOSE 80
CMD ["flask", "run", "--port=80", "--host=0.0.0.0"]
  • Build and Test
docker build -t notesapp:0.1 .

docker run --rm -it --network notes -v /home/Osama/notes/migrations:/app/notes/migrations notesapp:0.1 bash

The commands above build the image and run the container. Once you are inside the container, configure the database.

  • Configure Database
flask db init

flask db migrate

flask db upgrade
  • Run and Upgrade
docker run --rm -it --network notes -p 80:80 notesapp:0.1
  1. In a web browser, navigate to the public IP address for the server, and log in to your account.
  2. Verify that you can create a new note.

Perfect, we are done now

Enjoy the learning 👍

Osama

How to enable Docker logging

Docker includes multiple logging mechanisms to help you get information from running containers and services. These mechanisms are called logging drivers. Each Docker daemon has a default logging driver, which each container uses unless you configure it to use a different logging driver, or “log-driver” for short.

Steps:

Configure Docker to use syslog

  • vim /etc/rsyslog.conf
In the file editor, uncomment the two lines under `Provides UDP syslog reception` by removing the leading `#`, so they read:

$ModLoad imudp

$UDPServerRun 514

Then

systemctl start rsyslog

  • Now that syslog is running, let’s configure Docker to use syslog as the default logging driver. We’ll do this by creating a file called daemon.json
sudo mkdir /etc/docker

vi /etc/docker/daemon.json

{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "udp://<PRIVATE_IP>:514"
  }
}
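A stray comma or quote in daemon.json will stop the Docker daemon from starting, so it’s worth validating the JSON before restarting. Here’s a quick sanity check with python3; the file below is an illustrative copy written to /tmp, with 10.0.0.5 standing in for your private IP. On the real host, point the validator at /etc/docker/daemon.json instead.

```shell
# Write an example copy of the config (10.0.0.5 stands in for <PRIVATE_IP>)
cat > /tmp/daemon.json <<'EOF'
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "udp://10.0.0.5:514"
  }
}
EOF

# json.tool exits non-zero on invalid JSON, so this catches typos
# before they prevent the Docker daemon from starting
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json is valid JSON"
```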

Then

systemctl start docker

Time to put it to use with Docker.

For example, the first method:

docker container run -d --name syslog-logging httpd

Check by

docker logs syslog-logging

Or

tail /var/log/messages

The second way is to set the logging driver per container:

docker container run -d --name json-logging --log-driver json-file httpd

Check

docker logs json-logging
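One thing worth knowing about the json-file driver: its log files grow without bound by default. It supports rotation options, which can be set daemon-wide in daemon.json (or per container with --log-opt); the values below are just examples:

```
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```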

Docker power 👌

Enjoy

Osama

Storing Container Data in Azure Blob Storage

This time, how to store your data in Azure Blob Storage 👍

Let’s start

Configuration

  • Obtain the Azure login credentials
az login
  1. Copy the code provided by the command.
  2. Open a browser and navigate to https://microsoft.com/devicelogin.
  3. Enter the code copied in a previous step and click Next.
  4. Use the login credentials from the lab page to finish logging in.
  5. Switch back to the terminal and wait for the confirmation.

Storage

  • Find the name of the Storage account
 az storage account list | grep name | head -1

Copy the name of the Storage account to the clipboard.

  • Export the Storage account name
 export AZURE_STORAGE_ACCOUNT=<COPIED_STORAGE_ACCOUNT_NAME>
  • Retrieve the Storage access key
az storage account keys list --account-name=$AZURE_STORAGE_ACCOUNT

Copy the key1 “value” for later use.

  • Export the key value
export AZURE_STORAGE_ACCESS_KEY=<KEY1_VALUE>
  • Install blobfuse
sudo rpm -Uvh https://packages.microsoft.com/config/rhel/7/packages-microsoft-prod.rpm
sudo yum install blobfuse fuse -y
  • Modify the fuse.conf configuration file
sudo sed -ri 's/# user_allow_other/user_allow_other/' /etc/fuse.conf
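As an alternative to exporting the two environment variables, blobfuse can read credentials from a config file passed with --config-file. This is a sketch; the filename is arbitrary, and the file should be readable only by its owner (chmod 600):

```
accountName <COPIED_STORAGE_ACCOUNT_NAME>
accountKey <KEY1_VALUE>
containerName website
```

You would then mount with blobfuse /mnt/Osama --tmp-path=/mnt/blobfusetmp --config-file=/home/cloud_user/fuse_connection.cfg -o allow_other.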

Use the Azure Blob Storage container

  • Create necessary directories
sudo mkdir -p /mnt/Osama /mnt/blobfusetmp
  • Change ownership of the directories
sudo chown cloud_user /mnt/Osama/ /mnt/blobfusetmp/
  • Mount the Blob Storage from Azure
blobfuse /mnt/Osama --container-name=website --tmp-path=/mnt/blobfusetmp -o allow_other
  • Copy the files you want into the Blob Storage container, for example website files.
 cp -r ~/web/* /mnt/Osama/
  • Verify the copy worked
ll /mnt/Osama/
  • Verify the files made it to Azure Blob Storage
az storage blob list -c website --output table
  • Finally, run a Docker container using the Azure Blob Storage mount
docker run -d --name web1 -p 80:80 --mount type=bind,source=/mnt/Osama,target=/usr/local/apache2/htdocs,readonly httpd:2.4

Enjoy 🎉😁

Osama

Docker compose example

What is docker compose ?

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.

Benefits of Docker Compose

  • Single host deployment – This means you can run everything on a single piece of hardware
  • Quick and easy configuration – Due to YAML scripts
  • High productivity – Docker Compose reduces the time it takes to perform tasks
  • Security – All the containers are isolated from each other, reducing the threat landscape

Just a quick post with an example docker-compose file, to show you how much more powerful this is than starting each container with its own docker command.

  1. Create a file called docker-compose.yml
version: '3'
services:
  ghost:
    image: ghost:1-alpine
    container_name: ghost-blog
    restart: always
    ports:
      - 80:2368
    environment:
      database__client: mysql
      database__connection__host: mysql
      database__connection__user: root
      database__connection__password: P4sSw0rd0!
      database__connection__database: ghost
    volumes:
      - ghost-volume:/var/lib/ghost
    depends_on:
      - mysql

  mysql:
    image: mysql:5.7
    container_name: ghost-db
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: P4sSw0rd0!
    volumes:
      - mysql-volume:/var/lib/mysql

volumes:
  ghost-volume:
  mysql-volume:

2. Run

docker-compose up -d
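To confirm both services came up and to watch Ghost connect to MySQL, you can use docker-compose’s built-in status and log commands:

```shell
# Show the state of the services defined in docker-compose.yml
docker-compose ps

# Follow the Ghost container's logs (Ctrl+C to stop)
docker-compose logs -f ghost
```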

Finished

Osama

Inspect Docker layer

In this post, I will share one way to find out the size of an image as well as its layers.

To do that, follow these steps:

  • Create two shell variables, one for the layers and the other for the size:
export showLayers='{{ range .RootFS.Layers }}{{ println . }}{{end}}'

export showSize='{{ .Size }}'

  • Check the size like below (.Size and .RootFS.Layers are image fields, so inspect the image, not the container)
docker inspect -f "$showSize" <image-name:tag>

  • Check the layers like below
docker inspect -f "$showLayers" <image-name:tag>
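The two templates can also be combined into a single call; `len` is available in Go templates, so you can print the size together with the layer count (a sketch):

```shell
# Print image size in bytes and the number of layers in one call
docker inspect -f '{{.Size}} bytes, {{len .RootFS.Layers}} layers' <image-name:tag>
```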

Regards

Osama

Storing container data in AWS S3

In this post I will discuss how to use an S3 bucket in Docker containers. Let’s start

Configuration and Installation

  • Install the awscli, upgrading any currently installed version, for the current user only
pip install --upgrade --user awscli
  • Configure the CLI:
 aws configure
  1. Enter the following:
    • AWS Access Key ID: <ACCESS_KEY_ID>
    • AWS Secret Access Key: <SECRET_ACCESS_KEY>
    • Default region name: <Region>
    • Default output format: json
  • Copy the CLI configuration to the root user
sudo cp -r ~/.aws /root
  • Install the s3fs package
sudo yum install s3fs-fuse -y

Configure Bucket

  • Create a mount point for the s3 bucket
sudo mkdir /mnt/application-website
  • Export the bucket name
export BUCKET=<S3_BUCKET_NAME>
  • Mount the S3 bucket
sudo s3fs $BUCKET /mnt/application-website -o allow_other -o default_acl=public-read -o use_cache=/tmp/s3fs
  • Verify that the bucket was mounted successfully
ll /mnt/application-website
  • Copy the website files to the s3 bucket
 cp -r ~/application-website/web/* /mnt/application-website
  • Verify the files are in the folder
ll /mnt/application-website
  • Verify the files are in the s3 bucket
aws s3 ls s3://$BUCKET
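To make the S3 mount survive a reboot, you could add an /etc/fstab entry; modern s3fs supports the fuse.s3fs filesystem type. A sketch, assuming credentials are available to root (e.g. via the copied ~/.aws config or an s3fs password file):

```
<S3_BUCKET_NAME> /mnt/application-website fuse.s3fs _netdev,allow_other,use_cache=/tmp/s3fs 0 0
```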

Run a container using the S3 volume

  • Run an httpd container using the S3 bucket
docker run -d --name web1 -p 80:80 --mount type=bind,source=/mnt/application-website,target=/usr/local/apache2/htdocs,readonly httpd:2.4
  • In a web browser, verify connectivity to the container
You can check the application at http://<Server-Public-IP>:80

Regards

Osama