BLOG

How to enable Docker logging

Docker includes multiple logging mechanisms to help you get information from running containers and services. These mechanisms are called logging drivers (or "log drivers" for short). Each Docker daemon has a default logging driver, which every container uses unless you configure it to use a different one.

Steps:

Configure Docker to use syslog

  • vim /etc/rsyslog.conf
In the file editor, uncomment the two lines under `Provides UDP syslog reception` by removing the leading `#`, so they read:

$ModLoad imudp

$UDPServerRun 514

Then

systemctl start rsyslog

  • Now that syslog is running, let’s configure Docker to use syslog as the default logging driver. We’ll do this by creating a file called daemon.json
sudo mkdir -p /etc/docker

vi /etc/docker/daemon.json

{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "udp://<PRIVATE_IP>:514"
  }
}
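A syntax error in daemon.json will keep the Docker daemon from starting, so it's worth sanity-checking the file before restarting Docker. Here's a minimal sketch in Python of that check, assuming the same structure as above (udp://10.0.0.5:514 is just a stand-in for your <PRIVATE_IP>):

```python
import json

# A daemon.json in the same shape as the file above; the address is a
# placeholder, not a real host.
daemon_config = """
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "udp://10.0.0.5:514"
  }
}
"""

# json.loads raises an error on any syntax problem (e.g. a trailing comma),
# which is exactly the kind of mistake that stops dockerd from starting.
config = json.loads(daemon_config)
print(config["log-driver"])  # syslog
```

In practice you can run the same check directly against the real file with `python3 -m json.tool /etc/docker/daemon.json`.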

Then

systemctl start docker

Time to use it with Docker.

For example, the first method:

docker container run -d --name syslog-logging httpd

Check with:

docker logs syslog-logging

Or

tail /var/log/messages

The second method, explicitly setting the log driver per container:

docker container run -d --name json-logging --log-driver json-file httpd

Check

docker logs json-logging

Docker power 👌

Enjoy

Osama

Storing Container Data in Azure Blob Storage

This time, how to store your data in Azure Blob Storage 👍

Let’s start

Configuration

  • Obtain the Azure login credentials
az login
  1. Copy the code provided by the command.
  2. Open a browser and navigate to https://microsoft.com/devicelogin.
  3. Enter the code copied in a previous step and click Next.
  4. Use the login credentials from the lab page to finish logging in.
  5. Switch back to the terminal and wait for the confirmation.

Storage

  • Find the name of the Storage account
 az storage account list | grep name | head -1

Copy the name of the Storage account to the clipboard.

  • Export the Storage account name
 export AZURE_STORAGE_ACCOUNT=<COPIED_STORAGE_ACCOUNT_NAME>
  • Retrieve the Storage access key
az storage account keys list --account-name=$AZURE_STORAGE_ACCOUNT

Copy the key1 “value” for later use.

  • Export the key value
export AZURE_STORAGE_ACCESS_KEY=<KEY1_VALUE>
  • Install blobfuse
sudo rpm -Uvh https://packages.microsoft.com/config/rhel/7/packages-microsoft-prod.rpm
sudo yum install blobfuse fuse -y
  • Modify the fuse.conf configuration file
sudo sed -ri 's/# user_allow_other/user_allow_other/' /etc/fuse.conf

Use Azure Blob container Storage

  • Create necessary directories
sudo mkdir -p /mnt/Osama /mnt/blobfusetmp
  • Change ownership of the directories
sudo chown cloud_user /mnt/Osama/ /mnt/blobfusetmp/
  • Mount the Blob Storage from Azure
blobfuse /mnt/Osama --container-name=website --tmp-path=/mnt/blobfusetmp -o allow_other
  • Copy the files you want into the Blob Storage container, for example website files.
 cp -r ~/web/* /mnt/Osama/
  • Verify the copy worked
ll /mnt/Osama/
  • Verify the files made it to Azure Blob Storage
az storage blob list -c website --output table
  • Finally, run a Docker container using the Azure Blob Storage mount
docker run -d --name web1 -p 80:80 --mount type=bind,source=/mnt/Osama,target=/usr/local/apache2/htdocs,readonly httpd:2.4

Enjoy 🎉😁

Osama

Docker compose example

What is Docker Compose?

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.

Benefits of Docker Compose

  • Single host deployment – This means you can run everything on a single piece of hardware
  • Quick and easy configuration – Due to YAML scripts
  • High productivity – Docker Compose reduces the time it takes to perform tasks
  • Security – All the containers are isolated from each other, reducing the threat landscape

This is just a quick post with a docker-compose file example, to show you how powerful this is compared to starting each container with its own docker run command.

  1. Create a file called docker-compose.yml
version: '3'
services:
  ghost:
    image: ghost:1-alpine
    container_name: ghost-blog
    restart: always
    ports:
      - 80:2368
    environment:
      database__client: mysql
      database__connection__host: mysql
      database__connection__user: root
      database__connection__password: P4sSw0rd0!
      database__connection__database: ghost
    volumes:
      - ghost-volume:/var/lib/ghost
    depends_on:
      - mysql

  mysql:
    image: mysql:5.7
    container_name: ghost-db
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: P4sSw0rd0!
    volumes:
      - mysql-volume:/var/lib/mysql

volumes:
  ghost-volume:
  mysql-volume:

2. Run

docker-compose up -d

Finished

Osama

Inspect Docker image layers

In this post, I will share one way to find the size of an image along with its layers.

To do that, follow these steps:

  • Create two shell variables, one for the layers and one for the size:
export showLayers='{{ range .RootFS.Layers }}{{ println . }}{{end}}'

export showSize='{{ .Size }}'

  • Check the size:
docker inspect -f "$showSize" <Image-name:tag>

  • Check the layers:
docker inspect -f "$showLayers" <Image-name:tag>
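For context, those Go templates just pull fields out of the JSON document that `docker inspect` prints: an array with one object per image, containing `.Size` and `.RootFS.Layers`. The same extraction can be sketched in Python against a trimmed, hypothetical sample of that output (the size and layer digests below are made up):

```python
import json

# Trimmed, made-up sample of `docker image inspect <image>` output:
# docker returns a JSON array with one object per inspected image.
inspect_output = """
[{"Size": 148000000,
  "RootFS": {"Type": "layers",
             "Layers": ["sha256:layer-one", "sha256:layer-two"]}}]
"""

image = json.loads(inspect_output)[0]

print(image["Size"])                    # what $showSize prints
for layer in image["RootFS"]["Layers"]:
    print(layer)                        # what $showLayers prints, line by line
```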

Regards

Osama

Storing container data in AWS S3

In this post I will discuss how to use an S3 bucket in Docker containers. Let's start!

Configuration and Installation

  • Install the awscli for the current user, upgrading any version already installed:
pip install --upgrade --user awscli
  • Configure the CLI:
 aws configure
  1. Enter the following:
    • AWS Access Key ID: <ACCESS_KEY_ID>
    • AWS Secret Access Key: <SECRET_ACCESS_KEY>
    • Default region name: <Region>
    • Default output format: json
  • Copy the CLI configuration to the root user
sudo cp -r ~/.aws /root
  • Install the s3fs package
sudo yum install s3fs-fuse -y

Configure Bucket

  • Create a mount point for the s3 bucket
sudo mkdir /mnt/application-website
  • Export the bucket name
export BUCKET=<S3_BUCKET_NAME>
  • Mount the S3 bucket
sudo s3fs $BUCKET /mnt/application-website -o allow_other -o default_acl=public-read -o use_cache=/tmp/s3fs
  • Verify that the bucket was mounted successfully
ll /mnt/application-website
  • Copy the website files to the s3 bucket
 cp -r ~/application-website/web/* /mnt/application-website
  • Verify the files are in the folder
ll /mnt/application-website
  • Verify the files are in the s3 bucket
aws s3 ls s3://$BUCKET

Run container using volume s3

  • Run an httpd container using the S3 bucket
docker run -d --name web1 -p 80:80 --mount type=bind,source=/mnt/application-website,target=/usr/local/apache2/htdocs,readonly httpd:2.4
  • In a web browser, verify connectivity to the container
You can check the application at <Server-Public-IP>:80

Regards

Osama

AWS ECS Project

This is another DevOps project; the idea of this project is the following:

A sample Django web application with the following specs:

  • The app should be production-ready, taking into consideration scalability, availability, and security.
  • The infrastructure to run this application is up to you, but it should be automated via Terraform or CloudFormation. The AWS Well-Architected Framework will be used to evaluate the infrastructure as a whole.
  • CI/CD pipeline
  • Harden the application for a production-ready environment.

The complete project is uploaded to my GitHub HERE.

Thank you

Enjoy the automation

Osama

Case study for a software architect

Problem Description


We have two separate applications that we would like to integrate together. One is a WYSIWYG application for generating static websites. The other is an admin application for managing an online shopping site. We would like to be able to use the features of the Website Builder to design pages in the Webshop. In addition, we would also like to be able to manage product details (name, price, images, etc.) while updating Webshop pages in the Website Builder.

Website Builder Details

The Website Builder is a single page app written in React. It is mostly served by a monolithic backend with a few services for select features. The app follows a component-driven architecture using Redux for application state management. Each static page in a user's website is composed of components. Each component is responsible for rendering the view within its container and for supplying the callbacks for displaying its settings panel. The settings panel is unique per component but may share individual controls for certain settings (e.g., background color, fonts).


When the user is ready to publish their site, the publication service will generate static assets for each page. The Webshop is one component in the Website Builder. When a Webshop is included on a page, a JavaScript snippet is included in the generated HTML.

Webshop Details


The Webshop has two parts. The admin portion is a single page app written in KnockoutJS; it is in the process of being rewritten in React. The second portion is the public-facing shop front, also written in KnockoutJS. The admin application lists products, orders, and other management details. The Webshop backend is quite similar to the Website Builder's: monolithic aside from a few minor services for certain features.

The documentation is HERE

Cheers


Osama