Docker compose example

What is Docker Compose?

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.

Benefits of Docker Compose

  • Single host deployment – This means you can run everything on a single piece of hardware
  • Quick and easy configuration – Due to YAML scripts
  • High productivity – Docker Compose reduces the time it takes to perform tasks
  • Security – All the containers are isolated from each other, reducing the threat landscape

This is a quick post with a docker-compose file example, to show you how much more powerful this is than running a series of docker commands by hand.

  1. Create a file called docker-compose.yml
version: '3'
services:
  ghost:
    image: ghost:1-alpine
    container_name: ghost-blog
    restart: always
    ports:
      - 80:2368
    environment:
      database__client: mysql
      database__connection__host: mysql
      database__connection__user: root
      database__connection__password: P4sSw0rd0!
      database__connection__database: ghost
    volumes:
      - ghost-volume:/var/lib/ghost
    depends_on:
      - mysql

  mysql:
    image: mysql:5.7
    container_name: ghost-db
    restart: always
    environment:
      # required by the mysql image; must match the ghost password above
      MYSQL_ROOT_PASSWORD: P4sSw0rd0!
    volumes:
      - mysql-volume:/var/lib/mysql

volumes:
  ghost-volume:
  mysql-volume:


  2. Run

docker-compose up -d
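One thing worth tightening in the file above: the database password sits in plain text inside docker-compose.yml. A minimal sketch of moving it into a .env file, which docker-compose loads automatically from the project directory (the variable name DB_PASSWORD is my own choice, not from the original file):

```shell
# A sketch: keep the DB password out of docker-compose.yml via a .env file.
# docker-compose automatically loads .env from the project directory.
cat > .env <<'EOF'
DB_PASSWORD=P4sSw0rd0!
EOF

# In docker-compose.yml you would then reference it as:
#   database__connection__password: ${DB_PASSWORD}
grep -q '^DB_PASSWORD=' .env && echo "env file ready"
```

Remember to add .env to .gitignore so the password stays out of version control.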



Inspect Docker layers

In this post, I will share one way to find the size of an image as well as the layers it is built from.

To do that, follow these steps:

  • Create two shell variables, one for the layers and one for the size:
export showLayers='{{ range .RootFS.Layers }}{{ println . }}{{end}}'

export showSize='{{ .Size }}'

  • Check the size like the below
docker inspect -f "$showSize" <Image-name:tag>

  • Check the layers like the below
docker inspect -f "$showLayers" <Image-name:tag>
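Note that `.Size` comes back in raw bytes. A small awk sketch to turn it into something readable (the byte count below is a made-up stand-in for whatever your image actually reports):

```shell
# .Size is reported in bytes; convert to MB with awk.
# 133175282 is a stand-in value -- in practice pipe in the real output:
#   docker inspect -f "$showSize" <Image-name:tag>
size_bytes=133175282
echo "$size_bytes" | awk '{ printf "%.1f MB\n", $1 / 1000000 }'
# -> 133.2 MB
```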



Storing container data in AWS S3

In this post, I will discuss how to use an S3 bucket with Docker containers. Let's start.

Configuration and Installation

  • Install the awscli for the current user, upgrading any version already installed:
pip install --upgrade --user awscli
  • Configure the CLI:
 aws configure
  1. Enter the following:
    • AWS Access Key ID: <ACCESS_KEY_ID>
    • AWS Secret Access Key: <SECRET_ACCESS_KEY>
    • Default region name: <Region>
    • Default output format: json
  • Copy the CLI configuration to the root user
sudo cp -r ~/.aws /root
  • Install the s3fs package
sudo yum install s3fs-fuse -y
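For context, `aws configure` stores those four answers in two plain-text files under ~/.aws, which is why the whole directory is copied to the root user (s3fs is run via sudo). A sketch of their layout, with placeholder values rather than real keys:

```shell
# Sketch of the files `aws configure` writes (placeholder values, not real keys).
mkdir -p /tmp/aws-demo/.aws
cat > /tmp/aws-demo/.aws/credentials <<'EOF'
[default]
aws_access_key_id = <ACCESS_KEY_ID>
aws_secret_access_key = <SECRET_ACCESS_KEY>
EOF
cat > /tmp/aws-demo/.aws/config <<'EOF'
[default]
region = <Region>
output = json
EOF
grep -q 'aws_access_key_id' /tmp/aws-demo/.aws/credentials && echo "config written"
```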

Configure Bucket

  • Create a mount point for the s3 bucket
sudo mkdir /mnt/application-website
  • Export the bucket name
export BUCKET=<Bucket-name>
  • Mount the S3 bucket
sudo s3fs $BUCKET /mnt/application-website -o allow_other -o default_acl=public-read -o use_cache=/tmp/s3fs
  • Verify that the bucket was mounted successfully
ll /mnt/application-website
  • Copy the website files to the s3 bucket
 cp -r ~/application-website/web/* /mnt/application-website
  • Verify the files are in the folder
ll /mnt/application-website
  • Verify the files are in the s3 bucket
aws s3 ls s3://$BUCKET
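Note that the s3fs mount above does not survive a reboot. If you want it to persist, one common approach (a sketch; the bucket name is a placeholder) is an /etc/fstab entry using the fuse.s3fs type:

```
# /etc/fstab -- remount the S3 bucket at boot (<Bucket-name> is a placeholder)
<Bucket-name> /mnt/application-website fuse.s3fs _netdev,allow_other,use_cache=/tmp/s3fs 0 0
```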

Run container using volume s3

  • Run an httpd container using the S3 bucket
docker run -d --name web1 -p 80:80 --mount type=bind,source=/mnt/application-website,target=/usr/local/apache2/htdocs,readonly httpd:2.4
  • In a web browser, verify connectivity to the container
You can check the application at <Server-Public-IP>:80



AWS ECS Project

This is another DevOps project. The idea of the project is the following:

A sample Django web application with the following specs:

  • The app should be production-ready, taking into consideration things such as scalability, availability & security.
  • The infrastructure to run this application is up to you, but it should be automated via Terraform or CloudFormation. The Well-Architected Framework will be used to evaluate the infrastructure as a whole.
  • CI/CD pipeline
  • Harden the application for a production-ready environment.

The complete project is uploaded to my GitHub HERE.

Thank you

Enjoy the automation


DevOps Project – Complete auto deployment and IaC

The following project has these requirements:

  • I want a CI/CD pipeline for my application; the pipeline must build and test the application code base.
  • The pipeline must build and push a Docker container ready to use.
  • The pipeline must deploy the application across different environments on the target infrastructure.
  • Separate the backend and the frontend in different pipelines and containers.

Other things to add to this project are the following:

  • The infrastructure must be created on the cloud; for the purpose of the assignment, any public cloud can be used.
  • The deployment pipeline must use infrastructure as code using Terraform
  • The delivered infrastructure must be monitored and audited.
  • The delivered infrastructure must allow multiple personal accounts.
  • The delivered infrastructure must be able to scale automatically.
  • Modify the application to use a real database running on the cloud, instead of the in-memory database.

The link for the project is HERE.

Enjoy the automation


Ansible provisioning for instances (Cloud Version) using pipelines

Imagine you have multiple instances and you want to change something. Doing this manually takes time, so why not automate the process?

I uploaded one of my projects to automate the process. It handles even the simplest tasks; for example, when a new employee joins and you need to add their SSH key to your instances (you can even choose which VMs they can access), you just add the key in the roles and configure the pipeline on your repo, and the code will run automatically.
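As a rough sketch of what such a role task could look like, Ansible's `authorized_key` module covers the SSH-key case described above (the user variable and key file name here are hypothetical, not taken from the repo):

```yaml
# Hypothetical role task: add a new employee's public key for a given user.
# target_user and files/new_employee.pub are placeholder names.
- name: Add SSH key for new employee
  ansible.posix.authorized_key:
    user: "{{ target_user }}"
    key: "{{ lookup('file', 'files/new_employee.pub') }}"
    state: present
```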

I uploaded the project to my GitHub HERE.


Enjoy the power of automation.