Configure LB Using Nginx

Happy New Year Everyone

This is the first blog post of 2020. I wish everyone a healthy and wonderful year; may your dreams come true.

I posted recently, or let's say last year, about a full automation project using DevOps tools, and to be honest I didn't expect that many downloads and questions on that post. You can read it here.

I decided to create a new project, but this time to show the power of DevOps and how to use it more in your daily job, tasks, or even configuration.

The idea of this project is the following:

  • You have two codebases: one Go-based application and one Java-based application. Both provide an HTTP service with the same endpoints.
  • The endpoints are:

| Route | Description |
|---|---|
| / | A static site. Should not appear in the final setup as it is, but redirect to /hotels. |
| /hotels | JSON object containing hotel search results |
| /health | Exposes the health status of the application |
| /ready | Readiness probe |
| /metrics | Exposes metrics of the application |

We have to set up a load balancer for this application, as follows:

Traffic distribution should be as follows: 70% of the requests go to the application written in Go and 30% go to the application written in Java. I will do it using Docker and Nginx; a sketch of the weighted upstream configuration follows.
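Here is a minimal sketch of what the weighted rules in load-balancer/nginx.conf could look like. The service names go-app and java-app and port 8080 are assumptions based on the directory layout described below, not the exact repository contents:

events {}

http {
    # weights give roughly 70% of requests to Go and 30% to Java
    upstream hotels_backend {
        server go-app:8080   weight=7;
        server java-app:8080 weight=3;
    }

    server {
        listen 80;

        # "/" must not serve the static site but redirect to /hotels
        location = / {
            return 301 /hotels;
        }

        location / {
            proxy_pass http://hotels_backend;
        }
    }
}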

I uploaded the code and the application (both parts, the Go application and the Java one) to my GitHub HERE; all the configuration has been uploaded there as well.

The solution files are as follows:

  • The docker-compose.yml file in the root directory is the main compose file for setting up containers for all services (a minimal sketch follows this list)
  • The go-app directory contains the binary of the Golang application and the Dockerfile of the relevant setup
  • The java-app directory contains the binary of the Java application and the Dockerfile of the relevant setup
  • The load-balancer directory contains the nginx.conf file, which is the Nginx configuration file with the load-balancer rules written in it. It also contains a Dockerfile for setting up Nginx with the defined configuration
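A minimal sketch of that docker-compose.yml, assuming the service names match the directory names and that Nginx listens on port 80 (check the repository for the exact file):

version: "3"
services:
  go-app:
    build: ./go-app
  java-app:
    build: ./java-app
  load-balancer:
    build: ./load-balancer
    ports:
      - "80:80"
    depends_on:
      - go-app
      - java-app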

The final architecture will look like this instead of the image you saw above.

Enjoy, and Happy New Year!

Osama Mustafa (the guy known as OsamaOracle)

Complete Automation DevOps Project Deployed on Kubernetes

## Problem definition

The aim of this test is to create a simple HTTP service that stores and returns configurations that satisfy certain conditions. Since I love automating things, the service should be automatically deployed to Kubernetes.

You can read more about the project once you access my GitHub, using the README.md; I explained the project step by step, and the documentation explains everything.

The code has been uploaded to GitHub; in addition, the documentation has been uploaded to SlideShare.

The code configuration is here.

The documentation is here.

Enjoy

Osama

Build, Deploy and Run Node Js Application on Azure using Docker

This documentation explains step by step how to build, deploy, and run a Node.js application on Azure using Docker.

The idea came when one of the customers asked us to do this automation for them, and they already had an application written in Node.js. Since I can't post the client's code here, I searched online and found this sample instead of using the actual code 😅

Now, the reader should have some knowledge of Azure Cloud; this document will guide you through creating and working on Azure, but you still have to understand Azure Cloud concepts, plus basic knowledge of Node.js and how to write a Dockerfile. I provided everything on my GitHub here; the code is a sample that used to be deployed on Heroku, but it can still be deployed on Azure using the documentation 🤔

The documentation is uploaded to my SlideShare here.


Cheers

Osama


Using terraform to build AWS environment

What is Terraform?

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.

It is made by HashiCorp, an AWS Partner Network (APN) Advanced Technology Partner and member of the AWS DevOps Competency.

To configure Terraform, you create files with the extension ".tf", like the following:

  • Providers: A provider is responsible for understanding API interactions and exposing resources. Providers generally are IaaS (e.g. AWS, GCP, Microsoft Azure, OpenStack), PaaS (e.g. Heroku), or SaaS services (e.g. Terraform Enterprise, DNSimple, CloudFlare).
  • Resources: The resource block creates a resource of the given TYPE (first parameter) and NAME (second parameter). The combination of the type and name must be unique. Within the block (the { }) is the configuration for the resource; the configuration depends on the type and is documented for each resource type in the providers section.
  • Variables: Defined by stating a name, a type, and a default value. However, the type and default value are not strictly necessary; Terraform can deduce the type of the variable from the default or input value.
  • VPC: Used to define the security groups, subnets, and ports in the AWS environment.

In this post I will do the following with Terraform (you have to create and sign up for an AWS account so you will be able to test this code and use Terraform):

  • Creating a private subnet
  • Creating a public subnet
  • An SSH bastion on the public subnet only
  • Adding two EC2 instances to the private subnet

Let's start. As mentioned earlier, you should have four files: provider.tf, resource.tf, variables.tf, and vpc.tf.

provider.tf

As you can see from the file below, it contains our cloud provider; the region depends on a variable that will be defined later.

# Define AWS as our provider
provider "aws" {
  region = "${var.aws_region}"}

resource.tf

The resource file is where I create the SSH key. There are different ways to create it; in my case I used PuTTYgen, copied the key over here, and saved the public/private keys so I can use them later. The other way is to generate it automatically. Then I define which AMI will be used for the servers/EC2 instances created in AWS; the ID for this AMI is defined in the variables file.

# Define SSH key pair for the instances
resource "aws_key_pair" "default" {
  key_name = "terraform_key"
  public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAtGCFXHeo9igKRzm3hNG5WZKXkQ3/NqQc1WPN8pSrEb0ZjW8mTKXRWzePuVYXYP9txqEKQmJJ1bk+pYX/zDdhJg/yZbZGH4V0LvDY5X5ndnAjN6CHkS6iK2EK1GlyJs6fsa+6oUeH23W2w49GHivSsCZUZuaSwdORoJk9QLeJ7Qz+9YQWOk0efOr+eIykxIDwR71SX5X65USbR8JbuT2kyrp1kVKsmPMcfy2Ehzd4VjShlHsZZlbzKTyfgaX/JJmYXO5yD4VLSjY8BVil4Yq/R9Tkz9pFxCG230XdFWCFEHSqS7TIDFbhPkp18jna6P6hlfNb9WM2gVZbYvr1MMnAVQ== rsa-key-20190805"
}
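As an alternative to pasting the key inline, note that variables.tf below defines a key_path variable; a small sketch of how the same key pair could read the key from that path instead:

# Alternative: read the public key from the path defined in variables.tf
resource "aws_key_pair" "default" {
  key_name   = "terraform_key"
  public_key = "${file(var.key_path)}"
}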

# Define a server 1 inside the public subnet
resource "aws_instance" "public_server" {
   ami  = "${var.ami}"
   instance_type = "t1.micro"
   key_name = "${aws_key_pair.default.id}"
   subnet_id = "${aws_subnet.public-subnet.id}"
   vpc_security_group_ids = ["${aws_security_group.sgpub.id}"]
   associate_public_ip_address = true
   source_dest_check = false
   user_data = "${file("userdata.sh")}"

  tags = {
    Name = "public_server"
  }
}

# Define database inside the private subnet
resource "aws_instance" "private_server1" {
   ami  = "${var.ami}"
   instance_type = "t1.micro"
   key_name = "${aws_key_pair.default.id}"
   subnet_id = "${aws_subnet.private-subnet.id}"
   vpc_security_group_ids = ["${aws_security_group.sgpriv.id}"]
   associate_public_ip_address = true
   source_dest_check = false
   user_data = "${file("userdata.sh")}"

  tags = {
    Name = "private_server1"
  }
}

# Define database inside the private subnet
resource "aws_instance" "private_server2" {
   ami  = "${var.ami}"
   instance_type = "t1.micro"
   key_name = "${aws_key_pair.default.id}"
   subnet_id = "${aws_subnet.private-subnet.id}"
   vpc_security_group_ids = ["${aws_security_group.sgpriv.id}"]
   associate_public_ip_address = true
   source_dest_check = false
   user_data = "${file("userdata.sh")}"

  tags = {
    Name = "private_server2"
  }
}

variables.tf

As you can see below, the variables file is where I defined all the information, such as the AWS region, the subnets that will be used, the AMI ID (you can find it by accessing the AWS console and copying the ID), and finally the SSH public key path on my server/EC2.

variable "aws_region" {
  description = "Region for the VPC"
  default = "ap-southeast-1"
}

variable "vpc_cidr" {
  description = "CIDR for the VPC"
  default = "10.0.0.0/16"
}

variable "public_subnet_cidr" {
  description = "CIDR for the public subnet"
  default = "10.0.1.0/24"
}

variable "private_subnet_cidr" {
  description = "CIDR for the private subnet"
  default = "10.0.2.0/24"
}

variable "ami" {
  description = "Amazon Linux AMI"
  default = "ami-01f7527546b557442"
}

variable "key_path" {
  description = "SSH Public Key path"
  default = "~/.ssh/id_rsa.pub"
}

vpc.tf

This file defines everything related to the network: security groups and subnets in the AWS cloud. As you can see from the file, I assigned the public EC2 instance to the public subnet of the VPC and the two private EC2 instances to the private subnet. Then I configured which ports will be open on the public subnet: SSH (22), HTTP (80), HTTPS (443), and ICMP. The same goes for the private subnet, but I open the connection between public and private using SSH, which means you can reach the private servers only through the public server; this is done by the subnet rule. I also open the MySQL port, 3306.

# Define our VPC
resource "aws_vpc" "default" {
  cidr_block = "${var.vpc_cidr}"
  enable_dns_hostnames = true

  tags  ={
    Name = "test-vpc"
  }
}

# Define the public subnet
resource "aws_subnet" "public-subnet" {
  vpc_id = "${aws_vpc.default.id}"
  cidr_block = "${var.public_subnet_cidr}"
  availability_zone = "ap-southeast-1a"

  tags =  {
    Name = "PublicSubnet"
  }
}

# Define the private subnet
resource "aws_subnet" "private-subnet" {
  vpc_id = "${aws_vpc.default.id}"
  cidr_block = "${var.private_subnet_cidr}"
#  availability_zone = "ap-southeast-1"

  tags =  {
    Name = "Private Subnet"
  }
}

# Define the internet gateway
resource "aws_internet_gateway" "gw" {
  vpc_id = "${aws_vpc.default.id}"

  tags =  {
    Name = "VPC IGW"
  }
}

# Define the route table
resource "aws_route_table" "web-public-rt" {
  vpc_id = "${aws_vpc.default.id}"

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.gw.id}"
  }

  tags =  {
    Name = "PublicSubnetRT"
  }
}

# Assign the route table to the public Subnet
resource "aws_route_table_association" "web-public-rt" {
  subnet_id = "${aws_subnet.public-subnet.id}"
  route_table_id = "${aws_route_table.web-public-rt.id}"
}

# Define the security group for public subnet
resource "aws_security_group" "sgpub" {
  name = "vpc_test_pub"
  description = "Allow incoming HTTP connections & SSH access"

  ingress {
    from_port = 80
    to_port = 80
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port = 443
    to_port = 443
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

# The ICMP packet does not have source and destination port numbers because it was designed to 
# communicate network-layer information between hosts and routers, not between application layer processes.

  ingress {
    from_port = -1
    to_port = -1
    protocol = "icmp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port = 22
    to_port = 22
    protocol = "tcp"
    cidr_blocks =  ["0.0.0.0/0"]
  }
  egress {
    from_port       = 0
    to_port         = 0
    protocol        = "-1"
    cidr_blocks     = ["0.0.0.0/0"]
  }

  vpc_id="${aws_vpc.default.id}"

  tags =  {
    Name = "Public Server SG"
  }
}

# Define the security group for private subnet
resource "aws_security_group" "sgpriv"{
  name = "sg_test_web"
  description = "Allow traffic from public subnet"

# You can delete this port; I added it here to make it like a real environment
  ingress {
    from_port = 3306
    to_port = 3306
    protocol = "tcp"
    cidr_blocks = ["${var.public_subnet_cidr}"]
  }

  ingress {
    from_port = -1
    to_port = -1
    protocol = "icmp"
    cidr_blocks = ["${var.public_subnet_cidr}"]
  }

  ingress {
    from_port = 22
    to_port = 22
    protocol = "tcp"
    cidr_blocks = ["${var.public_subnet_cidr}"]
  }

  vpc_id = "${aws_vpc.default.id}"

  tags =  {
    Name = "PrivateServerSG"
  }
}

userdata.sh

This file contains the commands that Terraform passes to the instances as user data at launch.

#!/bin/sh
set -x
# output log of userdata to /var/log/user-data.log
exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1
yum install -y httpd
service httpd start
chkconfig httpd on
echo "<html><h3>Welcome to the Osama Website</h3></html>" > /var/www/html/index.html

Once you prepare these files and upload them to a folder called "terraform-example", Terraform will create free-tier AWS EC2 machines for you; with these files, the output will be the following:

  • Three EC2 servers with the following names: private_server1, private_server2, and public_server
  • Two security groups, one for the public subnet and one for the private subnet
  • A key pair called terraform_key
  • The AWS region will be Singapore (ap-southeast-1)

Now run the following command:

terraform plan

Wait for the output; the command should run successfully without any errors. You can add the "-out=FILE" attribute, which saves the generated plan to a file so that terraform apply can later execute exactly that plan.

terraform apply

The above command will apply everything to the AWS environment, letting you create this whole environment in less than a minute.
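Putting the whole workflow together (note that terraform init must run first in the working directory to download the AWS provider; the plan file name tfplan is just an example):

terraform init              # download the AWS provider plugins
terraform plan -out=tfplan  # preview the changes and save the plan to a file
terraform apply tfplan      # apply exactly the saved plan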

Amazing, huh? This is called infrastructure as code.

The files uploaded to my Github here

Cheers

Osama

The Ultimate guide to DevOps Tools Part #4 : Docker

In this series about the DevOps tools that help you as a DBA automate your work and make it easier, this will be the last part about Docker.

In this post I will show how to pull from and connect to the Oracle repository in the simplest way.

The first step, before doing anything else: you are supposed to register on the Oracle Container Registry website here.

After the registration is complete, go back to the Docker machine and work through the following steps; a sketch of the commands is below:
  • Log in to the registry with your account information.
  • Choose which product you will pull and run the pull command; this step will take some time until the download finishes.
  • Check the image.
  • Start the image; Docker will show the Oracle database log as it starts.
  • Access the container.
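A sketch of what these steps look like on the command line. The registry hostname is Oracle's public container registry; the database/enterprise:12.2.0.1 image path and the oracledb container name are assumptions, since the exact product and tag depend on what you choose on the website:

docker login container-registry.oracle.com
docker pull container-registry.oracle.com/database/enterprise:12.2.0.1
docker images                                   # check the image
docker run -d --name oracledb container-registry.oracle.com/database/enterprise:12.2.0.1
docker logs -f oracledb                         # watch the Oracle database log
docker exec -it oracledb bash                   # access the container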
Cheers 🍻
Osama 

The Ultimate guide to DevOps Tools Part #3 : Docker

Before we start, please review the two previous posts:

  • The Ultimate guide to DevOps Tools Part #1: Docker here
    • Talking about the Docker concept and how to install it.
  • The Ultimate guide to DevOps Tools Part #2: Docker here
    • How to build your first application using Docker.

In this post I will talk about Docker services. As already mentioned, this post describes the next level of Docker, services, which means scaling the application and enabling load balancing.
When will I need to create a service?

From the Docker documentation here:

To deploy an application image when Docker Engine is in swarm mode, you create a service. Frequently a service is the image for a microservice within the context of some larger application. Examples of services might include an HTTP server, a database, or any other type of executable program that you wish to run in a distributed environment.
But first, let's understand: what is a Docker service?
Simply put, it's a group of containers of the same image. Services will make your life easier when you are planning to scale your application, and they also work on Docker Cloud. To do that, you configure the service in three steps:
  • Choose a container image.
  • Configure the service.
  • Set environment variables.
Before configuring any Docker service, know that there is a file called "docker-compose.yml" that defines how the Docker containers will behave in the environment.
The example below shows what the file looks like (taken from the Docker documentation). At first glance you may not understand any of it, but luckily it's very simple and easy to understand.

version: "3"
services:
  web:
    # replace username/repo:tag with your name and image details
    image: username/repo:tag
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    ports:
      - "4000:80"
    networks:
      - webnet
networks:
  webnet:

Let's discuss the above file:
  • username/repo:tag should be replaced with your image information.
  • Run the same image as 5 instances of a service named "web".
  • Limit each instance to 10% of a CPU and 50 MB of memory.
  • Restart instances on failure.
  • Map port 4000 outside of Docker to port 80 inside the image.
  • The load balancer is also mapped on port 80, as you can see from the network section, called webnet.
The real work:
Before anything, you should run the following command to be able to work on and deploy your services:

docker swarm init 

But in case you face an issue here, you have to upgrade your Docker (method #1) or uninstall it and then install the newer version. After doing that, run the above command again to make sure you do not get an error like "this node is not a swarm manager." You can then run the next command, which creates the services:
docker stack deploy -c docker-compose.yml myfirstapp

where myfirstapp is my application name.
Get the ID for the one service in our application:
docker service ls

Search for our deployed service, named web, prefixed with the application name myfirstapp; it will appear as myfirstapp_web.

Now that your application is scaled, run

curl -4 http://localhost:4000

several times in a row, or go to that URL in your browser and hit refresh a few times.
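To scale the app further, change the replicas value in docker-compose.yml and redeploy in place; Docker performs an in-place update without tearing the stack down first. When you are done, you can remove the stack and leave the swarm:

docker stack deploy -c docker-compose.yml myfirstapp   # redeploy after editing replicas
docker stack rm myfirstapp                             # take the app down
docker swarm leave --force                             # take down the swarm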
Cheers   🍻🍻
Osama

The Ultimate guide to DevOps Tools Part #2 : Docker

This article continues the basics of Docker, which is considered one of the DevOps tools. After finishing this series I will choose another tool that can help DBAs automate their work.

In this post I will show you how to build your first application using Docker. Without Docker, if you want to program in some language, you first have to install that language on your PC and test your code in your development environment, and of course production has to be ready so you can sync and test your code again there. That seems like a lot of work. 😥

But now, with Docker, you just pull/grab an image, no installation needed, and run your code on that image. 🎉

But how can we control what happens inside the environment? Access to resources like networking interfaces and disk drives is virtualized inside this environment, which is isolated from the rest of your system. All of this is driven by something called a Dockerfile.

The following example is taken from the Docker documentation:

# Use an official Python runtime as a parent image
FROM python:2.7-slim

# Set the working directory to /app
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt

# Make port 80 available to the world outside this container
EXPOSE 80

# Define environment variable
ENV NAME World

# Run app.py when the container launches
CMD ["python", "app.py"]

As you can see from the above example, the code is explained in the comments. Built on the official Python runtime, this Dockerfile sets the working directory, copies the current directory into the container, installs the required packages, exposes port 80, and then runs app.py.

app.py (very simple code):

# A very simple application
print("Goodbye, World!")


Now you have the Dockerfile and the app.py file in the directory (plus a requirements.txt, which can be empty, since the Dockerfile installs from it); then run the build command. This creates a Docker image:

docker build -t test .

Check by

$ docker image ls

 Run the app, mapping your machine’s port 4000 to the container’s published port 80 using -p:

docker run -p 4000:80 test

Once the above command runs, the log indicates that you can test your code at http://localhost:80, but that is only from inside Docker; to test it from outside Docker, the mapped port is http://localhost:4000. Note that the one-line app.py above just prints and exits, so the port mapping only matters if app.py actually serves HTTP; a sketch of an HTTP-serving variant follows.
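A minimal sketch using Flask, which would also need flask listed in requirements.txt (this mirrors the original Docker tutorial rather than the one-line script above):

# app.py — an HTTP-serving variant so the -p 4000:80 mapping is testable
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Goodbye, World!"

if __name__ == "__main__":
    # listen on all interfaces on port 80, matching EXPOSE 80 in the Dockerfile
    app.run(host="0.0.0.0", port=80)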

Cheers 👌
Osama Mustafa

The Ultimate guide to DevOps Tools Part #1 : Docker

I will try to cover the Docker basics in different posts to let people and readers understand more about this tool; I will also provide references in each post in case you need more information:

  • Set up your Docker environment
  • Build an image and run it as one container
  • Scale your app to run multiple containers
  • Distribute your app across a cluster
  • Stack services by adding a backend database
  • Deploy your app to production
Docker Concept:
To understand Docker, imagine it as a computer inside your current computer. The coolest thing about Docker is that you will not even feel there is another computer running inside yours and sharing its resources. On top of that, if your friend asks for the same container, all you have to do is send it to them, and they will get the same output for anything running in that container.
Why should I use Docker when there are similar solutions?
  • Very simple to configure.
    • Docker provides the capabilities of a virtual machine without the overhead of running one.
  • Code management
    • Docker provides a consistent environment for the application from dev through production, easing the code development and deployment pipeline.
  • App Isolation.
  • Server Consolidation.
There are more reasons to use Docker than these, but I chose to mention the ones I have used Docker for, since it is more reliable and trustworthy to share something I have already done and used before.
Basic vocabulary you should understand before using Docker:
  • Container vs. image
    • This is a very common question for people using Docker: what is the difference between a container and an image? The answer is very simple. A container runs the image, but not vice versa; the container is launched by running an image, and the image is an executable package that includes everything you can imagine needed to run the application, such as libraries, code, etc.
  • Containers vs. virtual machines
    • I mentioned earlier that a container could be seen as a computer inside your computer: it runs on your operating system without any third-party solution or client, shares the resources of your PC, runs as a discrete process, and takes no more memory than any other executable, making it lightweight.
    • A VM is a totally different solution that can be installed in two different ways: the first installs a client that controls the server resources using software such as VMware ESXi, and the native way, for example VMware Workstation, installs directly on the guest PC.
First example with Docker

  • Install Docker; it can be installed on different operating system distributions, you can check here
    • yum install docker-engine
    • service docker start
  • To check the current version of Docker:
    • docker --version

[oracle@dockertest ~]$ docker --version
Docker version 1.6.1, build a8a31ef/1.6.1
[oracle@dockertest ~]$

  • If you need more information about the Docker installation on your system.
  • If you want to test that your installation works correctly, without any issues.
  • The last useful command lists your images; an image, as I already mentioned, is an executable package to run your code, and each image has different executable files depending on your Docker purpose.
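A minimal sketch of the commands behind those bullets (all standard Docker CLI):

docker info                  # detailed information about the Docker installation
docker run hello-world       # verify the installation works end to end
docker image ls              # list all the images on your machine
docker image ls hello-world  # list only the hello-world image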

The first listing command shows all the images on your machine; the second one lists only the hello-world image that was downloaded to your machine.

Cheers
I will update you with part 2 soon.

Osama

Steps to Create Linux Container

After my last post about the Docker project, I started testing Linux Containers, but this time LXC, which is another amazing package that should be used. If you installed everything and followed my steps in the previous post here, then you can use the lxc commands without suffering through any installation 🙂

LXC stands for Linux Containers.

Now I will describe how to create a container using this command.

  • You need to know that when LXC is installed, it creates a file under /etc/lxc/ called default.conf; this file should contain your network interface name, and you need to add it.

[root@OEL6 ~]# cat /etc/lxc/default.conf
lxc.network.type = veth
lxc.network.link = docker0
lxc.network.flags = up

Sample output for ifconfig command :

docker0   Link encap:Ethernet  HWaddr 00:00:00:00:00:00
          inet addr:172.17.42.1  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::28ce:eeff:fe80:1fc8/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:17 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1112 (1.0 KiB)  TX bytes:408 (408.0 b)

I used the docker0 interface because I had already installed it and configured the IP address. If you don't modify this file, you will receive an error when you start the container:

[root@OEL6 container]# lxc-start --name test-container
lxc-start: failed to attach 'vethaITNmu' to the bridge 'virbr0' : No such device
lxc-start: failed to create netdev
lxc-start: failed to create the network
lxc-start: failed to spawn 'test-container'

Anyway, after adding the right interface name to the file, you will be able to run and start working on LXC; just follow the steps below:
  • Before creating any container, you need to create a directory under /; if you skip this step, another error will appear asking for this directory and telling you /container was not found.

mkdir /container will solve the problem.

Now let's start creating a new container called test-container:

[root@OEL6 lxc]# lxc-create -n test-container -t oracle -- -R 6.5

lxc-create: No config file specified, using the default config /etc/lxc/default.conf
Host is OracleServer 6.5
Create configuration file /container/test-container/config
Downloading release 6.5 for x86_64
Loaded plugins: refresh-packagekit, security
ol6_u5_base                                                     | 1.4 kB     00:00
ol6_u5_base/primary                                             | 3.2 MB     00:32
ol6_u5_base                                                                  8573/8573
Setting up Install Process

The above command will take a while to finish the configuration; it installs some packages needed by the container. You can have more than one container, each with a different name.
After the creation is done, you should notice the lines below:

Complete!
Rebuilding rpm database
Configuring container for Oracle Linux 6.5
Added container user:oracle password:oracle
Added container user:root password:root

Container : /container/test-container/rootfs
Config    : /container/test-container/config
Network   : eth0 (veth) on virbr0
'oracle' template installed
'test-container' created

The container is not started yet! We need to start it using the command below:

[root@OEL6 lxc]# lxc-start -n test-container
                Welcome to Oracle Linux Server
Setting hostname test-container:                        [  OK  ]
Checking filesystems
                                                                        [  OK  ]
Mounting local filesystems:                                [  OK  ]
No such file or directory
Enabling /etc/fstab swaps:                                 [  OK  ]
Entering non-interactive startup
Bringing up loopback interface:                         [  OK  ]
Bringing up interface eth0:
Determining IP information for eth0… failed.
                                                                      [FAILED]
Starting system logger:                                    [  OK  ]
Mounting filesystems:                                      [  OK  ]
Generating SSH1 RSA host key: No such file or directory
                                                                      [  OK  ]
Generating SSH2 RSA host key: No such file or directory
                                                                     [  OK  ]
Generating SSH2 DSA host key: No such file or directory
                                                                     [  OK  ]
Starting sshd:                                                 [  OK  ]
Oracle Linux Server release 6.5
Kernel 3.8.13-16.2.1.el6uek.x86_64 on an x86_64
test-container login:

I will fix the FAILED line later; for now, connect to the container using the username and password given above. After that:
[root@test-container ~]#
I am connected 🙂
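A few more classic lxc commands that are useful at this point (a sketch; flags vary slightly between old LXC versions):

lxc-start -n test-container -d   # start the container in the background instead
lxc-console -n test-container    # attach to the container's console
                                 # detach again with the escape sequence Ctrl+a q
lxc-ls --fancy                   # list containers and their state
lxc-stop -n test-container       # stop the container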

You can delete the container using the command:

lxc-destroy -n test-container  

References:
1. Oracle Linux Containers: here
2. Oracle Linux 6.5 and Docker: here

Thank you
Osama Mustafa