Step by Step – Zabbix 4.4.2 Installation and Configuration on Ubuntu 18.04.3 LTS

What is Zabbix?

Zabbix is open-source software used for monitoring applications, servers, networks, and cloud services. Zabbix provides metrics such as network utilization, CPU load, and disk space.

What Zabbix 4.4.2 can monitor

In this post I will try to cover everything for the Zabbix installation and configuration, including some of the issues that you may face during the installation, with screenshots. The idea of this post is to help people understand the process and simplify the installation/configuration.

You can always refer to the Zabbix documentation, which covers different versions. Remember, this post is for Zabbix 4.4.2; it may work on other versions, but I have never tested that. The documentation is HERE.

You can download Zabbix from HERE, depending on your operating system and version. Zabbix supports two different databases during the installation: MySQL and PostgreSQL.

Zabbix Installation

Step #1: Install Apache, MySQL, and PHP

sudo apt-get update
sudo apt-get install apache2 libapache2-mod-php
sudo apt-get install mysql-server
sudo apt-get install php php-mbstring php-gd php-xml php-bcmath php-ldap php-mysql

Once the installation is done, you have to update the timezone for PHP. You can do this by editing the file

/etc/php/<version>/apache2/php.ini (on Ubuntu 18.04 this is /etc/php/7.2/apache2/php.ini)

Search for the line containing the word “Date”. Note that when you edit the file you will find “;” at the start of the line; in PHP this marks a comment, so you have to remove it and then update the timezone. The list of supported timezones in PHP is HERE; once you find yours, edit the file.

[Date]
;http://php.net/date.timezone
date.timezone = 'Asia/Amman'

I have also attached pictures to show you what the file will look like.

php.ini from the inside
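
After saving the change, restart Apache so PHP picks up the new timezone:

sudo systemctl restart apache2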

Step #2: Before Installing the Zabbix Server

Before installing the Zabbix server, there is one required step that should be done first, which is enabling the Zabbix repository. The download link (HERE) mentioned above gives you the steps; again, you can find the link here and choose which operating system Zabbix will be installed on.

Zabbix Download Link shows our setup

From the above, once you choose everything, scroll down to find the repository.

wget https://repo.zabbix.com/zabbix/4.4/ubuntu/pool/main/z/zabbix-release/zabbix-release_4.4-1+bionic_all.deb
sudo dpkg -i zabbix-release_4.4-1+bionic_all.deb
sudo apt update

Step #3: Install Zabbix Server

sudo apt-get update
sudo apt-get install zabbix-server-mysql zabbix-frontend-php zabbix-agent

Now all the necessary packages have been installed on the operating system. The configuration starts from the base, which is the database.

Step #4: Create Zabbix Database Schema

Log in to MySQL with the commands below to create the database and user.

mysql -uroot -p
Enter the password
mysql> create database zabbix character set utf8 collate utf8_bin;
mysql> grant all privileges on zabbix.* to zabbix@localhost identified by 'zabbix';
mysql> quit;

Now we have to import the initial data into the schema we just created. To do this, run the command below. Note that these steps should be done in order, otherwise an issue will appear. You will be prompted to enter the newly created password.

zcat /usr/share/doc/zabbix-server-mysql*/create.sql.gz | mysql -u zabbix -p zabbix
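
If you want to double-check that the import worked (an optional step, not part of the original instructions), you can list a few of the created tables:

mysql -uzabbix -p zabbix -e "show tables;" | head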

Step #5: Configure the Zabbix file to access the database

Edit the following file

 /etc/zabbix/zabbix_server.conf

Search for the following lines using your favorite editor (vi, vim, nano, etc.), and set DBPassword to the password you created in Step #4 (in this example, 'zabbix'):

DBHost=localhost
DBName=zabbix
DBUser=zabbix
DBPassword=password

Step #6: Restart the services so everything takes effect

Start the Zabbix server and agent processes and make them start at system boot.

sudo systemctl restart zabbix-server zabbix-agent apache2
sudo systemctl enable zabbix-server zabbix-agent apache2

OR

sudo service apache2 restart
sudo service zabbix-server restart

After starting the Zabbix service, let’s go to the Zabbix web installer and finish the installation.

Connect to your newly installed Zabbix frontend: http://server_ip_or_name/

Before doing this (accessing the above link), I faced an issue: I was 100% sure everything was OK, but every time I accessed the link I got an HTTP 404 NOT FOUND response. I searched around, and the solution was the following.

Change the directory to

/etc/apache2/sites-available

Under this location, you will find two files:

  • 000-default.conf
  • default

Edit these two files to change the following line:

Change DocumentRoot /var/www/html --> DocumentRoot /usr/share/zabbix

Restart Apache again, and now the link will work.
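
If you prefer the command line, the same change can be made like this (a sketch; adjust the file names to whatever you actually have under sites-available):

sudo sed -i 's|DocumentRoot /var/www/html|DocumentRoot /usr/share/zabbix|' /etc/apache2/sites-available/000-default.conf
sudo systemctl restart apache2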

Step #7: Complete the configuration via the Zabbix web installer

  • Zabbix Setup Welcome Screen
Zabbix Welcome Screen
  • Check for pre-requisites
pre-requisites everything should be OK

To solve the above, you have to fix the values in one file, which is

/etc/php/<version>/apache2/php.ini

Then search for each option; for example, post_max_size has a current value of 8M, so just change it to 16M, and so on. Remember, after you change a value you have to restart Apache for the change to take effect, and then check the pre-requisites again.
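
If you want to script these edits, the values the Zabbix 4.4 pre-requisites screen typically asks for can be set like this (a sketch assuming PHP 7.2 on Ubuntu 18.04; verify the exact options against your own pre-requisites screen):

sudo sed -i 's/^post_max_size = .*/post_max_size = 16M/' /etc/php/7.2/apache2/php.ini
sudo sed -i 's/^max_execution_time = .*/max_execution_time = 300/' /etc/php/7.2/apache2/php.ini
sudo sed -i 's/^max_input_time = .*/max_input_time = 300/' /etc/php/7.2/apache2/php.ini
sudo systemctl restart apache2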

After fixing the values.
  • Configure DB Connection

Enter the database details created in Step #4 and click Next to continue.

Database Details in Step #4
  • Zabbix Server Details

This is the host and port of the running Zabbix server. Don’t change the host and port values; there is no need to, since it’s running on the same server. For the Name, you can give the instance a name.

Zabbix Server Details
  • Pre-Installation Summary
Summary
  • Done

Enjoy the zabbix ☠🍻

Osama

How to setup GitHub for the first time

Make your life easier by using one repository; you can do it either for your individual use or for a company. GitHub is considered one of the most common DevOps tools.

In this post I will show how to set up GitHub and use it for the first time. For more advanced topics, please review the documentation that I mentioned in the document.

Access the document from here
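
As a quick taste of what the document walks through, a typical first-time setup looks something like this (a sketch; the repository name and URL are placeholders for your own):

git config --global user.name "Your Name"
git config --global user.email "you@example.com"
git init
git remote add origin https://github.com/<username>/example.git
git add .
git commit -m "Initial commit"
git push -u origin master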

Enjoy

Osama

Configure LB Using Nginx

Happy New Year Everyone

This is the first blog post of 2020. I wish everyone a healthy and wonderful year; may your dreams come true.

I posted recently, or let’s say last year, about a full automation project using DevOps tools, and to be honest I didn’t expect that many downloads of and questions about that post. You can read it here.

I decided to create a new project, but this time to show the power of DevOps and how to use it more in your daily job, tasks, or even configuration.

The idea of this project is the following:

  • You have two codebases: one Go-based application and one Java-based application. Both provide an HTTP service with the same endpoints.
  • The endpoints are:

Route – Description
/ – A static site. Should not appear in the final setup as it is, but redirect to /hotels.
/hotels – JSON object containing hotel search results.
/health – Exposes the health status of the application.
/ready – Readiness probe.
/metrics – Exposes metrics of the application.

We have to set up a load balancer for this application that behaves as follows:

The traffic distribution should be: 70% of the requests go to the application written in Go, and 30% of the requests go to the application written in Java. I will also do it using Docker.

I uploaded the code and the application (both parts, the Go application and the Java one) to my GitHub HERE; all the configuration has been uploaded to my GitHub.

The solution files are laid out like this:

  • The docker-compose.yml file in the root directory is the main compose file for setting up containers for all services.
  • The go-app directory contains the binary of the Golang application and the Dockerfile of the relevant setup.
  • The java-app directory contains the binary of the Java application and the Dockerfile of the relevant setup.
  • The load-balancer directory contains the nginx.conf file, which is the Nginx configuration file with the load balancer rules written in it (sketched below), and also contains a Dockerfile for setting up Nginx with the defined configuration.
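
The heart of the setup is the weighted upstream in nginx.conf. The sketch below shows the idea; the service names and ports here are assumptions, so check the real file in the repo:

upstream app {
    server go-app:8080 weight=7;    # ~70% of requests go to the Go application
    server java-app:8080 weight=3;  # ~30% go to the Java application
}

server {
    listen 80;

    location = / {
        return 301 /hotels;         # the static root redirects to /hotels
    }

    location / {
        proxy_pass http://app;
    }
}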

The final architecture will look like this, instead of the image you saw above.

Enjoy And Happy New Year

Osama Mustafa (the guy known as OsamaOracle)

Complete Automation DevOps Project Deployed on Kubernetes

Problem definition

The aim of this test is to create a simple HTTP service that stores and returns configurations that satisfy certain conditions. Since I love automating things, the service should be automatically deployed to Kubernetes.

You can read more about the project once you access my GitHub, using the README.MD; I explained the project step by step, and the documentation explains everything.

The code has been uploaded to GitHub; in addition, the documentation has been uploaded to SlideShare.

The code configuration here

The documentation here

Enjoy

Osama

Build, Deploy and Run a Node.js Application on Azure using Docker

This documentation explains step by step how to build, deploy, and run a Node.js application on Azure using Docker.

The idea came when one of the customers asked to do this automation for them, and they already had an application written in Node.js. Since I can’t post the client’s code here, I searched online and found this sample to use instead of the actual code 😅

Now, the reader should have some knowledge of Azure Cloud; this document will guide you through creating and working on Azure, but you still have to understand Azure Cloud concepts, plus have basic knowledge of Node.js and how to write a Dockerfile. I provided everything on my GitHub here; the code is a sample that used to be deployed on Heroku, but it can still be deployed on Azure using the documentation 🤔

The documentation is uploaded to my SlideShare here
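
As a rough outline of the flow (not the exact steps from the slides; the resource group, registry, plan, and image names below are placeholders), building the image and pushing it to an Azure Container Registry looks something like this:

az group create --name node-demo-rg --location eastus
az acr create --resource-group node-demo-rg --name nodedemoregistry --sku Basic
az acr login --name nodedemoregistry
docker build -t nodedemoregistry.azurecr.io/node-sample:v1 .
docker push nodedemoregistry.azurecr.io/node-sample:v1
az appservice plan create --resource-group node-demo-rg --name node-demo-plan --is-linux --sku B1
az webapp create --resource-group node-demo-rg --plan node-demo-plan --name node-demo-app --deployment-container-image-name nodedemoregistry.azurecr.io/node-sample:v1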


Cheers

Osama


Using Terraform to build an AWS environment

What is Terraform?

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.

It is made by HashiCorp, an AWS Partner Network (APN) Advanced Technology Partner and member of the AWS DevOps Competency.

To configure Terraform, you have to create files with the extension “.tf”, like the following:

  • Providers: A provider is responsible for understanding API interactions and exposing resources. Providers generally are an IaaS (e.g. AWS, GCP, Microsoft Azure, OpenStack), a PaaS (e.g. Heroku), or a SaaS service (e.g. Terraform Enterprise, DNSimple, CloudFlare).
  • Resources: The resource block creates a resource of the given TYPE (first parameter) and NAME (second parameter). The combination of the type and name must be unique. Within the block (the { }) is the configuration for the resource. The configuration depends on the type and is documented for each resource type in the providers section.
  • Variables: Defined by stating a name, a type, and a default value. However, the type and default value are not strictly necessary; Terraform can deduce the type of the variable from the default or input value.
  • VPC: Used to define the security groups, subnets, and ports in the AWS environment.

In this post I will do the following with Terraform. You have to create and sign up for an AWS account so you will be able to test this code and use Terraform. What I will do here is:

  • Create a private subnet.
  • Create a public subnet.
  • Add an SSH bastion on the public subnet only.
  • Add two EC2 instances to the private subnet.

Let’s start. As mentioned earlier, you should have 4 files: provider.tf, resource.tf, variables.tf, and vpc.tf.

provider.tf

As you can see from the file below, it contains our cloud provider and the region, which depends on a variable that will be defined later.

# Define AWS as our provider
provider "aws" {
  region = "${var.aws_region}"
}

resource.tf

This is the resource file, where I create the SSH key. There are different ways to create it; in my case I used PuTTYgen, copied the public key over here, and saved the public/private keys so I can use them later (the other way is to have it automatically generated). Then I define which AMI will be used for the servers/EC2 instances that will be created in AWS; the ID for this AMI is defined in the variables file.

# Define SSH key pair for the instances
resource "aws_key_pair" "default" {
  key_name = "terraform_key"
  public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAtGCFXHeo9igKRzm3hNG5WZKXkQ3/NqQc1WPN8pSrEb0ZjW8mTKXRWzePuVYXYP9txqEKQmJJ1bk+pYX/zDdhJg/yZbZGH4V0LvDY5X5ndnAjN6CHkS6iK2EK1GlyJs6fsa+6oUeH23W2w49GHivSsCZUZuaSwdORoJk9QLeJ7Qz+9YQWOk0efOr+eIykxIDwR71SX5X65USbR8JbuT2kyrp1kVKsmPMcfy2Ehzd4VjShlHsZZlbzKTyfgaX/JJmYXO5yD4VLSjY8BVil4Yq/R9Tkz9pFxCG230XdFWCFEHSqS7TIDFbhPkp18jna6P6hlfNb9WM2gVZbYvr1MMnAVQ== rsa-key-20190805"
}

# Define a server 1 inside the public subnet
resource "aws_instance" "public_server" {
   ami  = "${var.ami}"
   instance_type = "t1.micro"
   key_name = "${aws_key_pair.default.id}"
   subnet_id = "${aws_subnet.public-subnet.id}"
   vpc_security_group_ids = ["${aws_security_group.sgpub.id}"]
   associate_public_ip_address = true
   source_dest_check = false
   user_data = "${file("userdata.sh")}"

  tags = {
    Name = "public_server"
  }
}

# Define database inside the private subnet
resource "aws_instance" "private_server1" {
   ami  = "${var.ami}"
   instance_type = "t1.micro"
   key_name = "${aws_key_pair.default.id}"
   subnet_id = "${aws_subnet.private-subnet.id}"
   vpc_security_group_ids = ["${aws_security_group.sgpriv.id}"]
   associate_public_ip_address = true
   source_dest_check = false
   user_data = "${file("userdata.sh")}"

  tags = {
    Name = "private_server1"
  }
}

# Define database inside the private subnet
resource "aws_instance" "private_server2" {
   ami  = "${var.ami}"
   instance_type = "t1.micro"
   key_name = "${aws_key_pair.default.id}"
   subnet_id = "${aws_subnet.private-subnet.id}"
   vpc_security_group_ids = ["${aws_security_group.sgpriv.id}"]
   associate_public_ip_address = true
   source_dest_check = false
   user_data = "${file("userdata.sh")}"

  tags = {
    Name = "private_server2"
  }
}

variables.tf

As you can see below, the variables file is where I defined all the information, such as the AWS region, the subnets that will be used, the AMI ID (you can find it by accessing the AWS console and copying the ID), and finally the SSH key path on my server/EC2.

variable "aws_region" {
  description = "Region for the VPC"
  default = "ap-southeast-1"
}

variable "vpc_cidr" {
  description = "CIDR for the VPC"
  default = "10.0.0.0/16"
}

variable "public_subnet_cidr" {
  description = "CIDR for the public subnet"
  default = "10.0.1.0/24"
}

variable "private_subnet_cidr" {
  description = "CIDR for the private subnet"
  default = "10.0.2.0/24"
}

variable "ami" {
  description = "Amazon Linux AMI"
  default = "ami-01f7527546b557442"
}

variable "key_path" {
  description = "SSH Public Key path"
  default = "~/.ssh/id_rsa.pub"
}

vpc.tf

This defines everything related to networking, security groups, and subnets in the AWS cloud. As you can see from the file, I assigned the public EC2 instance to the public subnet in the VPC and the two private EC2 instances to the private subnet. Then I configured which ports will be open on the public subnet: SSH (22), HTTP (80), HTTPS (443), and ICMP. The same goes for the private subnet, but there I only allow SSH from the public subnet, which means you can reach the private servers only by going through the public server. I also opened the MySQL port, 3306.

# Define our VPC
resource "aws_vpc" "default" {
  cidr_block = "${var.vpc_cidr}"
  enable_dns_hostnames = true

  tags  ={
    Name = "test-vpc"
  }
}

# Define the public subnet
resource "aws_subnet" "public-subnet" {
  vpc_id = "${aws_vpc.default.id}"
  cidr_block = "${var.public_subnet_cidr}"
  availability_zone = "ap-southeast-1a"

  tags =  {
    Name = "PublicSubnet"
  }
}

# Define the private subnet
resource "aws_subnet" "private-subnet" {
  vpc_id = "${aws_vpc.default.id}"
  cidr_block = "${var.private_subnet_cidr}"
#  availability_zone = "ap-southeast-1"

  tags =  {
    Name = "Private Subnet"
  }
}

# Define the internet gateway
resource "aws_internet_gateway" "gw" {
  vpc_id = "${aws_vpc.default.id}"

  tags =  {
    Name = "VPC IGW"
  }
}

# Define the route table
resource "aws_route_table" "web-public-rt" {
  vpc_id = "${aws_vpc.default.id}"

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.gw.id}"
  }

  tags =  {
    Name = "PublicSubnetRT"
  }
}

# Assign the route table to the public Subnet
resource "aws_route_table_association" "web-public-rt" {
  subnet_id = "${aws_subnet.public-subnet.id}"
  route_table_id = "${aws_route_table.web-public-rt.id}"
}

# Define the security group for public subnet
resource "aws_security_group" "sgpub" {
  name = "vpc_test_pub"
  description = "Allow incoming HTTP connections & SSH access"

  ingress {
    from_port = 80
    to_port = 80
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port = 443
    to_port = 443
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

# The ICMP packet does not have source and destination port numbers because it was designed to 
# communicate network-layer information between hosts and routers, not between application layer processes.

  ingress {
    from_port = -1
    to_port = -1
    protocol = "icmp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port = 22
    to_port = 22
    protocol = "tcp"
    cidr_blocks =  ["0.0.0.0/0"]
  }
  egress {
    from_port       = 0
    to_port         = 0
    protocol        = "-1"
    cidr_blocks     = ["0.0.0.0/0"]
  }

  vpc_id="${aws_vpc.default.id}"

  tags =  {
    Name = "Public Server SG"
  }
}

# Define the security group for private subnet
resource "aws_security_group" "sgpriv"{
  name = "sg_test_web"
  description = "Allow traffic from public subnet"

# You can delete this port; I added it here to make it look like a real environment
  ingress {
    from_port = 3306
    to_port = 3306
    protocol = "tcp"
    cidr_blocks = ["${var.public_subnet_cidr}"]
  }

  ingress {
    from_port = -1
    to_port = -1
    protocol = "icmp"
    cidr_blocks = ["${var.public_subnet_cidr}"]
  }

  ingress {
    from_port = 22
    to_port = 22
    protocol = "tcp"
    cidr_blocks = ["${var.public_subnet_cidr}"]
  }

  vpc_id = "${aws_vpc.default.id}"

  tags =  {
    Name = "PrivateServerSG"
  }
}

userdata.sh

This file contains the commands that should run on the instances at boot; Terraform passes it to each EC2 instance as user data.

#!/bin/sh
set -x
# output log of userdata to /var/log/user-data.log
exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1
yum install -y httpd
service httpd start
chkconfig httpd on
echo "<html><h3>Welcome to the Osama Website</h3></html>" > /var/www/html/index.html

Once you prepare these files, upload them to a folder called “terraform-example”; Terraform will create free tier AWS EC2 machines. With these files, the output will be the following:

  • Three EC2 servers with the following names: public_server, private_server1, private_server2.
  • Two security groups, one for the public subnet and one for the private subnet.
  • A key pair called terraform_key.
  • The AWS region will be Singapore (ap-southeast-1).
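
Before planning anything, initialize the working directory so Terraform downloads the AWS provider plugin (a standard step that is easy to forget):

terraform init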

Now run the following command:

terraform plan

Wait for the output; the command should run successfully without any errors. You can also add the attribute “-out=NAME_OF_PLAN_FILE”, which saves the generated plan to a file so that terraform apply can execute exactly that plan.

terraform apply

The above command will apply everything to the AWS environment, letting you create this environment in less than a minute.

Amazing, huh? This is called infrastructure as code.

The files are uploaded to my GitHub here

Cheers

Osama

The Ultimate guide to DevOps Tools Part #4 : Docker

In this series on DevOps tools that help you as a DBA automate your work and make it easier, this will be the last part on Docker.

In this post I will show how to pull from and connect to the Oracle repository in the simplest way.

The first step, before doing anything else, is to register on the Oracle repository website here

After the registration is complete, you can go back to the Docker machine and follow these steps (the commands are sketched after this list; the original post showed them as screenshots):

  • Run the login command with your account information.
  • Choose which product you will pull and enter the pull command; this step will take some time until the download finishes.
  • Check the image.
  • Start the image; Docker will start showing the Oracle database log.
  • Access the container.
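
A sketch of those commands (the image path and tag are assumptions based on what the Oracle registry offered at the time; pick the product you actually registered for):

docker login container-registry.oracle.com
docker pull container-registry.oracle.com/database/enterprise:12.2.0.1
docker images
docker run -d --name oracle-db container-registry.oracle.com/database/enterprise:12.2.0.1
docker logs -f oracle-db
docker exec -it oracle-db bash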
Cheers 🍻
Osama 

The Ultimate guide to DevOps Tools Part #3 : Docker

Before we start, please review the two previous posts:

  • The Ultimate guide to DevOps Tools Part #1 : Docker here
    • Talks about the Docker concept and how to install it.
  • The Ultimate guide to DevOps Tools Part #2 : Docker here
    • How to build your first application using Docker.

In this post I will talk about Docker services: the level up in Docker, which means scaling the application and enabling load balancing.
When will I need to create a service?

According to the Docker documentation here:

To deploy an application image when Docker Engine is in swarm mode, you create a service. Frequently a service is the image for a microservice within the context of some larger application. Examples of services might include an HTTP server, a database, or any other type of executable program that you wish to run in a distributed environment.
But first, let’s understand: what is a Docker service?
Simply, it’s a group of containers of the same image. Services will make your life easier when you are planning to scale your application, and they also work on Docker Cloud. To use them, you configure the service in three steps:

  • Choose a container image.
  • Configure the service.
  • Set environment variables.

Before configuring any Docker service, there is a file called “docker-compose.yml”; it defines how the Docker containers will behave in the environment.
The example below shows what the file looks like (taken from the Docker documentation). At first look you won’t understand anything, but luckily it’s very simple and easy to understand.

version: "3"
services:
  web:
    # replace username/repo:tag with your name and image details
    image: username/repo:tag
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    ports:
      - "4000:80"
    networks:
      - webnet
networks:
  webnet:

Let’s discuss the above file:

  • username/repo:tag should be replaced by your image information.
  • Run the same image as 5 instances of a service named “web”.
  • Each instance is limited to at most 10% of a single CPU core and 50 MB of RAM.
  • Each instance will be restarted on failure.
  • Port 4000 outside Docker is mapped to port 80 inside the image.
  • The instances share a load-balanced network called webnet, listening on port 80, as you can see in the networks section.
The Real Work:

Before anything, you should run the following command to be able to work and deploy your services:

docker swarm init 

But in case you are facing the issue shown in the screenshot, you have to upgrade your Docker (method #1) or uninstall it and then install the newer version. After doing that, run the above command again to make sure you will not get an error like “this node is not a swarm manager.” Then you can run the next command, which allows you to create the services:
docker stack deploy -c docker-compose.yml myfirstapp

where myfirstapp is my application name.

Get the ID for the one service in our application:

docker service ls

Search for our service named web, deployed with the application name myfirstapp; it will show up as myfirstapp_web.

Now you have scaled your application. Run

curl -4 http://localhost:4000

several times in a row, or go to that URL in your browser and hit refresh a few times.
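
To see the replicas (tasks) backing the service and where each one runs, you can also check:

docker service ps myfirstapp_web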
Cheers   🍻🍻
Osama

The Ultimate guide to DevOps Tools Part #2 : Docker

This article continues the basics of Docker, which is considered one of the DevOps tools. After finishing this series, I will choose another tool that could help DBAs automate their work.

In this post I will show you how to build your first application using Docker. Without Docker, if you want to program in some language, first you have to install that language on your PC and test your code in your development environment; and of course production should be ready and in sync so you can test your code again there. That seems like a lot of work. 😥

But now, with Docker, you just pull/grab the image, no installation needed, and run your code on that image. 🎉

But how can we control what happens inside the environment? Access to resources like networking interfaces and disk drives is virtualized inside this environment, which is isolated from the rest of your system. All of this is defined by something called a Dockerfile.

The following example is taken from the Docker documentation:

# Use an official Python runtime as a parent image
FROM python:2.7-slim

# Set the working directory to /app
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt

# Make port 80 available to the world outside this container
EXPOSE 80

# Define environment variable
ENV NAME World

# Run app.py when the container launches
CMD ["python", "app.py"]

As you can see from the above example, the code is explained in the comments. This Dockerfile is for a Python application: it sets the working directory, copies the current directory’s contents into the container, installs the required packages, exposes the port, and then runs app.py.

app.py (very simple code)

# A very simple app that just prints a message
print("Goodbye, World!")


Now that you have the Dockerfile and the app.py file in the directory (you also need a requirements.txt next to them, which can be empty for this example, since the Dockerfile installs from it), run the build command. This creates a Docker image:

docker build -t test .

Check by

$ docker image ls

Run the app, mapping your machine’s port 4000 to the container’s published port 80 using -p:

docker run -p 4000:80 test

Once the above command runs, you can test your code at http://localhost:80, but only from inside Docker; if you want to test it from outside Docker, the URL will be http://localhost:4000. (Note that with the print-only app.py above, the container just prints its message and exits; the port mapping only matters once app.py actually serves HTTP, for example with Flask as in the original Docker tutorial.)

Cheers 👌
Osama Mustafa