Auto Closing Trigger in Zabbix

Zabbix classifies problems into the following severity levels:

  • Disaster
  • High
  • Average
  • Warning
  • Information
  • Not classified

In this post I will focus on the Information and Not classified problems.

These two severities exist to inform users about some change in the operating system, for example an OS update notice or a CPU utilization warning that is not a real issue. The trouble with these two categories is that they never disappear from the Zabbix dashboard, and to be honest I find that annoying 😅

Therefore, I was trying to find a way to hide these issues. I don't want to disable the trigger, because then the problem would never appear at all, and that is not my goal; I just want it to stay on the dashboard for a limited amount of time.

After some investigation I found two ways to do this. The first is done at the database level (MySQL in my case); the second is done from the Zabbix dashboard (the trigger configuration), and that is the right one and what I want.

Method #1 

To fix this from the database, you need the following:

  • You should have access to MySQL
  • Use the Zabbix database
  • Some knowledge of the database and the Zabbix tables; you don’t want to corrupt the database.
  • There are several tables we need to deal with:
triggers table
problem table
alerts table
  • Inside the triggers table, the value column holds one of two numbers: 0 (the trigger is OK) and 1 (the trigger has a problem).
select * from triggers where description like '%Operating%' and value = 1;
update triggers set value = 0 where  description like '%Operating%' and value = 1;

The SELECT above finds the trigger whose description contains “Operating” and which is currently in a problem state; the UPDATE then sets its value back to 0, which marks the problem as resolved.
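If you prefer to run these from the shell, the same queries can be wrapped in the mysql client; a minimal sketch, assuming the default database name zabbix and the zabbix user created during installation:

# Find Information/Not classified triggers that are stuck in the problem state
mysql -u zabbix -p zabbix -e "SELECT triggerid, description, value FROM triggers WHERE description LIKE '%Operating%' AND value = 1;"

# Flip them back to OK (0) once you are sure they are the right rows
mysql -u zabbix -p zabbix -e "UPDATE triggers SET value = 0 WHERE description LIKE '%Operating%' AND value = 1;"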

Method #2

This is the method I prefer, but I had to mention the previous one in case someone wants to use it. With this approach you don't have to access the database at all.

  • From the upper panel, choose Configuration –> Hosts –> search for your host –> click on Triggers –> search for the trigger.
  • Once you click on the trigger you want, it will show a couple of options; we only care about the “Problem expression”.
  • In the problem expression, press the Add button and choose the item related to this trigger.
  • There is a built-in function in Zabbix called nodata that takes seconds as a parameter. Pick the item, choose the function, and set the time to 10 seconds, as in the picture below. In the pre-5.4 expression syntax this looks something like {host:item.nodata(10)}=1, which is true when the item has received no data for the last 10 seconds.

Enjoy 👊

Cheers 🍻

Osama

Install Docker on Ubuntu 18.04.3 LTS

What is Docker ?

Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and deploy it as one package. By doing so, thanks to the container, the developer can rest assured that the application will run on any other Linux machine regardless of any customized settings that machine might have that could differ from the machine used for writing and testing the code.

Why Docker ? 

  • Agility
  • Simplicity
  • Choice

Docker Installation on ubuntu 18.04.3 LTS

In this post I will show you, step by step, how to install Docker on Ubuntu. I also recommend creating an account on Docker Hub here.

Step #1:

Update the existing list of packages:

sudo apt update

Step #2:

Install the prerequisite packages that let apt use packages over HTTPS:

sudo apt install apt-transport-https ca-certificates curl software-properties-common

Step #3:

Add the GPG key for the official Docker repository to your system:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Step #4:

Add the Docker repository to APT sources

sudo add-apt-repository  "deb [arch=amd64] https://download.docker.com/linux/ubuntu  $(lsb_release -cs)  stable"

Step #5:

Install Docker

sudo apt-get install docker-ce

Step #6:

Verify that Docker CE is installed correctly by running the hello-world image.

sudo docker run hello-world

You could face an issue related to the hello-world command when verifying the installation:

permission denied while trying to connect to the docker daemon socket at unix
/var/run/docker.sock: connect: permission denied 

You can solve this issue by running the command below and trying again:

sudo chmod 666 /var/run/docker.sock
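A less heavy-handed alternative, not from the original steps but worth knowing, is to add your user to the docker group instead of opening up the socket permissions:

# Add the current user to the docker group so docker can be run without sudo
sudo usermod -aG docker ${USER}
# Start a new shell with the docker group applied (or simply log out and back in)
newgrp docker
docker run hello-world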

You can also list all the available Docker versions by running the command below:

apt list -a docker-ce
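If you need to stay on a particular release, you can pin it at install time; the version string below is just a placeholder, so copy a real one from the list output:

# Install a specific docker-ce version taken from the apt list output above
sudo apt-get install docker-ce=<VERSION_STRING>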

Check that the service is up and running:

sudo systemctl status docker

Reference 

Docker documentation here

Cheers and enjoy Docker

Osama

Step by Step – Zabbix 4.4.2 Installation and Configuration on Ubuntu 18.04.3 LTS

What is Zabbix ?

Zabbix is open-source software used for monitoring applications, servers, networks, and cloud services. It provides metrics such as network utilization, CPU load, disk space, and so on.

What Zabbix 4.4.2 can monitor

In this post I will try to cover everything about the Zabbix installation and configuration, including some of the issues you will face during the installation, with screenshots. The idea of this post is to help people understand and simplify the installation and configuration.

You can always refer to the Zabbix documentation, which covers different versions. Remember that this post is for Zabbix 4.4.2; it may work on other versions, but I have not tested that. The documentation is HERE.

You can download Zabbix from HERE, depending on your operating system and version. Zabbix supports two different databases during the installation: MySQL and PostgreSQL.

Zabbix Installation

Step #1: Install Apache, MySQL and PHP

sudo apt-get update
sudo apt-get install apache2 libapache2-mod-php
sudo apt-get install mysql-server
sudo apt-get install php php-mbstring php-gd php-xml php-bcmath php-ldap php-mysql

Once the installation is done, you have to update the timezone for PHP. You can do this by editing the file

/etc/php/<version>/apache2/php.ini (for example /etc/php/7.2/apache2/php.ini on Ubuntu 18.04)

Search for the [Date] section. When you edit the file you will notice a “;” at the start of the date.timezone line; in PHP ini files this marks a comment, so remove it and set your timezone. The list of supported timezones in PHP is HERE; once you find yours, edit the file.

[Date]
;http://php.net/date.timezone
date.timezone = 'Asia/Amman'

I also attached a picture to show you how the file will look.

php.ini from the inside
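After saving the change, restart Apache so PHP picks up the new timezone. A quick sketch, assuming PHP 7.2 (the default on Ubuntu 18.04):

# Apache reads php.ini only at startup, so restart it to apply the change
sudo systemctl restart apache2
# Double-check the value that Apache's PHP will use
grep '^date.timezone' /etc/php/7.2/apache2/php.ini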

Step #2: Before Installing the Zabbix Server

Before installing the Zabbix server, there is one required step that should be done first: enable the Zabbix repository. The download link (HERE) mentioned above gives you the steps; again, you can find the link here and choose the operating system Zabbix will be installed on.

Zabbix Download Link shows our setup

From the above, once you choose everything, scroll down to find the repository.

wget https://repo.zabbix.com/zabbix/4.4/ubuntu/pool/main/z/zabbix-release/zabbix-release_4.4-1+bionic_all.deb
dpkg -i zabbix-release_4.4-1+bionic_all.deb
apt update

Step #3: Install Zabbix Server

sudo apt-get update
sudo apt-get install zabbix-server-mysql zabbix-frontend-php zabbix-agent

Now all the necessary packages have been installed on the operating system; the configuration starts from the base, which is the database.

Step #4: Create Zabbix Database Schema

Log in to MySQL with the commands below to create the database and user.

mysql -uroot -p
# enter the MySQL root password when prompted
mysql> create database zabbix character set utf8 collate utf8_bin;
mysql> grant all privileges on zabbix.* to zabbix@localhost identified by 'zabbix';
mysql> quit;

Now we have to import the initial data into the schema we just created. To do this, run the command below; note that these steps should be done in order, otherwise an error will appear. You will be prompted to enter your newly created password.

zcat /usr/share/doc/zabbix-server-mysql*/create.sql.gz | mysql -u zabbix -p zabbix

Step #5: Configure the Zabbix server to access the database

Edit the following file

 /etc/zabbix/zabbix_server.conf

Search for the following lines using your favorite editor (vi, vim, nano, etc.) and set DBPassword to the password you created in Step #4 (zabbix in this example):

DBHost=localhost
DBName=zabbix
DBUser=zabbix
DBPassword=zabbix
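Before restarting anything, a quick way to double-check the active values (a small sketch, assuming the default config path):

# Show only the uncommented database settings
grep -E '^(DBHost|DBName|DBUser|DBPassword)=' /etc/zabbix/zabbix_server.conf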

Step #6: Restart the services

Start the Zabbix server and agent processes and make them start at system boot.

systemctl restart zabbix-server zabbix-agent apache2
systemctl enable zabbix-server zabbix-agent apache2

OR

sudo service apache2 restart
sudo service zabbix-server restart

After starting the Zabbix service, let’s go to Zabbix web installer and finish the installation.

Connect to your newly installed Zabbix frontend: http://server_ip_or_name/

Before doing this, meaning accessing the link above, I faced an issue: I was 100% sure everything was OK, but every time I accessed the link I got an HTTP 404 NOT FOUND response. After some searching, the solution was the following.

Change the directory to

/etc/apache2/sites-available

Under this location you will find two files:

  • 000-default.conf
  • default

Edit these two files and change the following line:

Change DocumentRoot /var/www/html --> DocumentRoot /usr/share/zabbix

Restart Apache again, and now the link will work.
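If you prefer doing it from the shell, a minimal sketch (assuming the stock 000-default.conf) looks like this:

# Point the default vhost at the Zabbix frontend instead of /var/www/html
sudo sed -i 's|DocumentRoot /var/www/html|DocumentRoot /usr/share/zabbix|' /etc/apache2/sites-available/000-default.conf
# Reload Apache so the new DocumentRoot takes effect
sudo systemctl restart apache2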

Step #7: Complete the configuration via the Zabbix web installer

  • Zabbix Setup Welcome Screen
Zabbix Welcome Screen
  • Check for pre-requisites
pre-requisites everything should be OK

To solve the items shown above, you have to fix the values in one file, which is

/etc/php/<version>/apache2/php.ini (for example /etc/php/7.2/apache2/php.ini on Ubuntu 18.04)

Then search for each option the installer complains about. For example, post_max_size has a current value of 8M; just change it to 16M, and so on. Remember that after you change a value you have to restart Apache for it to take effect, and then check the pre-requisites again.
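As a rough sketch, the values the installer usually complains about can be raised in one go; the php.ini path assumes PHP 7.2, and you should match the numbers to whatever the pre-requisites screen reports:

PHPINI=/etc/php/7.2/apache2/php.ini
# Raise the limits the Zabbix frontend pre-requisite check looks at
sudo sed -i \
  -e 's/^post_max_size = .*/post_max_size = 16M/' \
  -e 's/^max_execution_time = .*/max_execution_time = 300/' \
  -e 's/^max_input_time = .*/max_input_time = 300/' \
  "$PHPINI"
# Restart Apache so the new values are picked up, then re-run the check
sudo systemctl restart apache2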

after fixing the Values.
  • Configure DB Connection

Enter database details created in Step #4 and click next to continue.

Database Details in Step #4
  • Zabbix Server Details

This is the host and port of the running Zabbix server. Don't change the port and host values; there is no need, since it's running on the same server. For the name, you can give the instance whatever name you like.

Zabbix Server Details
  • Pre-Installation Summary
Summary
  • Done

Enjoy the zabbix ☠🍻

Osama

How to setup GitHub for the first time

Make your life easier by using one repository; you can do it either for your individual use or for a company. GitHub is considered one of the most common DevOps tools.

In this post I will show how to set up GitHub and use it for the first time. For more advanced topics, please review the documentation I mentioned in the document.

Access the document from here
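For reference, the very first commands usually look something like the sketch below; the repository name and URL are placeholders, and the full walkthrough is in the document above:

# Tell git who you are (stored in ~/.gitconfig)
git config --global user.name "Your Name"
git config --global user.email "you@example.com"

# Turn an existing folder into a repository and push it to GitHub
git init
git add .
git commit -m "Initial commit"
git remote add origin https://github.com/<your-user>/<your-repo>.git
git push -u origin master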

Enjoy

Osama

Configure LB Using Nginx

Happy New Year Everyone

This is the first blog post of 2020. I wish everyone a healthy and wonderful year; may your dreams come true.

I posted recently, or let's say last year, about a full automation project using DevOps tools, and to be honest I didn't expect that many downloads and questions on that post. You can read it here.

I decided to create a new project, but this time to show the power of DevOps and how to use it more in your daily job, tasks, or even configuration.

The idea of this project is the following:

  • You have two applications: one Go-based and one Java-based. Both provide an HTTP service with the same endpoints.
  • The endpoints are:
/ : a static site. It should not appear in the final setup as it is, but redirect to /hotels.
/hotels : a JSON object containing hotel search results.
/health : exposes the health status of the application.
/ready : readiness probe.
/metrics : exposes metrics of the application.

We have to set up a load balancer for these applications as follows:

The traffic distribution should be as follows: 70% of the requests go to the application written in Go, and 30% of the requests go to the application written in Java. I will also do it using Docker.

I uploaded the code and the applications (both parts, the Go application and the Java one) to my GitHub HERE; all the configuration has been uploaded to my GitHub as well.

The solution files are as follows (a sketch of the Nginx weighting follows the list):

  • docker-compose.yml in the root directory is the main compose file for setting up containers for all services
  • go-app directory contains the binary of the Go application and the Dockerfile of the relevant setup
  • java-app directory contains the binary of the Java application and the Dockerfile of the relevant setup
  • load-balancer directory contains the nginx.conf file, which is the Nginx configuration file with the load-balancer rules written in it; it also contains a Dockerfile for setting up Nginx with the defined configuration
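The 70/30 split itself is done with Nginx upstream weights. The snippet below is only a minimal sketch of what load-balancer/nginx.conf could contain; the upstream names and ports are assumptions, so check the actual file on GitHub:

# Write a minimal weighted load-balancer config (illustrative values only)
cat > load-balancer/nginx.conf <<'EOF'
events {}

http {
    upstream hotels_backend {
        # weight 7 vs weight 3 gives roughly a 70/30 split between Go and Java
        server go-app:8080   weight=7;
        server java-app:8080 weight=3;
    }

    server {
        listen 80;

        # "/" must not be served as-is; redirect it to /hotels
        location = / {
            return 302 /hotels;
        }

        location / {
            proxy_pass http://hotels_backend;
        }
    }
}
EOF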

The final architecture will look like this instead of the image you saw above.

Enjoy And Happy New Year

Osama Mustafa (the guy known as OsamaOracle)

Complete Automation DevOps Project Deployed on kubernetes

Problem definition

The aim of this test is to create a simple HTTP service that stores and returns configurations that satisfy certain conditions. Since I love automating things, the service is automatically deployed to Kubernetes.

You can read more about the project in the README.md once you access my GitHub; I explained the project step by step there, and the documentation explains everything.

The code has been uploaded to GitHub, and in addition the documentation has been uploaded to SlideShare.

The code configuration here

The documentation here

Enjoy

Osama

Build, Deploy and Run Node Js Application on Azure using Docker

This documentation explains, step by step, how to build, deploy, and run a Node.js application on Azure using Docker.

The idea came when one of the customers asked us to automate this for them; they already had an application written in Node.js. Since I can't post the client's code here, I searched online and found this sample to use instead of the actual code 😅

Readers should have some knowledge of Azure, but this document will guide you through creating and working on Azure, so you need to understand the basic Azure concepts, along with basic knowledge of Node.js and how to write a Dockerfile. Everything is provided on my GitHub here. The code is a sample that used to be deployed on Heroku, but it can still be deployed on Azure using the documentation 🤔
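Very roughly, the container part of the flow looks like the sketch below; the registry name and image tag are placeholders, not values from the actual documentation:

# Build the Node.js image from the Dockerfile in the project root
docker build -t <registry-name>.azurecr.io/node-sample:v1 .

# Log in to Azure and to the Azure Container Registry, then push the image
az login
az acr login --name <registry-name>
docker push <registry-name>.azurecr.io/node-sample:v1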

The documentation is uploaded to my SlideShare here.


Cheers

Osama


Using terraform to build AWS environment

What is Terraform?

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.

Terraform is made by HashiCorp, an AWS Partner Network (APN) Advanced Technology Partner and a member of the AWS DevOps Competency.

To configure Terraform, you create files with the “.tf” extension, such as the following:

  • Providers: A provider is responsible for understanding API interactions and exposing resources. Providers are generally IaaS (e.g. AWS, GCP, Microsoft Azure, OpenStack), PaaS (e.g. Heroku), or SaaS services (e.g. Terraform Enterprise, DNSimple, CloudFlare).
  • Resources: The resource block creates a resource of the given TYPE (first parameter) and NAME (second parameter). The combination of type and name must be unique. Within the block (the { }) is the configuration for the resource; the configuration depends on the type and is documented for each resource type in the providers section.
  • Variables: Defined by stating a name, a type, and a default value. However, the type and default value are not strictly necessary; Terraform can deduce the type of the variable from the default or input value.
  • VPC: Used here to define the security groups, subnets, and ports in the AWS environment.

In this post I will do the following with Terraform. You have to create and sign up for an AWS account so you will be able to test this code and use Terraform. What I will do here is:

  • create a private subnet
  • create a public subnet
  • add an SSH bastion on the public subnet only
  • add two EC2 instances to the private subnet

Let's start. As mentioned earlier, you should have four files: provider.tf, resource.tf, variables.tf, and vpc.tf.

provider.tf

As you can see from the file below, it contains our cloud provider, and the region depends on a variable that will be defined later.

# Define AWS as our provider
provider "aws" {
  region = "${var.aws_region}"
}

resource.tf

The resource file is where I create the SSH key. There are different ways to create it; in my case I used PuTTYgen, copied the public key here, and saved the public/private keys so I can use them later. The other way is to have it generated automatically. Then I define which AMI will be used for the servers/EC2 instances created in AWS; the ID of this AMI is defined in the variables file.

# Define SSH key pair for the instances
resource "aws_key_pair" "default" {
  key_name = "terraform_key"
  public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAtGCFXHeo9igKRzm3hNG5WZKXkQ3/NqQc1WPN8pSrEb0ZjW8mTKXRWzePuVYXYP9txqEKQmJJ1bk+pYX/zDdhJg/yZbZGH4V0LvDY5X5ndnAjN6CHkS6iK2EK1GlyJs6fsa+6oUeH23W2w49GHivSsCZUZuaSwdORoJk9QLeJ7Qz+9YQWOk0efOr+eIykxIDwR71SX5X65USbR8JbuT2kyrp1kVKsmPMcfy2Ehzd4VjShlHsZZlbzKTyfgaX/JJmYXO5yD4VLSjY8BVil4Yq/R9Tkz9pFxCG230XdFWCFEHSqS7TIDFbhPkp18jna6P6hlfNb9WM2gVZbYvr1MMnAVQ== rsa-key-20190805"
}

# Define a server 1 inside the public subnet
resource "aws_instance" "public_server" {
   ami  = "${var.ami}"
   instance_type = "t1.micro"
   key_name = "${aws_key_pair.default.id}"
   subnet_id = "${aws_subnet.public-subnet.id}"
   vpc_security_group_ids = ["${aws_security_group.sgpub.id}"]
   associate_public_ip_address = true
   source_dest_check = false
   user_data = "${file("userdata.sh")}"

  tags = {
    Name = "public_server"
  }
}

# Define database inside the private subnet
resource "aws_instance" "private_server1" {
   ami  = "${var.ami}"
   instance_type = "t1.micro"
   key_name = "${aws_key_pair.default.id}"
   subnet_id = "${aws_subnet.private-subnet.id}"
   vpc_security_group_ids = ["${aws_security_group.sgpriv.id}"]
   associate_public_ip_address = true
   source_dest_check = false
   user_data = "${file("userdata.sh")}"

  tags = {
    Name = "private_server1"
  }
}

# Define database inside the private subnet
resource "aws_instance" "private_server2" {
   ami  = "${var.ami}"
   instance_type = "t1.micro"
   key_name = "${aws_key_pair.default.id}"
   subnet_id = "${aws_subnet.private-subnet.id}"
   vpc_security_group_ids = ["${aws_security_group.sgpriv.id}"]
   associate_public_ip_address = true
   source_dest_check = false
   user_data = "${file("userdata.sh")}"

  tags = {
    Name = "private_server2"
  }
}

variables.tf

As you can see below, the variables file is where I define all the information, such as the AWS region, the subnets that will be used, the AMI ID (you can find it by accessing the AWS console and copying the ID), and finally the SSH public key path on my server/EC2.

variable "aws_region" {
  description = "Region for the VPC"
  default = "ap-southeast-1"
}

variable "vpc_cidr" {
  description = "CIDR for the VPC"
  default = "10.0.0.0/16"
}

variable "public_subnet_cidr" {
  description = "CIDR for the public subnet"
  default = "10.0.1.0/24"
}

variable "private_subnet_cidr" {
  description = "CIDR for the private subnet"
  default = "10.0.2.0/24"
}

variable "ami" {
  description = "Amazon Linux AMI"
  default = "ami-01f7527546b557442"
}

variable "key_path" {
  description = "SSH Public Key path"
  default = "~/.ssh/id_rsa.pub"
}

vpc.tf

This file defines everything related to the network, security groups, and subnets in AWS. As you can see from the file, I assigned the public EC2 instance to the public subnet of the VPC and the two private EC2 instances to the private subnet. Then I configured which ports are open on the public subnet: SSH (22), HTTP (80), HTTPS (443), and ICMP. The same goes for the private subnet, but there I only allow SSH from the public subnet, which means you can reach the private servers only by going through the public (bastion) server; I also open the MySQL port, 3306.

# Define our VPC
resource "aws_vpc" "default" {
  cidr_block = "${var.vpc_cidr}"
  enable_dns_hostnames = true

  tags  ={
    Name = "test-vpc"
  }
}

# Define the public subnet
resource "aws_subnet" "public-subnet" {
  vpc_id = "${aws_vpc.default.id}"
  cidr_block = "${var.public_subnet_cidr}"
  availability_zone = "ap-southeast-1a"

  tags =  {
    Name = "PublicSubnet"
  }
}

# Define the private subnet
resource "aws_subnet" "private-subnet" {
  vpc_id = "${aws_vpc.default.id}"
  cidr_block = "${var.private_subnet_cidr}"
#  availability_zone = "ap-southeast-1"

  tags =  {
    Name = "Private Subnet"
  }
}

# Define the internet gateway
resource "aws_internet_gateway" "gw" {
  vpc_id = "${aws_vpc.default.id}"

  tags =  {
    Name = "VPC IGW"
  }
}

# Define the route table
resource "aws_route_table" "web-public-rt" {
  vpc_id = "${aws_vpc.default.id}"

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.gw.id}"
  }

  tags =  {
    Name = "PublicSubnetRT"
  }
}

# Assign the route table to the public Subnet
resource "aws_route_table_association" "web-public-rt" {
  subnet_id = "${aws_subnet.public-subnet.id}"
  route_table_id = "${aws_route_table.web-public-rt.id}"
}

# Define the security group for public subnet
resource "aws_security_group" "sgpub" {
  name = "vpc_test_pub"
  description = "Allow incoming HTTP connections & SSH access"

  ingress {
    from_port = 80
    to_port = 80
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port = 443
    to_port = 443
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

# The ICMP packet does not have source and destination port numbers because it was designed to 
# communicate network-layer information between hosts and routers, not between application layer processes.

  ingress {
    from_port = -1
    to_port = -1
    protocol = "icmp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port = 22
    to_port = 22
    protocol = "tcp"
    cidr_blocks =  ["0.0.0.0/0"]
  }
  egress {
    from_port       = 0
    to_port         = 0
    protocol        = "-1"
    cidr_blocks     = ["0.0.0.0/0"]
  }

  vpc_id="${aws_vpc.default.id}"

  tags =  {
    Name = "Public Server SG"
  }
}

# Define the security group for private subnet
resource "aws_security_group" "sgpriv"{
  name = "sg_test_web"
  description = "Allow traffic from public subnet"

# You can delete this port, add it her to make it as real environment
  ingress {
    from_port = 3306
    to_port = 3306
    protocol = "tcp"
    cidr_blocks = ["${var.public_subnet_cidr}"]
  }

  ingress {
    from_port = -1
    to_port = -1
    protocol = "icmp"
    cidr_blocks = ["${var.public_subnet_cidr}"]
  }

  ingress {
    from_port = 22
    to_port = 22
    protocol = "tcp"
    cidr_blocks = ["${var.public_subnet_cidr}"]
  }

  vpc_id = "${aws_vpc.default.id}"

  tags =  {
    Name = "PrivateServerSG"
  }
}

userdata.sh

This file contains the commands that Terraform passes to the instances as user data to run at boot time.

#!/bin/sh
set -x
# output log of userdata to /var/log/user-data.log
exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1
yum install -y httpd
service httpd start
chkconfig httpd on
echo "<html><h3>Welcome to the Osama Website</h3></html>" > /var/www/html/index.html

Once you prepare these files, upload them to a folder called “terraform-example” on the machine where you run Terraform (a free-tier setup is enough for testing). With these files, the result will be the following:

  • three EC2 servers with the following names: public_server, private_server1, and private_server2
  • two security groups, one for the public subnet and one for the private subnet
  • a key pair called terraform_key
  • the AWS region will be Singapore (ap-southeast-1)

Now run the following command:

terraform plan

Wait for the output; the command should run successfully without any errors. You can add the “-out=FILENAME” option to save the generated plan to a file, so that terraform apply can later apply exactly that plan.

terraform apply

The above command applies everything to the AWS environment and lets you create this environment in less than a minute.
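One thing worth noting, because it is easy to miss: the working directory has to be initialized before the first plan. A typical run, as a quick sketch, looks like this:

cd terraform-example

# Download the AWS provider plugin and initialize the working directory
terraform init

# Preview the changes and save the plan to a file
terraform plan -out=tfplan

# Apply exactly the saved plan
terraform apply tfplan

# When you are done testing, tear everything down
terraform destroy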

Amazing, huh? This is called infrastructure as code.

The files are uploaded to my GitHub here.

Cheers

Osama

The Ultimate guide to DevOps Tools Part #4 : Docker

In this series about DevOps tools that help you as a DBA automate your work and make it easier, this will be the last part about Docker.

In this post I will show how to pull from and connect to the Oracle repository in the simplest way.

The first step, before doing anything else, is to register on the Oracle Container Registry website here.

After the registration is complete, go back to the Docker machine and run through the following steps (each one was originally shown as a screenshot; a sketch of the commands follows this list):

  • Log in to the Oracle registry with your account information.
  • Choose which product you want to pull and enter the pull command; this step will take some time to finish downloading.
  • Check the image.
  • Start the image.
  • Docker starts and shows the Oracle database log.
  • Access the container.
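Since the original screenshots are not reproduced here, the commands behind those steps look roughly like the sketch below; the image path and tag are examples only, so check the Oracle Container Registry for the product and tag you actually chose:

# Log in to the Oracle Container Registry with your Oracle account
docker login container-registry.oracle.com

# Pull the product you chose (the database image below is only an example)
docker pull container-registry.oracle.com/database/enterprise:12.2.0.1

# Check the image
docker images

# Start the image and follow the Oracle database log until it is ready
docker run -d --name oracledb container-registry.oracle.com/database/enterprise:12.2.0.1
docker logs -f oracledb

# Access the container
docker exec -it oracledb bash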
Cheers 🍻
Osama