
Java Business Service/Siebel

Java Business Service (JBS) is a service framework that allows custom business services to be implemented in Java and run from a Siebel application.

If you have experience with Java, you will likely find it easy to create a business service in Java.

Scenario: we wanted a business service to convert Gregorian dates to Hijri dates. We implemented the conversion in Java, created a JBS from it, and then called it from Siebel.

 

Steps :

  • Java Configuration in CFG.
  • Adding The required jar and jdk.
  • Creating the code and Exporting the jar file.
  • Creating Business service in Tools.

 

The following document discusses these steps in detail; you can access it here.

Cheers 🍻

Osama

Shell Scripting for Beginners

Bash scripting is one of the skills that every system administrator or DBA should know. Why? Because it makes your life much easier. For example, imagine you are about to change the permissions of the files under one directory, and inside it you have 10 files (I know, that's not many). Will you do it one by one? Maybe, if you are a fast typist, but what if there are far more than 10?
Shell scripting is the solution for this.
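To make the example concrete, here is a quick sketch (the directory and file names are made up for illustration) that sets the permissions of every file in a directory with a single chmod, instead of one command per file:

```shell
# Create a throwaway directory with a few files (illustrative names).
mkdir -p mydir
touch mydir/f1.txt mydir/f2.txt mydir/f3.txt

# One command changes the permissions of all the files at once,
# no matter how many there are.
chmod 644 mydir/*
ls -l mydir
```

The glob `mydir/*` expands to every file in the directory, so the same one-liner works whether there are 10 files or 10,000.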

Bash is a command language, and it is widely available on various operating systems; the name itself comes from Bourne-Again SHell.

Shell is what allows you interactive or non-interactive execution.

Finally, scripting: the commands that will be executed one by one.

One of the simplest examples is "Hello World". Save it as hello.sh:

 

#!/bin/sh
echo "Hello, World"
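To run it, make the script executable first. The sketch below recreates hello.sh with a here-document so it is self-contained:

```shell
# Write the script to a file, mark it executable, and run it.
cat > hello.sh <<'EOF'
#!/bin/sh
echo "Hello, World"
EOF
chmod +x hello.sh
./hello.sh
```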

If we modify this script to read the user's name, it will look like this :-

#!/bin/sh
echo "What is your name?"
read MY_NAME
echo "Hello, $MY_NAME"
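Since read takes its input from standard input, the same script can also run non-interactively, for example by piping a name into it (a self-contained sketch that recreates the script first):

```shell
# Recreate the script, then feed it a name through a pipe
# instead of typing it interactively.
cat > name.sh <<'EOF'
#!/bin/sh
echo "What is your name?"
read MY_NAME
echo "Hello, $MY_NAME"
EOF
printf 'Osama\n' | sh name.sh
```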

Mathematics examples :-

Shell scripting can also be used for mathematical operations, such as the following :-

#!/bin/bash
((sum=25+35))
echo $sum

#!/bin/bash
((area=2*2))
echo $area
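The same calculations can also be written with the POSIX $(( )) form, which works in plain sh as well as bash and lets you assign the result directly:

```shell
#!/bin/sh
# POSIX arithmetic expansion: works in sh, not only bash.
sum=$((25 + 35))
echo "$sum"
area=$((2 * 2))
echo "$area"
```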

Looping :-

If you want to repeat something, you should use either a for or a while loop, which will make your life easier :-

The following example prints the counter each time through the loop; as you can see, the counter starts at 5 and the loop exits once it reaches 0.

#!/bin/bash
for (( counter=5; counter>0; counter-- ))
do
echo -n "$counter "
done
printf "\n"

Using while: the loop below keeps running, but once the counter reaches 5 it is terminated with break.

#!/bin/bash
Bol_value=true
count=1
while [ $Bol_value ]
do
  echo $count
  if [ $count -eq 5 ]; then
    break
  fi
  ((count++))
done
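As a small variation of my own (not from the original post), the same count can be written with an until loop, which keeps running until its condition becomes true, so no break is needed:

```shell
#!/bin/bash
# Count from 1 to 5; the loop stops when count exceeds 5.
count=1
until [ "$count" -gt 5 ]
do
  echo "$count"
  count=$((count + 1))
done
```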

 

Operator Description
-eq Checks if the values of two operands are equal; if yes, then the condition becomes true.
-ne Checks if the values of two operands are not equal; if they are not equal, then the condition becomes true.
-gt Checks if the value of the left operand is greater than the value of the right operand; if yes, then the condition becomes true.
-lt Checks if the value of the left operand is less than the value of the right operand; if yes, then the condition becomes true.
-ge Checks if the value of the left operand is greater than or equal to the value of the right operand; if yes, then the condition becomes true.
-le Checks if the value of the left operand is less than or equal to the value of the right operand; if yes, then the condition becomes true.
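A short sketch showing a couple of these operators inside an if test (the values 10 and 20 are arbitrary):

```shell
#!/bin/sh
a=10
b=20
# -lt : true when the left operand is less than the right one.
if [ "$a" -lt "$b" ]; then
  echo "$a is less than $b"
fi
# -ne : true when the two operands are not equal.
if [ "$a" -ne "$b" ]; then
  echo "$a is not equal to $b"
fi
```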

 

If you want to learn more, you can continue from here.

Summary:-

  • Shell is a program which interprets user commands through a CLI such as Terminal.
  • Shell scripting is writing a series of commands to execute.
  • Shell scripting can help you create complex programs.

 

Cheers

Osama

Build, Deploy and Run Node Js Application on Azure using Docker

This documentation explains, step by step, how to build, deploy, and run a Node.js application on Azure using Docker.

The idea came when one of our customers asked us to automate this for them. They already had an application written in Node.js, and since I can't post the client's code here, I searched online and found this sample to use instead of the actual code 😅

The reader should have some knowledge of Azure Cloud; this document will guide you through creating and working on Azure, but you still need to understand basic Azure concepts, plus basic knowledge of Node.js and how to write a Dockerfile. I provided everything on my GitHub here. The code is a sample that was originally deployed on Heroku, but it can still be deployed on Azure using this documentation 🤔
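For reference, a minimal Dockerfile for a Node.js app usually looks something like the sketch below (the file names, base image, and port are assumptions for illustration; the actual sample on GitHub may differ):

```dockerfile
# Hypothetical minimal Dockerfile for a Node.js app.
FROM node:10-alpine

WORKDIR /usr/src/app

# Install dependencies first so Docker can cache this layer.
COPY package*.json ./
RUN npm install

# Copy the application source and expose the app's port (assumed 3000).
COPY . .
EXPOSE 3000

CMD ["node", "index.js"]
```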

The documentation is uploaded to my SlideShare here.

 

Cheers

Osama

 

 

Oracle Database Application Security Book

Finally …

The Book is alive

For the first time, the book discusses critical security issues such as database threats and how to avoid them. The book also includes advanced topics about Oracle Internet Directory, Oracle Access Manager, and how to implement a full single sign-on cycle.

Focus on the security aspects of designing, building, and maintaining a secure Oracle Database application. Starting with data encryption, you will learn to work with transparent data encryption, backups, and networks. You will then go through the key principles of audits, where you will get to know more about identity preservation, policies, and fine-grained audits. Moving on to virtual private databases, you'll set up and configure a VPD to work in concert with other security features in Oracle, followed by tips on managing configuration drift, profiles, and default users.

What You Will Learn:- 

  • Work with Oracle Internet Directory using the command line and the console.
  • Integrate Oracle Access Manager with different applications.
  • Work with the Oracle Identity Manager console and connectors, while creating your own custom connector.
  • Troubleshoot issues with OID, OAM, and OIM.
  • Dive deep into file system and network security concepts.
  • A first-of-its-kind chapter that covers the most critical real-life database threats.

 

You can buy the book now from Amazon here.

 

Cheers

Osama

Cloud Computing Fundamentals Courses

Cloud Fundamentals is designed to introduce the core cloud concepts to IT Support learners. This course provides an historical perspective of how IT has evolved to the point where it is now using cloud solutions. The course examines the different types of cloud solutions that are available, as well as the basics of cloud services, cloud usage models, and cloud security. The course concludes with an introduction to Microsoft Azure, Amazon AWS and Oracle Cloud.

What you’ll learn

  • Examine core cloud concepts
  • Review basic cloud services
  • Analyze cloud usage models
  • Examine cloud security basics
  • Learn about Microsoft Azure as an IaaS and PaaS solution
  • Learn about Amazon AWS as an IaaS and PaaS solution
  • Learn about Oracle Cloud as an IaaS and PaaS solution

Course date

The course starts on 1st Sept 2019 – 15 hours over 5 days.

Course time

UK time :- 4:00 PM

Jordan time :- 6:00 PM

The course time can be changed depending on agreement between the instructor and the students.

Course fees 

$300


Generate 10046 and 10053 trace

Who hasn't faced an issue with a database or query and wanted to know more about it? What is going on behind that query or application?

Oracle provides different methods to do that. One of them is to enable and generate a trace called 10046. The idea behind this kind of trace is that we can track the execution plan for the session and get more information, such as bind variables, wait times, parse statistics, and a lot of other information related to performance issues.

To generate the trace, follow the steps below. You can use the "SYS" user or any other user, depending on the session. Note that you must turn the trace off at the end to complete gathering the information, as shown below:

spool check.out 
set timing on 
alter session set tracefile_identifier='NAME_OF_TRACE'; 
alter session set timed_statistics = true; 
alter session set statistics_level=all; 
alter session set max_dump_file_size = unlimited; 
alter session set events '10046 trace name context forever, level 12'; 
######
Run the query or the code here
#####
select 'close the cursor' from dual; 
alter session set events '10046 trace name context off'; 
spool off 
exit; 


Important hint :-

  • exit is very important to complete and close the trace.
  • You can change the name of the trace depending on what you want.
  • Turn the trace off after you finish, to complete gathering the information.
  • We select from dual to ensure the previous cursor is closed.

The 10053 trace also provides information, but it can only be generated for a hard-parsed SQL statement, which means you should add an Oracle hint to the query to force a hard parse.

spool check.out 
set timing on 
alter session set tracefile_identifier='NAME_OF_THE_TRACE'; 
alter session set timed_statistics = true; 
alter session set statistics_level=all; 
alter session set max_dump_file_size = unlimited; 
alter session set events '10053 trace name context forever'; 

######
Run the problematic statement here
#####

select 'close the cursor' from dual; 
alter session set events '10053 trace name context off'; 
spool off 
exit; 

Important hint :-

  • exit is very important to complete and close the trace.
  • You can change the name of the trace depending on what you want.
  • Turn the trace off after you finish, to complete gathering the information.
  • We select from dual to ensure the previous cursor is closed.

Now you can use tkprof to make the trace more readable. tkprof is located under $ORACLE_HOME/bin. Run the following command after generating the trace above:

tkprof <trace-file> <output-file> explain=<username>/<password>

cheers

Thank you

Using terraform to build AWS environment

What is Terraform?

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.

It is built by HashiCorp, an AWS Partner Network (APN) Advanced Technology Partner and a member of the AWS DevOps Competency.

To configure Terraform, you create files with the extension ".tf", such as the following :-

  • Providers :- A provider is responsible for understanding API interactions and exposing resources. Providers generally are an IaaS (e.g. AWS, GCP, Microsoft Azure, OpenStack), PaaS (e.g. Heroku), or SaaS service (e.g. Terraform Enterprise, DNSimple, CloudFlare).
  • Resources :- The resource block creates a resource of the given TYPE (first parameter) and NAME (second parameter). The combination of type and name must be unique. Within the block (the { }) is the configuration for the resource, which depends on the type and is documented for each resource type in the providers section.
  • Variables :- Defined by stating a name, a type, and a default value. However, the type and default value are not strictly necessary; Terraform can deduce the type of the variable from the default or input value.
  • VPC :- Used here to define the security groups, subnets, and ports in the AWS environment.

In this post I will do the following with Terraform. You have to create and sign up for an AWS account so you will be able to test this code and use Terraform. What I will do here is:

  • Create a private subnet.
  • Create a public subnet.
  • Add an SSH bastion on the public subnet only.
  • Add two EC2 instances to the private subnet.

Let's start. As mentioned earlier, you should have 4 files: provider.tf, resource.tf, variables.tf, and vpc.tf.

Provider.tf

As you can see from the file below, it contains our cloud provider and the region, which depends on a variable that will be defined later.

# Define AWS as our provider
provider "aws" {
  region = "${var.aws_region}"
}

resource.tf

The resource file is where I create the SSH key. There are different ways to do it; in my case I used PuTTYgen, copied the public key here, and saved the public/private pair so I can use them later (the other way is to have it generated automatically). Then I define which AMI will be used for the servers/EC2 instances created in AWS; the ID for this AMI is defined in the variables file.

# Define SSH key pair for the instances
resource "aws_key_pair" "default" {
  key_name = "terraform_key"
  public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAtGCFXHeo9igKRzm3hNG5WZKXkQ3/NqQc1WPN8pSrEb0ZjW8mTKXRWzePuVYXYP9txqEKQmJJ1bk+pYX/zDdhJg/yZbZGH4V0LvDY5X5ndnAjN6CHkS6iK2EK1GlyJs6fsa+6oUeH23W2w49GHivSsCZUZuaSwdORoJk9QLeJ7Qz+9YQWOk0efOr+eIykxIDwR71SX5X65USbR8JbuT2kyrp1kVKsmPMcfy2Ehzd4VjShlHsZZlbzKTyfgaX/JJmYXO5yD4VLSjY8BVil4Yq/R9Tkz9pFxCG230XdFWCFEHSqS7TIDFbhPkp18jna6P6hlfNb9WM2gVZbYvr1MMnAVQ== rsa-key-20190805"
}

# Define a server 1 inside the public subnet
resource "aws_instance" "public_server" {
   ami  = "${var.ami}"
   instance_type = "t1.micro"
   key_name = "${aws_key_pair.default.id}"
   subnet_id = "${aws_subnet.public-subnet.id}"
   vpc_security_group_ids = ["${aws_security_group.sgpub.id}"]
   associate_public_ip_address = true
   source_dest_check = false
   user_data = "${file("userdata.sh")}"

  tags = {
    Name = "public_server"
  }
}

# Define database inside the private subnet
resource "aws_instance" "private_server1" {
   ami  = "${var.ami}"
   instance_type = "t1.micro"
   key_name = "${aws_key_pair.default.id}"
   subnet_id = "${aws_subnet.private-subnet.id}"
   vpc_security_group_ids = ["${aws_security_group.sgpriv.id}"]
   associate_public_ip_address = true
   source_dest_check = false
   user_data = "${file("userdata.sh")}"

  tags = {
    Name = "private_server1"
  }
}

# Define database inside the private subnet
resource "aws_instance" "private_server2" {
   ami  = "${var.ami}"
   instance_type = "t1.micro"
   key_name = "${aws_key_pair.default.id}"
   subnet_id = "${aws_subnet.private-subnet.id}"
   vpc_security_group_ids = ["${aws_security_group.sgpriv.id}"]
   associate_public_ip_address = true
   source_dest_check = false
   user_data = "${file("userdata.sh")}"

  tags = {
    Name = "private_server2"
  }
}

variables.tf

As you can see below, the variables file is where I defined all the information, such as the AWS region, the subnets that will be used, the AMI ID (you can find it by accessing the AWS console and copying the ID), and finally the SSH public key path on my server/EC2.

variable "aws_region" {
  description = "Region for the VPC"
  default = "ap-southeast-1"
}

variable "vpc_cidr" {
  description = "CIDR for the VPC"
  default = "10.0.0.0/16"
}

variable "public_subnet_cidr" {
  description = "CIDR for the public subnet"
  default = "10.0.1.0/24"
}

variable "private_subnet_cidr" {
  description = "CIDR for the private subnet"
  default = "10.0.2.0/24"
}

variable "ami" {
  description = "Amazon Linux AMI"
  default = "ami-01f7527546b557442"
}

variable "key_path" {
  description = "SSH Public Key path"
  default = "~/.ssh/id_rsa.pub"
}

VPC.tf

This file defines everything related to networking, security groups, and subnets in the AWS cloud. As you can see from the file, I assigned the public EC2 instance to the public subnet in the VPC and the two private EC2 instances to the private subnet. Then I configured which ports will be open on the public subnet: SSH (22), HTTP (80), HTTPS (443), and ICMP. The private subnet is similar, but SSH is only open from the public subnet, which means you can reach the private servers only through the public (bastion) server. I also open the MySQL port, 3306.

# Define our VPC
resource "aws_vpc" "default" {
  cidr_block = "${var.vpc_cidr}"
  enable_dns_hostnames = true

  tags = {
    Name = "test-vpc"
  }
}

# Define the public subnet
resource "aws_subnet" "public-subnet" {
  vpc_id = "${aws_vpc.default.id}"
  cidr_block = "${var.public_subnet_cidr}"
  availability_zone = "ap-southeast-1a"

  tags =  {
    Name = "PublicSubnet"
  }
}

# Define the private subnet
resource "aws_subnet" "private-subnet" {
  vpc_id = "${aws_vpc.default.id}"
  cidr_block = "${var.private_subnet_cidr}"
#  availability_zone = "ap-southeast-1"

  tags =  {
    Name = "Private Subnet"
  }
}

# Define the internet gateway
resource "aws_internet_gateway" "gw" {
  vpc_id = "${aws_vpc.default.id}"

  tags =  {
    Name = "VPC IGW"
  }
}

# Define the route table
resource "aws_route_table" "web-public-rt" {
  vpc_id = "${aws_vpc.default.id}"

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.gw.id}"
  }

  tags =  {
    Name = "PublicSubnetRT"
  }
}

# Assign the route table to the public Subnet
resource "aws_route_table_association" "web-public-rt" {
  subnet_id = "${aws_subnet.public-subnet.id}"
  route_table_id = "${aws_route_table.web-public-rt.id}"
}

# Define the security group for public subnet
resource "aws_security_group" "sgpub" {
  name = "vpc_test_pub"
  description = "Allow incoming HTTP connections & SSH access"

  ingress {
    from_port = 80
    to_port = 80
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port = 443
    to_port = 443
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

# The ICMP packet does not have source and destination port numbers because it was designed to 
# communicate network-layer information between hosts and routers, not between application layer processes.

  ingress {
    from_port = -1
    to_port = -1
    protocol = "icmp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port = 22
    to_port = 22
    protocol = "tcp"
    cidr_blocks =  ["0.0.0.0/0"]
  }
  egress {
    from_port       = 0
    to_port         = 0
    protocol        = "-1"
    cidr_blocks     = ["0.0.0.0/0"]
  }

  vpc_id = "${aws_vpc.default.id}"

  tags =  {
    Name = "Public Server SG"
  }
}

# Define the security group for private subnet
resource "aws_security_group" "sgpriv"{
  name = "sg_test_web"
  description = "Allow traffic from public subnet"

# You can delete this rule; I added it here to make it look like a real environment
  ingress {
    from_port = 3306
    to_port = 3306
    protocol = "tcp"
    cidr_blocks = ["${var.public_subnet_cidr}"]
  }

  ingress {
    from_port = -1
    to_port = -1
    protocol = "icmp"
    cidr_blocks = ["${var.public_subnet_cidr}"]
  }

  ingress {
    from_port = 22
    to_port = 22
    protocol = "tcp"
    cidr_blocks = ["${var.public_subnet_cidr}"]
  }

  vpc_id = "${aws_vpc.default.id}"

  tags =  {
    Name = "PrivateServerSG"
  }
}

userdata.sh

This file contains the commands that will run on each instance at boot; Terraform passes it to the instances as user data.

#!/bin/sh
set -x
# output log of userdata to /var/log/user-data.log
exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1
yum install -y httpd
service httpd start
chkconfig httpd on
echo "<html><h3>Welcome to the Osama Website</h3></html>" > /var/www/html/index.html

Once you prepare these files, place them in a folder called "terraform-example"; running them will create free-tier AWS EC2 machines, and the output will be the following :-

  • Three EC2 servers with the following names: private_server1, private_server2, and public_server.
  • Two security groups, one for public and one for private.
  • A key pair called terraform_key.
  • The AWS region will be Singapore.

Now, run "terraform init" once in the directory to download the AWS provider plugin, then run the following command :-

terraform plan

Wait for the output; the command should run successfully without any errors. You can add the flag "-out=NAME_OF_PLAN_FILE", which saves the plan to a file so that terraform apply can execute exactly that plan.

terraform apply

The above command will apply everything to the AWS environment, letting you create this entire environment in less than a minute.

Amazing, huh? This is called infrastructure as code.

The files are uploaded to my GitHub here.

Cheers

Osama

TOAD: ORA-12170 when trying to connect using TOAD

The following error may appear when you try to connect to the database using the TOAD application: ORA-12170: TNS:Connect timeout occurred.

Make sure of the following :-

  • The database is up and running.
  • The listener is up and the database is registered with it, which you can verify using:

lsnrctl status

If the above steps are done and you still face the same issue, then do the following :-

  • Right-click on My Computer and choose Properties.
  • Advanced system settings.
  • Environment Variables.

Check that the TNS_ADMIN entry exists; if not, add it by pressing New, entering TNS_ADMIN as the variable name and the directory that contains tnsnames.ora and sqlnet.ora as the value.

 

Thanks

Osama