One of the biggest conferences in Europe. If you want to learn something new, DON'T MISS THE CHANCE.

The Agenda is here
Register now here
See you there
Cheers
Osama
For the people who think differently: welcome aboard.

I will have two presentations about DevOps.
You can register here
The hashtag in use is #APACGBT2021
Enjoy
Cheers

Quest Oracle Community is home to 25,000+ users of JD Edwards, PeopleSoft, Oracle Cloud apps and Oracle Database products. We connect Oracle users to technology leaders and Oracle experts from companies who are driving innovation and leading through their use of Oracle products.
The Quest Oracle Community is dedicated to helping Oracle users develop skills and expand knowledge by connecting with other Oracle users and experts for education and networking.
I will be presenting about automation.
You can register for the event here
Thank you
Another conference to share knowledge; it's great to network with people who want to learn something about tech.
Michigan Oracle Users Summit, Monday, October 25, 2021 through Thursday, October 28, 2021. Register now here

Happy to share the list with these great speakers.

Cheers
Osama

I will be speaking at Build Up – DevOps Edition. It's more of a discussion about DevOps and why it's important now. Don't forget to register and learn something new.
The link is here

Regards
Osama
Continuing from my previous post, Configure Kubernetes, on my blog.
This post will discuss how to scale pods. I will assume Kubernetes is already installed; if not, go back to the post above.
If you already did the steps below, you can skip them.
Initialize the cluster
kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.11.3
As mentioned, the command output will include follow-up commands like the ones below.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Flannel is an open-source virtual network project, managed by CoreOS, designed for Kubernetes. Each host in a flannel cluster runs an agent called flanneld. It assigns each host a subnet, which acts as the IP address pool for containers running on that host.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
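Once the network add-on is applied, a quick sanity check (not part of the original steps, just a way to confirm things are healthy) is to make sure the node reports Ready and the flannel pod is running:
kubectl get nodes
kubectl get pods --all-namespaces
Now create the deployment: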
vi deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd-deployment
  labels:
    app: httpd
spec:
  replicas: 3
  selector:
    matchLabels:
      app: httpd
  template:
    metadata:
      labels:
        app: httpd
    spec:
      containers:
      - name: httpd
        image: httpd:latest
        ports:
        - containerPort: 80
kubectl create -f deployment.yml
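To confirm the three replicas came up before moving on, you can list the deployment and its pods (output will differ per cluster):
kubectl get deployments
kubectl get pods -l app=httpd
Next, expose the deployment with a service: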
vim service.yml
kind: Service
apiVersion: v1
metadata:
  name: service-deployment
spec:
  selector:
    app: httpd
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: NodePort
kubectl create -f service.yml
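Since the service type is NodePort, Kubernetes assigns a port in the 30000-32767 range. You can look it up and test it with curl; <NODE_IP> and <NODE_PORT> below are placeholders, not values from this setup:
kubectl get service service-deployment
curl http://<NODE_IP>:<NODE_PORT>
Now, to scale the pods, edit the deployment again: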
vi deployment.yml
Change the number of replicas to 5:
spec:
  replicas: 5
kubectl apply -f deployment.yml
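As a side note, you can reach the same result without editing the file; kubectl can scale the deployment directly and then show the new pods (same deployment name as above):
kubectl scale deployment httpd-deployment --replicas=5
kubectl get pods -l app=httpd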
Enjoy
Hope it’s useful
Osama
Docker includes multiple logging mechanisms to help you get information from running containers and services. These mechanisms are called logging drivers. Each Docker daemon has a default logging driver, which each container uses unless you configure it to use a different logging driver, or “log-driver” for short.
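If you are curious which driver your daemon is using right now, docker info can print it; on a default installation it is usually json-file:
docker info --format '{{.LoggingDriver}}'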
STEPS:
Configure Docker to use syslog.
Edit /etc/rsyslog.conf and uncomment the two lines under `Provides UDP syslog reception` by removing the `#`:
#ModLoad imudp
#UDPServerRun 514
Then
systemctl start rsyslog
sudo mkdir /etc/docker
vi /etc/docker/daemon.json
{ "log-driver":
"syslog",
"log-opts": {
"syslog-address": "udp://<PRIVATE_IP>:514" }
}
Then
systemctl start docker
Time to use it with Docker.
For example, the first method:
docker container run -d --name syslog-logging httpd
Check by
docker logs syslog-logging
Or
tail /var/log/messages
The second way is to enable the logging driver per container:
docker container run -d --name json-logging --log-driver json-file httpd
Check
docker logs json-logging
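You can also confirm which driver each container actually got by inspecting it (these are the two container names created above):
docker inspect --format '{{.HostConfig.LogConfig.Type}}' syslog-logging
docker inspect --format '{{.HostConfig.LogConfig.Type}}' json-logging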
Docker power 👌
Enjoy
Osama
I have been posting about Docker a lot recently because these blog posts had been sitting in my drafts for a long time waiting for someone to finish them 😅 and finally I had the chance to do that.
I posted recently about storing container data in AWS S3; this time it's the same topic but for Google Cloud, and how to configure it.
Configuration
export projnum=$(curl http://metadata.google.internal/computeMetadata/v1/project/numeric-project-id -sH "Metadata-Flavor: Google")
export BUCKET="Osama-${projnum}"
echo $projnum
echo $BUCKET
gsutil mb -l us-central1 -c standard gs://$BUCKET
sudo yum install -y gcsfuse
sudo sed -ri 's/# user_allow_other/user_allow_other/' /etc/fuse.conf
sudo mkdir /mnt/Osama /tmp/gcs
sudo chown Osama: /mnt/Osama/ /tmp/gcs
gcsfuse -o allow_other --temp-dir=/tmp/gcs $BUCKET /mnt/Osama/
gsutil ls gs://$BUCKET
Use the bucket inside the container
docker run -d --name web1 --mount type=bind,source=/mnt/Osama,target=/usr/local/apache2/htdocs,readonly -p 80:80 httpd:2.4
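A quick end-to-end check, assuming the mount above worked, is to drop a file into the bucket path on the host and request it through Apache; hello.html is just a made-up file name for the test:
echo "served from a GCS bucket" > /mnt/Osama/hello.html
curl http://localhost/hello.html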
Enjoy
Simple and Fun 😉
Osama
What is docker compose ?
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.
Just a quick post with an example docker-compose file to show you how much more powerful this is than running each docker command by hand.
1. Create the docker-compose.yml file:
version: '3'
services:
  ghost:
    image: ghost:1-alpine
    container_name: ghost-blog
    restart: always
    ports:
      - 80:2368
    environment:
      database__client: mysql
      database__connection__host: mysql
      database__connection__user: root
      database__connection__password: P4sSw0rd0!
      database__connection__database: ghost
    volumes:
      - ghost-volume:/var/lib/ghost
    depends_on:
      - mysql
  mysql:
    image: mysql:5.7
    container_name: ghost-db
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: P4sSw0rd0!
    volumes:
      - mysql-volume:/var/lib/mysql
volumes:
  ghost-volume:
  mysql-volume:
2. Run
docker-compose up -d
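If you want to watch it come up, docker-compose can show the state of both containers and follow the Ghost logs while it connects to MySQL:
docker-compose ps
docker-compose logs -f ghost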
Finished
Osama
Imagine you have multiple instances and you want to change something; doing it manually will take a lot of your time, so why not automate the process?
I uploaded one of my projects to automate this. It lets you automate the simplest things, for example when a new employee joins and you need to add their SSH key to your instances (you can even choose which VMs you want them to access): just add the key in the roles, configure the pipeline on your repo, and the code will run automatically.
I uploaded the project on my github HERE.
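If you only want the general idea before opening the repo, the heart of a role like this is usually one task built on Ansible's authorized_key module. This is a rough sketch with made-up variable and file names, not the exact code from the project:
# roles/ssh-keys/tasks/main.yml (illustrative sketch only)
- name: Add the new employee's public key to the target user
  authorized_key:
    user: "{{ ssh_user }}"                                 # account on the VM that gets the key
    key: "{{ lookup('file', 'files/new_employee.pub') }}"  # the public key file stored in the role
    state: present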
Regards
Enjoy the power of automation.
Osama