K8s Example

Create a Service Account

It’s a single, simple command:

kubectl create sa webautomation -n web

Create a ClusterRole That Provides Read Access to Pods

  1. Define the ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]

Bind the ClusterRole to the Service Account to Only Read Pods in the web Namespace

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rb-pod-reader
  namespace: web
subjects:
- kind: ServiceAccount
  name: webautomation
  namespace: web
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
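
To double-check that the binding behaves as intended, you can impersonate the service account with kubectl auth can-i (the namespace and names below are the ones used in this example):

kubectl auth can-i list pods --namespace web \
  --as system:serviceaccount:web:webautomation
# expected answer: yes

kubectl auth can-i delete pods --namespace web \
  --as system:serviceaccount:web:webautomation
# expected answer: no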

Cheers

Osama

OCI Basics – Putting Data into Object Storage OCI

The Object Storage service provides reliable, secure, and scalable object storage. Object storage is a storage architecture that stores and manages data as objects. Some typical use cases include data backup, file sharing, and storing unstructured data like logs and sensor-generated data.

Creating a Bucket

  1. Open the navigation menu and click Storage. Under Object Storage, click Buckets. A list of the buckets in the compartment you’re viewing is displayed.
  2. Select a compartment from the Compartment list on the left side of the page. A list of existing buckets is displayed.
  3. Click Create Bucket and configure the following:
    • Bucket Name
    • Default Storage Tier: Select the default tier in which you want to store your data.
      • Standard is the primary, default storage tier. Use the Standard tier for storing frequently accessed data that requires fast and immediate access.
      • Archive is the default storage tier used for archive storage. Use the Archive tier for storing rarely accessed data that requires long retention periods. Access to data in the Archive tier is not immediate; archived data must be restored before it is accessible.
    • Object Events: Select Emit Object Events if you want the bucket to emit events for object state changes.
    • Encryption: Buckets are encrypted with keys managed by Oracle by default, but you can optionally encrypt the data in this bucket using your own Vault encryption key. To use Vault for your encryption needs, select Encrypt Using Customer-Managed Keys.
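
If you prefer the command line, the same bucket can be created with the OCI CLI. This is only a minimal sketch, assuming the CLI is installed and configured; the bucket name and compartment OCID below are placeholders:

oci os bucket create \
  --compartment-id ocid1.compartment.oc1..exampleuniqueid \
  --name my-demo-bucket \
  --storage-tier Standard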

Uploading Files to a Bucket

To upload files to your bucket using the Console:

  1. From the Object Storage Buckets screen, click the bucket name to view its details.
  2. Click Upload.
  3. In the Object Name Prefix field, optionally specify a file name prefix for the files that you plan to upload.
  4. If the Storage Tier field displays Standard, you can optionally change the tier in which the uploaded objects are stored.
  5. Select the files to upload (or drag them into the drop zone), and then click Upload.
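
The upload can also be scripted with the OCI CLI. This is a rough equivalent of the Console steps above (bucket name, file, and object name are placeholders; the prefix is simply part of the object name):

oci os object put \
  --bucket-name my-demo-bucket \
  --file ./app-logs.tar.gz \
  --name logs/app-logs.tar.gz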

Cheers

Osama

Launching Windows Instance on OCI

In this post, I will show you how to launch and connect to a Windows instance.

  • Create a cloud network and subnet that enables internet access
  • Launch an instance
  • Connect to the instance
  • Add and attach a block volume

I already posted how to launch a Linux instance on OCI here; in that post, follow the first two steps, which are:

  • Choose a compartment for your resources.
  • Create a cloud network.

Once you are done, you can start with step #3, which is launching the instance (a Windows one this time).

  1. Open the navigation menu and click Compute. Under Compute, click Instances.
  2. Click Create instance.
  3. In the Placement section, accept the default Availability domain.
  4. In the Image and shape section, do the following:
    • In the Image source list, select Platform images.
    • Select Windows. Then, in the OS version list, select Server 2019 Standard.
    • Review and accept the terms of use, and then click Select image.
  5. In the Shape section, click Change Shape. Then, do the following:
    • For Instance type, accept the default, Virtual machine.
    • For Shape series, select AMD, and then choose either the VM.Standard.E4.Flex shape or the VM.Standard.E3.Flex shape (it doesn’t matter which). Accept the default values for OCPUs and memory.
    • The shape defines the number of CPUs and amount of memory allocated to the instance.
  6. In the Networking section, configure the network details for the instance. Do not accept the defaults.
    • For Primary network, leave Select existing virtual cloud network selected.
    • Select the cloud network that you created. If necessary, click Change Compartment to switch to the compartment containing the cloud network that you created.
  7. In the Boot volume section, leave all the options cleared.

Your instance is now ready.

You connect to the Windows instance using Remote Desktop: enter the public IP address, the username (opc), and the initial password.
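
If you prefer to fetch the generated initial password from the CLI instead of the Console, something along these lines should work (the instance OCID is a placeholder):

oci compute instance get-windows-initial-creds \
  --instance-id ocid1.instance.oc1..exampleuniqueid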

Cheers

Osama

Using PersistentVolumes in Kubernetes

PersistentVolumes provide a way to treat storage as a dynamic resource in Kubernetes. This lab will allow you to demonstrate your knowledge of PersistentVolumes. You will mount some persistent storage to a container using a PersistentVolume and a PersistentVolumeClaim.

Create a custom Storage Class by using vi localdisk.yml.

apiVersion: storage.k8s.io/v1 
kind: StorageClass 
metadata: 
  name: localdisk 
provisioner: kubernetes.io/no-provisioner
allowVolumeExpansion: true

Finish creating the Storage Class by using kubectl create -f localdisk.yml.
Create the PersistentVolume by using vi host-pv.yml.

kind: PersistentVolume 
apiVersion: v1 
metadata: 
   name: host-pv 
spec: 
   storageClassName: localdisk
   persistentVolumeReclaimPolicy: Recycle 
   capacity: 
      storage: 1Gi 
   accessModes: 
      - ReadWriteOnce 
   hostPath: 
      path: /var/output

Finish creating the PersistentVolume by using kubectl create -f host-pv.yml.

Check the status of the PersistentVolume by using kubectl get pv.

Create a PersistentVolumeClaim

Start creating a PersistentVolumeClaim for the PersistentVolume to bind to by using vi host-pvc.yml.

apiVersion: v1 
kind: PersistentVolumeClaim 
metadata: 
   name: host-pvc 
spec: 
   storageClassName: localdisk 
   accessModes: 
      - ReadWriteOnce 
   resources: 
      requests: 
         storage: 100Mi

Finish creating the PersistentVolumeClaim by using kubectl create -f host-pvc.yml.

Check the status of the PersistentVolume and PersistentVolumeClaim to verify that they have been bound:

kubectl get pv
kubectl get pvc
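
If everything is wired up correctly, both objects report a Bound status. The output should look roughly like this (columns trimmed, and ages will differ):

NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS
host-pv   1Gi        RWO            Recycle          Bound    default/host-pvc   localdisk

NAME       STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS
host-pvc   Bound    host-pv   1Gi        RWO            localdisk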

Create a Pod That Uses a PersistentVolume for Storage

Create a Pod that uses the PersistentVolumeClaim by using vi pv-pod.yml.

apiVersion: v1 
kind: Pod 
metadata: 
   name: pv-pod 
spec: 
   containers: 
      - name: busybox 
        image: busybox 
        command: ['sh', '-c', 'while true; do echo Success! > /output/success.txt; sleep 5; done'] 

Mount the PersistentVolume to the /output location by adding the following, which should be level with the containers spec in terms of indentation:

volumes: 
 - name: pv-storage 
   persistentVolumeClaim: 
      claimName: host-pvc

In the containers spec, below the command, set the list of volume mounts by using:

volumeMounts: 
- name: pv-storage 
  mountPath: /output 

Finish creating the Pod by using kubectl create -f pv-pod.yml.

Check that the Pod is up and running by using kubectl get pods.

If you wish, you can log in to the worker node and verify the output data by using cat /var/output/success.txt.
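
Alternatively, you can check from inside the Pod itself without logging in to the node:

kubectl exec pv-pod -- cat /output/success.txt
# should print: Success!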

Backing up and Restoring Kubernetes Data in etcd

Backups are an important part of any resilient system. Kubernetes is no exception. In this post, I will show you how to back up and restore Kubernetes data.

Back Up the etcd Data

  1. Look up the value for the key cluster.name in the etcd cluster:
ETCDCTL_API=3 etcdctl get cluster.name \
  --endpoints=https://10.0.1.101:2379 \
  --cacert=/home/cloud_user/etcd-certs/etcd-ca.pem \
  --cert=/home/cloud_user/etcd-certs/etcd-server.crt \
  --key=/home/cloud_user/etcd-certs/etcd-server.key

2. Back up etcd using etcdctl and the provided etcd certificates:

ETCDCTL_API=3 etcdctl snapshot save /home/cloud_user/etcd_backup.db \
  --endpoints=https://10.0.1.101:2379 \
  --cacert=/home/cloud_user/etcd-certs/etcd-ca.pem \
  --cert=/home/cloud_user/etcd-certs/etcd-server.crt \
  --key=/home/cloud_user/etcd-certs/etcd-server.key

3. Reset etcd by removing all existing etcd data.

Note: you don’t have to do this step in production; I am doing it here only to show how the restore works.

sudo systemctl stop etcd
sudo rm -rf /var/lib/etcd

Restore the etcd Data from the Backup

  1. Restore the etcd data from the backup (this command spins up a temporary etcd cluster, saving the data from the backup file to a new data directory in the same location where the previous data directory was):
sudo ETCDCTL_API=3 etcdctl snapshot restore /home/cloud_user/etcd_backup.db \
  --initial-cluster etcd-restore=https://10.0.1.101:2380 \
  --initial-advertise-peer-urls https://10.0.1.101:2380 \
  --name etcd-restore \
  --data-dir /var/lib/etcd

2. Set ownership on the new data directory

sudo chown -R etcd:etcd /var/lib/etcd

3. Start etcd

sudo systemctl start etcd

4. Verify the restored data is present by looking up the value for the key cluster.name again:

ETCDCTL_API=3 etcdctl get cluster.name \
  --endpoints=https://10.0.1.101:2379 \
  --cacert=/home/cloud_user/etcd-certs/etcd-ca.pem \
  --cert=/home/cloud_user/etcd-certs/etcd-server.crt \
  --key=/home/cloud_user/etcd-certs/etcd-server.key

Cheers

Osama

Upgrade k8s using kubeadm

First, upgrade the control plane node.

Drain the control plane node.

kubectl drain master-node-name --ignore-daemonsets

Upgrade kubeadm.

sudo apt-get update && \
sudo apt-get install -y --allow-change-held-packages kubeadm=version

kubeadm version
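
As a concrete, purely illustrative example, upgrading to v1.22.2 on an apt-based system would look something like the following; the exact package version string depends on your distribution and package repository:

sudo apt-get update && \
sudo apt-get install -y --allow-change-held-packages kubeadm=1.22.2-00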

Plan the upgrade.

sudo kubeadm upgrade plan v<version> (for example, v1.22.2)

Upgrade the control plane components.

sudo kubeadm upgrade apply v1.22.2

Upgrade kubelet and kubectl on the control plane node.

sudo apt-get update && \
sudo apt-get install -y --allow-change-held-packages kubelet=version kubectl=version 

Restart kubelet.

sudo systemctl daemon-reload
sudo systemctl restart kubelet

Uncordon the control plane node.

kubectl uncordon master-node-name

Verify that the control plane is working.
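
A quick check is to list the nodes and confirm the control plane node is Ready and reports the new version:

kubectl get nodes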

Note: you should not upgrade all worker nodes at the same time. Make sure enough nodes are available at any given time to provide uninterrupted service.

Worker nodes

Run the following on the control plane node to drain worker node 1:

kubectl drain worker1-node-name --ignore-daemonsets --force

Log in to the first worker node, then upgrade kubeadm.

sudo apt-get update && \
sudo apt-get install -y --allow-change-held-packages kubeadm=version

Upgrade the kubelet configuration on the worker node.

sudo kubeadm upgrade node

Upgrade kubelet and kubectl on the worker node.

sudo apt-get update && \
sudo apt-get install -y --allow-change-held-packages kubelet=version kubectl=version

Restart kubelet.

sudo systemctl daemon-reload 
sudo systemctl restart kubelet

From the control plane node, uncordon worker node 1.

kubectl uncordon worker1-node-name

Repeat the upgrade process for the remaining worker nodes.

Cheers

Osama

k8s management tools

There is a variety of management tools that allow you to manage k8s, make your life much easier, and provide additional features.

  • kubectl

the official command-line interface for k8s; this is the main method you will use to interact with your cluster.

  • kubeadm

a tool that allows you to set up the control plane of a cluster.

  • MiniKube

tool that runs a single-node Kubernetes cluster locally on your workstation for development and testing purposes.

A very simple tool; you can find it here.
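
As a quick illustration, assuming minikube is already installed on your workstation, starting and checking a local cluster looks like this:

minikube start
kubectl get nodes
minikube stop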

  • Helm

tool for managing packages of pre-configured Kubernetes resources. These packages are known as Helm charts.

Use Helm to:

  • Find and use popular software packaged as Kubernetes charts
  • Share your own applications as Kubernetes charts
  • Create reproducible builds of your Kubernetes applications
  • Intelligently manage your Kubernetes manifest files
  • Manage releases of Helm packages
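
As a minimal sketch (the Bitnami repository and nginx chart are just common public examples, not something specific to this post), installing a chart looks like:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install my-web bitnami/nginx
helm list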

  • Kompose

a tool to help Docker Compose users move to Kubernetes.

Use Kompose to:

  • Translate a Docker Compose file into Kubernetes objects
  • Go from local Docker development to managing your application via Kubernetes
  • Convert v1 or v2 Docker Compose yaml files or Distributed Application Bundles
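
A typical, illustrative workflow, assuming a docker-compose.yml in the current directory and an output directory of your choice:

kompose convert -f docker-compose.yml -o k8s/
kubectl apply -f k8s/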

And the last one is Kustomize, a tool for customizing Kubernetes manifests without templates; it is built into kubectl.
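
Applying a kustomization (the ./overlays/dev directory here is only a placeholder) is as simple as:

kubectl apply -k ./overlays/dev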

Cheers

Osama

AUSOUG

The Australian Oracle User Group, AUSOUG, has a focus on bringing together our Oracle community and servicing their core technical, development, and applications needs. A balanced program is aimed at all levels of skill and experience within a forum of user-led, independent knowledge sharing.

Register here

I will be speaking about Kubernetes in depth, but in a simple way.

Regards

Osama