OCI Basics – Putting Data into OCI Object Storage

The Object Storage service provides reliable, secure, and scalable object storage. Object storage is a storage architecture that stores and manages data as objects. Some typical use cases include data backup, file sharing, and storing unstructured data like logs and sensor-generated data.

Creating a Bucket

  1. Open the navigation menu and click Storage. Under Object Storage, click Buckets.
  2. Select a compartment from the Compartment list on the left side of the page. A list of the existing buckets in that compartment is displayed.
  3. Click Create Bucket.
    • Bucket Name: Enter a name for the bucket.
    • Default Storage Tier: Select the default tier in which you want to store your data.
      • Standard is the primary, default storage tier. Use the Standard tier for storing frequently accessed data that requires fast and immediate access.
      • Archive is the storage tier used for archive storage. Use the Archive tier for storing rarely accessed data that requires long retention periods. Access to data in the Archive tier is not immediate; archived data must be restored before it is accessible.
    • Object Events: Select Emit Object Events if you want the bucket to emit events for object state changes.
    • Encryption: Buckets are encrypted with keys managed by Oracle by default, but you can optionally encrypt the data in this bucket using your own Vault encryption key. To use Vault for your encryption needs, select Encrypt Using Customer-Managed Keys.
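
If you prefer scripting over the Console, the same bucket can be created with the OCI CLI. This is a minimal sketch, assuming the CLI is installed and configured; the compartment OCID and bucket name below are placeholders:

oci os bucket create \
  --compartment-id ocid1.compartment.oc1..exampleuniqueID \
  --name my-example-bucket \
  --storage-tier Standard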

Uploading Files to a Bucket

To upload files to your bucket using the Console:

  1. From the Object Storage Buckets screen, click the bucket name to view its details.
  2. Click Upload.
  3. In the Object Name Prefix field, optionally specify a file name prefix for the files that you plan to upload.
  4. If the Storage Tier field displays Standard, you can optionally change the storage tier in which the objects are uploaded.
  5. Select the files to upload (or drag and drop them into the upload area), and then click Upload. A CLI equivalent is sketched below.
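
The same upload can be done from the OCI CLI; a minimal sketch with placeholder bucket, file, and object names:

oci os object put \
  --bucket-name my-example-bucket \
  --file ./sensor-data.log \
  --name 2022-01-01-sensor-data.log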

Cheers

Osama

Launching Windows Instance on OCI

In this post, I will show you how to launch and connect to a Windows instance.

  • Create a cloud network and subnet that enables internet access
  • Launch an instance
  • Connect to the instance
  • Add and attach a block volume

I already published a post on how to launch a Linux instance on OCI here; from that post, you only need to follow the first two steps:

  • Choose a compartment for your resources.
  • Create a cloud network.

Once you are done, you can continue with step #3, which is launching the instance – a Windows one this time.

  1. Open the navigation menu and click Compute. Under Compute, click Instances.
  2. Click Create instance.
  3. In the Placement section, accept the default Availability domain.
  4. In the Image and shape section, do the following:
    • In the Image source list, select Platform images.
    • Select Windows. Then, in the OS version list, select Server 2019 Standard.
    • Review and accept the terms of use, and then click Select image.
  5. In the Shape section, click Change Shape. Then, do the following:
    • For Instance type, accept the default, Virtual machine.
    • For Shape series, select AMD, and then choose either the VM.Standard.E4.Flex shape or the VM.Standard.E3.Flex shape (it doesn’t matter which). Accept the default values for OCPUs and memory.
    • The shape defines the number of CPUs and amount of memory allocated to the instance.
  6. In the Networking section, configure the network details for the instance. Do not accept the defaults.
    • For Primary network, leave Select existing virtual cloud network selected.
    • Select the cloud network that you created. If necessary, click Change Compartment to switch to the compartment containing the cloud network that you created.
  7. In the Boot volume section, leave all the options cleared.

Click Create. Once provisioning completes, your instance is ready.
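
For reference, the same launch can be scripted with the OCI CLI. This is only a sketch; every OCID below is a placeholder, and the image OCID must point to a Windows Server 2019 platform image in your region:

oci compute instance launch \
  --availability-domain "Uocm:PHX-AD-1" \
  --compartment-id ocid1.compartment.oc1..exampleuniqueID \
  --display-name windows-demo \
  --shape VM.Standard.E4.Flex \
  --shape-config '{"ocpus": 1, "memoryInGBs": 16}' \
  --image-id ocid1.image.oc1..exampleuniqueID \
  --subnet-id ocid1.subnet.oc1..exampleuniqueID \
  --assign-public-ip true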

To connect to the Windows instance, use Remote Desktop: enter the instance's public IP address, the username (opc), and the initial password shown on the instance details page (you will be prompted to change it on first login).

Cheers

Osama

Exposing Kubernetes Pods Using Services

Kubernetes Services are a great way to combine Kubernetes networking with the dynamic and often automated nature of Kubernetes applications. In this lab, you will use Services to expose existing Kubernetes Pods. This will allow you to practice your skills with Kubernetes Services.

Expose the Pods as an Internal Service

apiVersion: v1 
kind: Service 
metadata: 
  name: user-db-svc 
spec: 
  type: ClusterIP 
  selector: 
    app: user-db 
  ports: 
  - protocol: TCP 
    port: 80 
    targetPort: 80
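
Assuming the manifest above is saved as user-db-svc.yml (a hypothetical file name), it can be applied and checked like this:

kubectl apply -f user-db-svc.yml
kubectl get svc user-db-svc        # shows the ClusterIP assigned to the Service
kubectl get endpoints user-db-svc  # lists the user-db Pod IPs backing the Service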

Expose the Pods as an External Service

apiVersion: v1 
kind: Service 
metadata: 
  name: web-frontend-svc 
spec: 
  type: NodePort 
  selector: 
    app: web-frontend 
  ports: 
  - protocol: TCP 
    port: 80 
    targetPort: 80 
    nodePort: 30080
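
Once applied, the web-frontend Pods are reachable on port 30080 of any cluster node. A quick check, using a placeholder node IP:

kubectl apply -f web-frontend-svc.yml
curl http://10.0.0.11:30080        # replace 10.0.0.11 with the IP of any node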

Create a Manifest for a Static Pod

Static pods are a great way to run a pod on a single node without the involvement of the Kubernetes control plane. In this lab, you will have a chance to exercise your knowledge of static pods by creating them in an existing cluster.

sudo vi /etc/kubernetes/manifests/example.yml

Any manifest file placed under this path is automatically picked up and managed by the kubelet on that node.

Add the following manifest:

apiVersion: v1
kind: Pod
metadata:
  name: beebox-diagnostic
spec:
  containers:
  - name: nginx
    image: nginx:1.14
    ports:
    - containerPort: 80

Restart kubelet to start the static pod:

sudo systemctl restart kubelet
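
Because the kubelet creates a mirror Pod for every static pod, it shows up in kubectl with the node name appended (the node name below is a placeholder):

kubectl get pods
# NAME                             READY   STATUS    RESTARTS   AGE
# beebox-diagnostic-<node-name>    1/1     Running   0          30s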

Now, if you try to delete the pod with kubectl, it will immediately be recreated, because it is managed by the kubelet rather than the control plane. To actually remove the static pod, delete its manifest file from /etc/kubernetes/manifests.

Cheers
Osama

K8s Types of probe 

A probe is a diagnostic performed periodically by the kubelet on a container. To perform a diagnostic, the kubelet either executes code within the container, or makes a network request.

Probe outcome

Each probe has one of three results:

  • Success: The container passed the diagnostic.
  • Failure: The container failed the diagnostic.
  • Unknown: The diagnostic itself failed (no action should be taken, and the kubelet will make further checks).

Types of probe 

The kubelet can optionally perform and react to three kinds of probes on running containers:

  • livenessProbe

Indicates whether the container is running. If the liveness probe fails, the kubelet kills the container, and the container is subjected to its restart policy. If a container does not provide a liveness probe, the default state is Success.

  • readinessProbe

Indicates whether the container is ready to respond to requests. If the readiness probe fails, the endpoints controller removes the Pod’s IP address from the endpoints of all Services that match the Pod. The default state of readiness before the initial delay is Failure. If a container does not provide a readiness probe, the default state is Success.

  • startupProbe

Indicates whether the application within the container is started. All other probes are disabled if a startup probe is provided, until it succeeds. If the startup probe fails, the kubelet kills the container, and the container is subjected to its restart policy. If a container does not provide a startup probe, the default state is Success.
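
A minimal sketch of a Pod that defines all three probe types on a single nginx container (the image, paths, and timings below are illustrative, not taken from any particular workload):

apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: nginx
    image: nginx:1.14
    ports:
    - containerPort: 80
    startupProbe:              # runs first; liveness/readiness wait until it succeeds
      httpGet:
        path: /
        port: 80
      failureThreshold: 30     # up to 30 x 10s = 300s for the app to start
      periodSeconds: 10
    livenessProbe:             # the container is restarted if this fails
      httpGet:
        path: /
        port: 80
      periodSeconds: 10
    readinessProbe:            # the Pod is removed from Service endpoints if this fails
      httpGet:
        path: /
        port: 80
      periodSeconds: 5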

When should you use a liveness probe

If the process in your container is able to crash on its own whenever it encounters an issue or becomes unhealthy, you do not necessarily need a liveness probe; the kubelet will automatically perform the correct action in accordance with the Pod’s restartPolicy.

If you’d like your container to be killed and restarted if a probe fails, then specify a liveness probe, and specify a restartPolicy of Always or OnFailure.

When should you use a readiness probe

If you’d like to start sending traffic to a Pod only when a probe succeeds, specify a readiness probe. In this case, the readiness probe might be the same as the liveness probe, but the existence of the readiness probe in the spec means that the Pod will start without receiving any traffic and only start receiving traffic after the probe starts succeeding.

If you want your container to be able to take itself down for maintenance, you can specify a readiness probe that checks an endpoint specific to readiness that is different from the liveness probe.

If your app has a strict dependency on back-end services, you can implement both a liveness and a readiness probe. The liveness probe passes when the app itself is healthy, but the readiness probe additionally checks that each required back-end service is available. This helps you avoid directing traffic to Pods that can only respond with error messages.

If your container needs to work on loading large data, configuration files, or migrations during startup, you can use a startup probe. However, if you want to detect the difference between an app that has failed and an app that is still processing its startup data, you might prefer a readiness probe.

When should you use a startup probe

Startup probes are useful for Pods that have containers that take a long time to come into service. Rather than set a long liveness interval, you can configure a separate configuration for probing the container as it starts up, allowing a time longer than the liveness interval would allow.

If your container usually starts in more than initialDelaySeconds + failureThreshold × periodSeconds, you should specify a startup probe that checks the same endpoint as the liveness probe. The default for periodSeconds is 10s. You should then set its failureThreshold high enough to allow the container to start, without changing the default values of the liveness probe. This helps to protect against deadlocks.

Cheers
Osama

Upgrade k8s using kubeadm

First, upgrade the control plane node

Drain the control plane node.

kubectl drain master-node-name --ignore-daemonsets

Upgrade kubeadm.

sudo apt-get update && \
sudo apt-get install -y --allow-change-held-packages kubeadm=version

kubeadm version

Plan the upgrade (replace v1.22.2 with the version you are upgrading to).

sudo kubeadm upgrade plan v1.22.2

Upgrade the control plane components.

sudo kubeadm upgrade apply v1.22.2

Upgrade kubelet and kubectl on the control plane node.

sudo apt-get update && \
sudo apt-get install -y --allow-change-held-packages kubelet=version kubectl=version 

Restart kubelet.

sudo systemctl daemon-reload
sudo systemctl restart kubelet

Uncordon the control plane node.

kubectl uncordon master-node-name

Verify that the control plane is working
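
A quick way to verify is to check that the node reports Ready and shows the new version:

kubectl get nodes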

Note:- you should not perform upgrades on all worker nodes at the same time. Make sure enough nodes are available at any given time to provide uninterrupted service.

Worker nodes

Run the following on the control plane node to drain worker node 1:

kubectl drain worker1-node-name --ignore-daemonsets --force

Log in to the first worker node, then upgrade kubeadm.

sudo apt-get update && \
sudo apt-get install -y --allow-change-held-packages kubeadm=version

Upgrade the kubelet configuration on the worker node.

sudo kubeadm upgrade node

Upgrade kubelet and kubectl on the worker node.

sudo apt-get update && \
sudo apt-get install -y --allow-change-held-packages kubelet=version kubectl=version

Restart kubelet.

sudo systemctl daemon-reload 
sudo systemctl restart kubelet

From the control plane node, uncordon worker node 1.

kubectl uncordon worker1-node-name

Repeat the upgrade process for the remaining worker nodes.

Cheers

Osama

Another year – Top 100 Oracle Blogs and Website

The best Oracle blogs from thousands of blogs on the web ranked by traffic, social media followers, domain authority & freshness.

Happy to share that my blog has been chosen for another year as one of the top 100 Oracle blogs around the world; the list contains talented, experienced, and professional people 🎉🎉🎉

Thank you all for the support.

Cheers

Osama

Deployment strategies

All-at-once

All-at-once deployments instantly shift traffic from the original (old) Lambda function to the updated (new) Lambda function, all at one time. All-at-once deployments can be beneficial when the speed of your deployments matters. In this strategy, the new version of your code is released quickly, and all your users get to access it immediately.

Canary

In a canary deployment, you deploy your new version of your application code and shift a small percentage of production traffic to point to that new version. After you have validated that this version is safe and not causing errors, you direct all traffic to the new version of your code.

Linear

A linear deployment is similar to canary deployment. In this strategy, you direct a small amount of traffic to your new version of code at first. After a specified period of time, you automatically increment the amount of traffic that you send to the new version until you’re sending 100% of production traffic.

Comparing deployment strategies

To help you decide which deployment strategy to use for your application, you’ll need to consider each option’s consumer impact, rollback, event model factors, and deployment speed. The comparison below illustrates these points.

All-at-once
  • Consumer impact: All at once
  • Rollback: Redeploy the older version
  • Event model factors: Any event model at a low concurrency rate
  • Deployment speed: Immediate

Canary/Linear
  • Consumer impact: 1-10% typical initial traffic shift, then phased
  • Rollback: Revert 100% of traffic to the previous deployment
  • Event model factors: Better for high-concurrency workloads
  • Deployment speed: Minutes to hours

Deployment preferences with AWS SAM

Traffic shifting with aliases is directly integrated into AWS SAM. If you’d like to use all-at-once, canary, or linear deployments with your Lambda functions, you can embed that directly into your AWS SAM templates. You can do this in the deployment preferences section of the template. AWS CodeDeploy uses the deployment preferences section to manage the function rollout as part of the AWS CloudFormation stack update. SAM has several pre-built deployment preferences you can use to deploy your code. See the list below for examples.

  • Canary10Percent30Minutes: Shifts 10 percent of traffic in the first increment. The remaining 90 percent is deployed 30 minutes later.
  • Canary10Percent5Minutes: Shifts 10 percent of traffic in the first increment. The remaining 90 percent is deployed 5 minutes later.
  • Canary10Percent10Minutes: Shifts 10 percent of traffic in the first increment. The remaining 90 percent is deployed 10 minutes later.
  • Canary10Percent15Minutes: Shifts 10 percent of traffic in the first increment. The remaining 90 percent is deployed 15 minutes later.
  • Linear10PercentEvery10Minutes: Shifts 10 percent of traffic every 10 minutes until all traffic is shifted.
  • Linear10PercentEvery1Minute: Shifts 10 percent of traffic every minute until all traffic is shifted.
  • Linear10PercentEvery2Minutes: Shifts 10 percent of traffic every 2 minutes until all traffic is shifted.
  • Linear10PercentEvery3Minutes: Shifts 10 percent of traffic every 3 minutes until all traffic is shifted.
  • AllAtOnce: Shifts all traffic to the updated Lambda functions at once.
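
A minimal AWS SAM sketch showing where the deployment preference goes (the function name, runtime, handler, and code path are illustrative; AutoPublishAlias is required so CodeDeploy has an alias to shift traffic on):

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.9
      CodeUri: src/
      AutoPublishAlias: live
      DeploymentPreference:
        Type: Canary10Percent30Minutes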

Creating a deployment pipeline

When you check a piece of code into source control, you don’t want to wait for a human to manually approve it or have each piece of code run through different quality checks. Using a CI/CD pipeline can help automate the steps required to release your software deployment and standardize on a core set of quality checks.

Reference

Redeploy and roll back a deployment with CodeDeploy

What is AWS Lambda?

Cheers

Osama

New Cloud Project

The idea of this project is the following:

You need to develop and deploy a Python app that writes a new file to S3 on every execution. These files need to be retained for only 24 hours.

The content of the files is not important, but add the date and time as a prefix to your file names.

The name of the buckets should be the following ones for QA and Staging respectively:

qa-FIRSTNAME-LASTNAME-platform-challenge

staging-FIRSTNAME-LASTNAME-platform-challenge

The app will run as a Docker container in a Kubernetes cluster every 5 minutes. There is a Namespace for QA and a different Namespace for Staging in the cluster. You don’t need to provide tests, but you need to be sure the app will work.
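
A minimal sketch of how the app could be scheduled in the QA namespace (the image name and namespace are assumptions; the Staging CronJob would differ only in its namespace and bucket name, and the 24-hour retention can be handled with an S3 lifecycle rule that expires objects after one day rather than in application code):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: s3-writer
  namespace: qa
spec:
  schedule: "*/5 * * * *"            # run every 5 minutes
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: s3-writer
            image: registry.example.com/s3-writer:latest   # hypothetical image
            env:
            - name: BUCKET_NAME
              value: qa-FIRSTNAME-LASTNAME-platform-challenge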

Github HERE

Cheers

Osama