Create a Bastion – OCI

What is a Bastion?

It’s essential to consider the security implications before allowing direct access to cloud services and resources, particularly as those resources grow in number. Some teams work around this by setting up a virtual machine inside the virtual cloud network and routing access to the cloud services through it. This cuts down on publicly accessible services while still letting developers and system administrators connect. Such a virtual machine (VM) acts as a manual bastion, or jump box.

Create a Bastion

  • Log in to the Oracle Cloud console. To access the main menu, click the hamburger icon in the upper-left corner.
  • From the menu, select “Identity & Security > Bastion”.
  • Select the compartment and click the “Create bastion” button.
  • Enter the bastion name and select the VCN and subnet for the bastion. You also need to enter a CIDR block allowlist; in this case I’ve used the subnet of my IP address from my internet service provider. Click the “Create bastion” button.
  • Click the “Create session” button and create a session for the target resource.
  • Copy the SSH command from the session and connect.

The connection command we copied from the session should look something like this:

ssh -i  -N -L :ip-connection:22 -p 22 ocid1.bastionsession.oc1.uk-london-1.amaa...3acq@host.bastion.uk-london-1.oci.oraclecloud.com
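For reference, the general shape of that command is an SSH local port forward through the bastion. The placeholder names below are mine, not values the console provides, so substitute your own key path, local port, and target IP:

# local port forward through the OCI bastion (placeholders are illustrative)
ssh -i <path-to-private-key> -N -L <local-port>:<target-private-ip>:22 -p 22 \
    <bastion-session-ocid>@host.bastion.uk-london-1.oci.oraclecloud.com

# then connect to the target instance through the tunnel
# (opc is the default user on Oracle Linux images; adjust for your image)
ssh -i <path-to-private-key> -p <local-port> opc@localhost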

Regards

Osama

Connect to AWS Directory Services using Apache directory studio

Apache Directory Studio is a complete directory tooling platform intended to be used with any LDAP server, though it is particularly designed for use with ApacheDS. It is an Eclipse RCP application, composed of several Eclipse (OSGi) plugins, that can easily be extended with additional ones.

Step 1: Create a New Connection in Apache Directory Studio

  1. Start up Apache Directory Studio.
  2. Click the LDAP icon to create a new connection.

Step 2: Enter your Connection Information

  1. Enter a name for your connection.
  2. Enter the ‘Network Parameter‘ information as follows:
Hostname: The domain name of your LDAP server. If the LDAP server is not on the same network as the machine running Apache Directory Studio, you may need to use the FQDN or IP address of the LDAP server.
Port: For standard LDAP connectivity, use 389. For SSL connectivity, use 636.
  3. Click the ‘Check Network Parameter’ button to ensure your connection is successful.

  4. Click ‘Next’.

Step 3: Enter your Authentication Information

  1. Choose the ‘Authentication Method‘ from the dropdown list.
  2. Enter the ‘Authentication Parameter‘ information as follows:
Bind DN or user: Enter the full DN of the account that will be used to connect to the LDAP directory. This account should have the ability to browse the entire LDAP directory tree.
Bind password: Enter the password for the Bind DN account.

3. Click the ‘Check Authentication‘ button to ensure this account can authenticate.

4. If this authentication is successful, click ‘Finish‘.

Once authentication succeeds, you can connect to the directory service and start browsing the Base DNs for your users.
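If you want to double-check the same bind outside the GUI, a quick ldapsearch sketch like the one below also works. The host name, Bind DN, password, and base DN here are made-up examples, so substitute your own values:

# simple bind and list of user common names (all names are illustrative)
ldapsearch -x -H ldap://ldap.corp.example.com:389 \
  -D "CN=Admin,OU=Users,DC=corp,DC=example,DC=com" \
  -w 'BindPassword' \
  -b "DC=corp,DC=example,DC=com" \
  "(objectClass=user)" cn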

Cheers
Osama

YouTube Links to Learn for Free

1. Linux:
Basic Linux commands are necessary before jumping into shell scripting.

https://lnkd.in/dBTsJbhz
https://lnkd.in/dHQTiHBB
https://lnkd.in/dA9pAmHa

2. Shell Scripting:

https://lnkd.in/da_wHgQH
https://lnkd.in/d5CFPgga

3. Python: This will help you with automation

https://lnkd.in/dFtNz_9D
https://lnkd.in/d6cRpFrY
https://lnkd.in/d-EhshQz

4. Networking

https://lnkd.in/dqTx6jmN
https://lnkd.in/dRqCzbkn

5. Git & GitHub

https://lnkd.in/d9gw-9Ds
https://lnkd.in/dEp3KrTJ

6. YAML
https://lnkd.in/duvmhd5X
https://lnkd.in/dNqrXjmV

7. Containers — Docker:

https://lnkd.in/dY2ZswMZ
https://lnkd.in/d_EySpbh
https://lnkd.in/dPddbJTf

8. Continuous Integration & Continuous Deployment (CI/CD):

https://lnkd.in/dMHv9T8U

9. Container Orchestration — Kubernetes:
https://lnkd.in/duGZwHYX

10. Monitoring:

https://lnkd.in/dpXhmVqs
https://lnkd.in/dStQbpRX
https://lnkd.in/de4H5QVz
https://lnkd.in/dEtTSsbB

11. Infrastructure Provisioning & Configuration Management (IaC): Terraform, Ansible, Pulumi

https://lnkd.in/dvpzNT5M
https://lnkd.in/dNugwtVW
https://lnkd.in/dn5m2NKQ
https://lnkd.in/dhknHJXp
https://lnkd.in/ddNxd8vU

12. CI/CD Tools: Jenkins, GitHub Actions, GitLab CI, Travis CI, AWS CodePipeline + AWS CodeBuild, Azure DevOps, etc

https://lnkd.in/dTmSXNzv
https://lnkd.in/dAnxpVTe
https://lnkd.in/daMFG3Hq
https://lnkd.in/dqf-zzrx
https://lnkd.in/diWP7Tm7
https://lnkd.in/dYDCSiiC

13. AWS:

https://lnkd.in/dmi-TMv9
https://lnkd.in/de3-dAB6
https://lnkd.in/dh2zXZAB
https://lnkd.in/dQMyCBWy

14. Learn how to SSH
SSH using MobaXterm:

https://lnkd.in/gx-T_FU8

15. SSH using PuTTY:

https://lnkd.in/gGgW7Ns9

K8s Example

Create a Service Account

It’s a super simple command:

kubectl create sa webautomation -n web

Create a ClusterRole That Provides Read Access to Pods

  1. Define the ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]

Bind the ClusterRole to the Service Account to Only Read Pods in the web Namespace

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rb-pod-reader
  namespace: web
subjects:
- kind: ServiceAccount
  name: webautomation
  namespace: web
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
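After applying both manifests with kubectl create -f (or kubectl apply -f), you can confirm the binding behaves as intended by impersonating the service account; a quick check, assuming the names and namespace above:

# should return "yes" for the granted verbs in the web namespace
kubectl auth can-i list pods -n web --as=system:serviceaccount:web:webautomation

# should return "no" for verbs the ClusterRole does not grant
kubectl auth can-i delete pods -n web --as=system:serviceaccount:web:webautomation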

Cheers

Osama

OCI Basics – Putting Data into Object Storage OCI

The Object Storage service provides reliable, secure, and scalable object storage. Object storage is a storage architecture that stores and manages data as objects. Some typical use cases include data backup, file sharing, and storing unstructured data like logs and sensor-generated data.

Creating a Bucket

  1. Open the navigation menu and click Storage. Under Object Storage, click Buckets. A list of the buckets in the compartment you’re viewing is displayed.
  2. Select a compartment from the Compartment list on the left side of the page. A list of existing buckets is displayed.
  3. Click Create Bucket.
    • Bucket Name: Enter a name for the bucket.
    • Default Storage Tier: Select the default tier in which you want to store your data.
      • Standard is the primary, default storage tier. Use the Standard tier for storing frequently accessed data that requires fast and immediate access.
      • Archive is the default storage tier used for archive storage. Use the Archive tier for storing rarely accessed data that requires long retention periods. Access to data in the Archive tier is not immediate; archived data must be restored before it is accessible.
    • Object Events: Select Emit Object Events if you want the bucket to emit events for object state changes.
    • Encryption: Buckets are encrypted with keys managed by Oracle by default, but you can optionally encrypt the data in this bucket using your own Vault encryption key. To use Vault for your encryption needs, select Encrypt Using Customer-Managed Keys.
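The same bucket can also be created from the OCI CLI. A minimal sketch, assuming you substitute your own compartment OCID and bucket name:

# create a Standard-tier bucket (compartment OCID and bucket name are placeholders)
oci os bucket create \
    --compartment-id ocid1.compartment.oc1..aaaa... \
    --name my-demo-bucket \
    --storage-tier Standard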

Uploading Files to a Bucket

To upload files to your bucket using the Console:

  1. From the Object Storage Buckets screen, click the bucket name to view its details.
  2. Click Upload.
  3. In the Object Name Prefix field, optionally specify a file name prefix for the files that you plan to upload.
  4. If the Storage Tier field displays Standard, you can optionally change the storage tier that the objects are uploaded to.
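Uploading from the OCI CLI follows the same idea; again, the bucket, file, and object names below are placeholders:

# upload a local file as an object (names are placeholders)
oci os object put \
    --bucket-name my-demo-bucket \
    --file ./backup.tar.gz \
    --name backups/backup.tar.gz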

Cheers

Osama

Using PersistentVolumes in Kubernetes

PersistentVolumes provide a way to treat storage as a dynamic resource in Kubernetes. This lab will allow you to demonstrate your knowledge of PersistentVolumes. You will mount some persistent storage to a container using a PersistentVolume and a PersistentVolumeClaim.

Create a custom Storage Class by using vi localdisk.yml.

apiVersion: storage.k8s.io/v1 
kind: StorageClass 
metadata: 
  name: localdisk 
provisioner: kubernetes.io/no-provisioner
allowVolumeExpansion: true

Finish creating the Storage Class by using kubectl create -f localdisk.yml.
Create the PersistentVolume by using vi host-pv.yml.

kind: PersistentVolume 
apiVersion: v1 
metadata: 
   name: host-pv 
spec: 
   storageClassName: localdisk
   persistentVolumeReclaimPolicy: Recycle 
   capacity: 
      storage: 1Gi 
   accessModes: 
      - ReadWriteOnce 
   hostPath: 
      path: /var/output

Finish creating the PersistentVolume by using kubectl create -f host-pv.yml.

Check the status of the PersistentVolume by using kubectl get pv.

Create a PersistentVolumeClaim

Start creating a PersistentVolumeClaim for the PersistentVolume to bind to by using vi host-pvc.yml.

apiVersion: v1 
kind: PersistentVolumeClaim 
metadata: 
   name: host-pvc 
spec: 
   storageClassName: localdisk 
   accessModes: 
      - ReadWriteOnce 
   resources: 
      requests: 
         storage: 100Mi

Finish creating the PersistentVolumeClaim by using kubectl create -f host-pvc.yml.

Check the status of the PersistentVolume and PersistentVolumeClaim to verify that they have been bound:

kubectl get pv
kubectl get pvc

Create a Pod That Uses a PersistentVolume for Storage

Create a Pod that uses the PersistentVolumeClaim by using vi pv-pod.yml.

apiVersion: v1 
kind: Pod 
metadata: 
   name: pv-pod 
spec: 
   containers: 
      - name: busybox 
        image: busybox 
        command: ['sh', '-c', 'while true; do echo Success! > /output/success.txt; sleep 5; done'] 

Mount the PersistentVolume to the /output location by adding the following, which should be level with the containers spec in terms of indentation:

volumes: 
 - name: pv-storage 
   persistentVolumeClaim: 
      claimName: host-pvc

In the containers spec, below the command, set the list of volume mounts by using:

volumeMounts: 
- name: pv-storage 
  mountPath: /output 
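Put together, the complete pv-pod.yml should look roughly like this:

apiVersion: v1
kind: Pod
metadata:
   name: pv-pod
spec:
   containers:
      - name: busybox
        image: busybox
        command: ['sh', '-c', 'while true; do echo Success! > /output/success.txt; sleep 5; done']
        volumeMounts:
           - name: pv-storage
             mountPath: /output
   volumes:
      - name: pv-storage
        persistentVolumeClaim:
           claimName: host-pvc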

Finish creating the Pod by using kubectl create -f pv-pod.yml.

Check that the Pod is up and running by using kubectl get pods.

If you wish, you can log in to the worker node and verify the output data by using cat /var/output/success.txt.

Managing Container Storage with Kubernetes Volumes

Kubernetes volumes offer a simple way to mount external storage to containers. This lab will test your knowledge of volumes as you provide storage to some containers according to a provided specification. This will allow you to practice what you know about using Kubernetes volumes.

Create a Pod That Outputs Data to the Host Using a Volume

  • Create a Pod that will interact with the host file system by using vi maintenance-pod.yml.
apiVersion: v1
kind: Pod
metadata:
    name: maintenance-pod
spec:
    containers:
    - name: busybox
      image: busybox
      command: ['sh', '-c', 'while true; do echo Success! >> /output/output.txt; sleep 5; done']
  • Under the basic YAML, begin creating volumes, which should be level with the containers spec:
volumes:
- name: output-vol
  hostPath:
      path: /var/data
  • In the containers spec of the basic YAML, add a line for volume mounts:
volumeMounts:
- name: output-vol
  mountPath: /output

The complete YAML will be:

apiVersion: v1
kind: Pod
metadata:
  name: maintenance-pod
spec:
  containers:
  - name: busybox
    image: busybox
    command: ['sh', '-c', 'while true; do echo Success! >> /output/output.txt; sleep 5; done']
    volumeMounts:
    - name: output-vol
      mountPath: /output
  volumes:
  - name: output-vol
    hostPath:
      path: /var/data

Create a Multi-Container Pod That Shares Data Between Containers Using a Volume

  1. Create another YAML file for a shared-data multi-container Pod by using vi shared-data-pod.yml
  2. Start with the basic Pod definition and add multiple containers, where the first container will write the output.txt file and the second container will read the output.txt file:
apiVersion: v1
kind: Pod
metadata:
    name: shared-data-pod
spec:
    containers:
    - name: busybox1
      image: busybox
      command: ['sh', '-c', 'while true; do echo Success! >> /output/output.txt; sleep 5; done']
    - name: busybox2
      image: busybox
      command: ['sh', '-c', 'while true; do cat /input/output.txt; sleep 5; done']

Set up the volumes, again at the same level as containers, with an emptyDir volume that exists only to share data between the two containers in a simple way:

volumes:
- name: shared-vol
  emptyDir: {}

Mount that volume between the two containers by adding the following lines under command for the busybox1 container:

volumeMounts:
- name: shared-vol
  mountPath: /output

For the busybox2 container, add the following lines under command to mount the same volume and complete the shared-file setup:

volumeMounts:
- name: shared-vol
  mountPath: /input

The complete file:

apiVersion: v1
kind: Pod
metadata:
  name: shared-data-pod
spec:
  containers:
  - name: busybox1
    image: busybox
    command: ['sh', '-c', 'while true; do echo Success! >> /output/output.txt; sleep 5; done']
    volumeMounts:
    - name: shared-vol
      mountPath: /output
  - name: busybox2
    image: busybox
    command: ['sh', '-c', 'while true; do cat /input/output.txt; sleep 5; done']
    volumeMounts:
    - name: shared-vol
      mountPath: /input
  volumes:
  - name: shared-vol
    emptyDir: {}

You can now create the multi-container Pod by using kubectl create -f shared-data-pod.yml.
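To verify that data is actually flowing between the containers, check the logs of the reader container; it should print the Success! lines written by busybox1:

kubectl logs shared-data-pod -c busybox2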

Cheers

Osama

Create a Manifest for a Static Pod

Static pods are a great way to run a pod on a single node without the involvement of the Kubernetes control plane. In this lab, you will have a chance to exercise your knowledge of static pods by creating them in an existing cluster.

sudo vi /etc/kubernetes/manifests/example.yml

Any manifest placed under this path is picked up and managed directly by the kubelet.

Add the following:

apiVersion: v1
kind: Pod
metadata:
  name: beebox-diagnostic
spec:
  containers:
  - name: nginx
    image: nginx:1.14
    ports:
    - containerPort: 80

Restart kubelet to start the static pod:

sudo systemctl restart kubelet
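After the restart, the static pod should appear in the API as a mirror pod; a quick check (the suffix will be your worker node's hostname):

kubectl get pods
# the mirror pod shows up as beebox-diagnostic-<node-name>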

Note that if you try to delete the mirror pod with kubectl, it will simply be recreated, because the pod is managed by the kubelet rather than by the API server.

Cheers
Osama

DaemonSets Example

Configure the cluster to create a pod on each worker node that periodically deletes the contents of the /etc/beebox/tmp directory on the worker node.

This is very simple; just apply the following:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: beebox-cleanup
spec:
  selector:
    matchLabels:
      app: beebox-cleanup
  template:
    metadata:
      labels:
        app: beebox-cleanup
    spec:
      containers:
      - name: busybox
        image: busybox:1.27
        command: ['sh', '-c', 'while true; do rm -rf /beebox-temp/*; sleep 60; done']
        volumeMounts:
        - name: beebox-tmp
          mountPath: /beebox-temp
      volumes:
      - name: beebox-tmp
        hostPath:
          path: /etc/beebox/tmp

Apply

kubectl apply -f daemonset.yml
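To check that the DaemonSet has placed a pod on every worker node, the following gives a quick view (the label selector matches the one defined above):

kubectl get daemonset beebox-cleanup
kubectl get pods -o wide -l app=beebox-cleanup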

Cheers

Osama

K8s Types of probe 

A probe is a diagnostic performed periodically by the kubelet on a container. To perform a diagnostic, the kubelet either executes code within the container, or makes a network request.

Probe outcome

Each probe has one of three results:

  • Success: The container passed the diagnostic.
  • Failure: The container failed the diagnostic.
  • Unknown: The diagnostic failed (no action should be taken, and the kubelet will make further checks).

Types of probe 

The kubelet can optionally perform and react to three kinds of probes on running containers:

  • livenessProbe

Indicates whether the container is running. If the liveness probe fails, the kubelet kills the container, and the container is subjected to its restart policy. If a container does not provide a liveness probe, the default state is Success.

  • readinessProbe

Indicates whether the container is ready to respond to requests. If the readiness probe fails, the endpoints controller removes the Pod’s IP address from the endpoints of all Services that match the Pod. The default state of readiness before the initial delay is Failure. If a container does not provide a readiness probe, the default state is Success.

  • startupProbe

Indicates whether the application within the container is started. All other probes are disabled if a startup probe is provided, until it succeeds. If the startup probe fails, the kubelet kills the container, and the container is subjected to its restart policy. If a container does not provide a startup probe, the default state is Success.
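As a hedged illustration of how these look in a Pod spec (the name, image, endpoint paths, ports, and timings below are my own assumptions, not taken from any particular application):

apiVersion: v1
kind: Pod
metadata:
  name: probes-demo            # illustrative name
spec:
  containers:
  - name: web
    image: nginx:1.14          # placeholder image
    ports:
    - containerPort: 80
    startupProbe:              # other probes are disabled until this succeeds
      httpGet:
        path: /
        port: 80
      failureThreshold: 30
      periodSeconds: 10
    livenessProbe:             # the container is restarted if this fails
      httpGet:
        path: /
        port: 80
      periodSeconds: 10
    readinessProbe:            # the Pod is removed from Service endpoints if this fails
      httpGet:
        path: /
        port: 80
      periodSeconds: 5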

When should you use a liveness probe

If the process in your container is able to crash on its own whenever it encounters an issue or becomes unhealthy, you do not necessarily need a liveness probe; the kubelet will automatically perform the correct action in accordance with the Pod’s restartPolicy.

If you’d like your container to be killed and restarted if a probe fails, then specify a liveness probe, and specify a restartPolicy of Always or OnFailure.

When should you use a readiness probe

If you’d like to start sending traffic to a Pod only when a probe succeeds, specify a readiness probe. In this case, the readiness probe might be the same as the liveness probe, but the existence of the readiness probe in the spec means that the Pod will start without receiving any traffic and only start receiving traffic after the probe starts succeeding.

If you want your container to be able to take itself down for maintenance, you can specify a readiness probe that checks an endpoint specific to readiness that is different from the liveness probe.

If your app has a strict dependency on back-end services, you can implement both a liveness and a readiness probe. The liveness probe passes when the app itself is healthy, but the readiness probe additionally checks that each required back-end service is available. This helps you avoid directing traffic to Pods that can only respond with error messages.

If your container needs to work on loading large data, configuration files, or migrations during startup, you can use a startup probe. However, if you want to detect the difference between an app that has failed and an app that is still processing its startup data, you might prefer a readiness probe.

When should you use a startup probe

Startup probes are useful for Pods that have containers that take a long time to come into service. Rather than set a long liveness interval, you can configure a separate configuration for probing the container as it starts up, allowing a time longer than the liveness interval would allow.

If your container usually starts in more than initialDelaySeconds + failureThreshold × periodSeconds, you should specify a startup probe that checks the same endpoint as the liveness probe. The default for periodSeconds is 10s. You should then set its failureThreshold high enough to allow the container to start, without changing the default values of the liveness probe. This helps to protect against deadlocks.
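For example (the endpoint and numbers are illustrative), a startup probe with failureThreshold: 30 and periodSeconds: 10 gives the container up to 30 × 10 = 300 seconds to start before the kubelet kills it:

startupProbe:
  httpGet:
    path: /healthz             # assumed endpoint, the same one the liveness probe checks
    port: 8080
  failureThreshold: 30
  periodSeconds: 10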

Cheers
Osama