VPC endpoints

A VPC endpoint enables private connections between your VPC and supported AWS services without requiring an internet gateway, NAT device, VPN connection, or Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the AWS network.

Endpoints are virtual devices. They are horizontally scaled, redundant, and highly available VPC components. They permit communication between instances in your VPC and services without imposing availability risks or bandwidth constraints on your network traffic.

Types of VPC endpoints

GATEWAY ENDPOINT

Specify a gateway endpoint as a route target in your route table. A gateway endpoint handles traffic destined for Amazon S3 or Amazon DynamoDB, and that traffic remains inside the AWS network.

Instance A in the public subnet communicates with Amazon S3 through an internet gateway. Instance A has a route to local destinations in the VPC. Instance B communicates with an Amazon S3 bucket and an Amazon DynamoDB table using separate gateway endpoints. The diagram shows an example of a private route table: it directs Amazon S3 and DynamoDB requests through each gateway endpoint using routes, and it uses a prefix list to target the specific Region for each service.
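
If you prefer to script this, a gateway endpoint for Amazon S3 can be created with the AWS CLI. A minimal sketch; the VPC ID, route table ID, and Region are placeholders:

aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --service-name com.amazonaws.us-east-1.s3 \
  --route-table-ids rtb-0123456789abcdef0

The endpoint type defaults to Gateway, and a route to the S3 prefix list is added to each route table you pass in.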

INTERFACE ENDPOINT

With an interface VPC endpoint (interface endpoint), you can privately connect your VPC to services as if they were in your VPC. When the interface endpoint is created, traffic is directed to the new endpoint without changes to any route tables in your VPC.

For example, a Region is shown with Systems Manager outside of the example VPC. The example VPC has a public and private subnet with an Amazon Elastic Compute Cloud (Amazon EC2) instance in each. Systems Manager traffic sent to ssm.region.amazonaws.com is sent to an elastic network interface in the private subnet.
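
A rough sketch of creating that Systems Manager interface endpoint with the AWS CLI, with placeholder IDs:

aws ec2 create-vpc-endpoint \
  --vpc-endpoint-type Interface \
  --vpc-id vpc-0123456789abcdef0 \
  --service-name com.amazonaws.us-east-1.ssm \
  --subnet-ids subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0 \
  --private-dns-enabled

With private DNS enabled, requests to ssm.us-east-1.amazonaws.com from inside the VPC resolve to the endpoint's network interface instead of the public service endpoint.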

Gateway VPC endpoints and interface VPC endpoints help you access services over the AWS backbone.

A gateway VPC endpoint (gateway endpoint) is a gateway that you specify as a target for a route in your route table for traffic destined for a supported AWS service. The supported services are Amazon S3 and Amazon DynamoDB.

An interface VPC endpoint (interface endpoint) is an elastic network interface with a private IP address from the IP address range of your subnet. The network interface serves as an entry point for traffic destined to a supported service. Interface endpoints are powered by AWS PrivateLink, which keeps the traffic off the public internet.

Regards

Osama

Connect to AKS cluster nodes

Sometimes you need to access an AKS worker node to troubleshoot an issue, but how do you do that with AKS?

Run the following command:

kubectl get nodes

The output gives you an idea of the worker nodes you have.

Run a container image on the node with the kubectl debug command to establish a connection to it. The following command starts a privileged container on your node and connects an interactive shell to it.

kubectl debug node/<node-name-you-wish-to-connect> -it --image=mcr.microsoft.com/dotnet/runtime-deps:6.0
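
If the connection succeeds, you land in a shell inside that container. To work against the node itself, you can usually switch into the host root file system, which kubectl debug mounts at /host:

chroot /host

When you are finished, exit the shell and delete the debug pod that kubectl debug created (it shows up in kubectl get pods with a node-debugger- prefix).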

Regards

Osama

AWS Infrastructure

The AWS Global Cloud Infrastructure is the most secure, extensive, and reliable cloud platform, offering over 200 fully featured services from data centers globally.

AWS Data Center

AWS pioneered cloud computing in 2006 to provide rapid and secure infrastructure. AWS continuously innovates on the design and systems of data centers to protect them from man-made and natural risks. Today, AWS provides data centers at a large, global scale. AWS implements controls, builds automated systems, and conducts third-party audits to confirm security and compliance. As a result, the most highly-regulated organizations in the world trust AWS every day.

Availability Zone – AZ

An Availability Zone (AZ) is one or more discrete data centers with redundant power, networking, and connectivity in an AWS Region. Availability Zones are multiple, isolated areas within a particular geographic location. When you launch an instance, you can select an Availability Zone or let AWS choose one for you. If you distribute your instances across multiple Availability Zones and one instance fails, you can design your application so that an instance in another Availability Zone can handle requests.
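
For example, the AWS CLI lets you pin an instance to a specific Availability Zone at launch. A minimal sketch with placeholder values for the AMI and zone:

aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.micro \
  --placement AvailabilityZone=us-east-1a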

Region

Each AWS Region consists of multiple, isolated, and physically separate Availability Zones within a geographic area. This achieves the greatest possible fault tolerance and stability. In your account, you determine which Regions you need. You can run applications and workloads from a Region to reduce latency to end users. You can do this while avoiding the upfront expenses, long-term commitments, and scaling challenges associated with maintaining and operating a global infrastructure.

AWS Local Zone

AWS Local Zones can be used for highly demanding applications that require single-digit millisecond latency to end users. Media and entertainment content creation, real-time multiplayer gaming, and machine learning hosting and training are some use cases for AWS Local Zones.

CloudFront – Edge Location

An edge location is the nearest point to a requester of an AWS service. Edge locations are located in major cities around the world. They receive requests and cache copies of your content for faster delivery.

Regards

Osama

Create a Bastion – OCI

What is a Bastion?

It’s essential to consider the security implications of allowing direct access to cloud services and resources, particularly as the number of those resources grows. A common workaround is to set up a virtual machine inside the virtual cloud network and link it to the cloud services. This cuts down on publicly accessible services while still letting developers and system administrators connect. That virtual machine (VM) acts as a manual bastion, or jump box.

Create a Bastion

  • Connect to Oracle’s cloud service. To access the main menu, choose the hamburger icon in the upper left corner.
  • On the menu select “Identity & Security > Bastion”.
  • Select the compartment and click the “Create bastion” button.
  • Enter the bastion name and select the VCN and subnet for the bastion. We need to enter a CIDR block allowlist. In this case I’ve used the subnet for my IP address from my internet service provider. Click the “Create bastion” button.
  • Click on the “Create session” button.
  • Copy the SSH connection command shown for the session, then connect.

The connection command we copied should look something like this:

ssh -i <private-key-file> -N -L <local-port>:<ip-connection>:22 -p 22 ocid1.bastionsession.oc1.uk-london-1.amaa...3acq@host.bastion.uk-london-1.oci.oraclecloud.com
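
With the tunnel running, you connect to the target instance through the forwarded local port. A rough example, assuming an Oracle Linux target (default user opc) and whatever port you chose for <local-port>:

ssh -i <private-key-file> -p <local-port> opc@localhost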

Regards

Osama

Amazon Simple Storage Service (Amazon S3)

Amazon Simple Storage Service (Amazon S3) is a service that provides object-level storage. Amazon S3 stores data as objects in buckets.

You can upload any type of file to Amazon S3, such as images, videos, text files, and so on. For example, you might use Amazon S3 to store backup files, media files for a website, or archived documents. Amazon S3 offers unlimited storage space. The maximum file size for an object in Amazon S3 is 5 TB.
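
For example, you could create a bucket and upload a backup file with the AWS CLI. The bucket name here is a placeholder:

aws s3 mb s3://my-example-bucket
aws s3 cp backup.tar.gz s3://my-example-bucket/backups/backup.tar.gz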

Amazon S3 storage classes

With Amazon S3, you pay only for what you use. You can choose from a range of storage classes to select a fit for your business and cost needs. When selecting an Amazon S3 storage class, consider these two factors:

  • How often you plan to retrieve your data
  • How available you need your data to be

S3 Standard

  • Designed for frequently accessed data
  • Stores data in a minimum of three Availability Zones

S3 Standard provides high availability for objects. This makes it a good choice for a wide range of use cases, such as websites, content distribution, and data analytics. S3 Standard has a higher cost than other storage classes intended for infrequently accessed data and archival storage.

S3 Standard-Infrequent Access (S3 Standard-IA)

  • Ideal for infrequently accessed data
  • Similar to S3 Standard but has a lower storage price and higher retrieval price

S3 Standard-IA is ideal for data infrequently accessed but requires high availability when needed. Both S3 Standard and S3 Standard-IA store data in a minimum of three Availability Zones. S3 Standard-IA provides the same level of availability as S3 Standard but with a lower storage price and a higher retrieval price.

S3 One Zone-Infrequent Access (S3 One Zone-IA)

  • Stores data in a single Availability Zone
  • Has a lower storage price than S3 Standard-IA

Compared to S3 Standard and S3 Standard-IA, which store data in a minimum of three Availability Zones, S3 One Zone-IA stores data in a single Availability Zone. This makes it a good storage class to consider if the following conditions apply:

  • You want to save costs on storage.
  • You can easily reproduce your data in the event of an Availability Zone failure.

S3 Intelligent-Tiering

  • Ideal for data with unknown or changing access patterns
  • Requires a small monthly monitoring and automation fee per object

In the S3 Intelligent-Tiering storage class, Amazon S3 monitors objects’ access patterns. If you haven’t accessed an object for 30 consecutive days, Amazon S3 automatically moves it to the infrequent access tier, S3 Standard-IA. If you access an object in the infrequent access tier, Amazon S3 automatically moves it to the frequent access tier, S3 Standard.
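
You choose the storage class per object when you upload it. A quick example with the AWS CLI, again with a placeholder bucket name:

aws s3 cp report.csv s3://my-example-bucket/reports/report.csv --storage-class INTELLIGENT_TIERING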

S3 Glacier

  • Low-cost storage designed for data archiving
  • Able to retrieve objects within a few minutes to hours

S3 Glacier is a low-cost storage class that is ideal for data archiving. For example, you might use this storage class to store archived customer records or older photos and video files.

S3 Glacier Deep Archive

  • Lowest-cost object storage class ideal for archiving
  • Able to retrieve objects within 12 hours

When deciding between Amazon S3 Glacier and Amazon S3 Glacier Deep Archive, consider how quickly you need to retrieve archived objects. You can retrieve objects stored in the S3 Glacier storage class within a few minutes to a few hours. By comparison, you can retrieve objects stored in the S3 Glacier Deep Archive storage class within 12 hours.
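
Objects in the archive classes have to be restored before they can be read. A minimal sketch with the AWS CLI, assuming a bucket and key of your own:

aws s3api restore-object \
  --bucket my-example-bucket \
  --key archive/2020-records.zip \
  --restore-request '{"Days":7,"GlacierJobParameters":{"Tier":"Standard"}}'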

Cheers

Osama

YouTube Links to Learn for Free

1. Linux:
Basic Linux commands are necessary before jumping into shell scripting.

https://lnkd.in/dBTsJbhz
https://lnkd.in/dHQTiHBB
https://lnkd.in/dA9pAmHa

2. Shell Scripting:

https://lnkd.in/da_wHgQH
https://lnkd.in/d5CFPgga

3. Python: This will help you in automation

https://lnkd.in/dFtNz_9D
https://lnkd.in/d6cRpFrY
https://lnkd.in/d-EhshQz

4. Networking

https://lnkd.in/dqTx6jmN
https://lnkd.in/dRqCzbkn

5. Git & Github

https://lnkd.in/d9gw-9Ds
https://lnkd.in/dEp3KrTJ

6. YAML
https://lnkd.in/duvmhd5X
https://lnkd.in/dNqrXjmV

7. Containers — Docker:

https://lnkd.in/dY2ZswMZ
https://lnkd.in/d_EySpbh
https://lnkd.in/dPddbJTf

8. Continuous Integration & Continuous Deployment (CI/CD):

https://lnkd.in/dMHv9T8U

9. Container Orchestration — Kubernetes:
https://lnkd.in/duGZwHYX

10. Monitoring:

https://lnkd.in/dpXhmVqs
https://lnkd.in/dStQbpRX
https://lnkd.in/de4H5QVz
https://lnkd.in/dEtTSsbB

11. Infrastructure Provisioning & Configuration Management (IaC): Terraform, Ansible, Pulumi

https://lnkd.in/dvpzNT5M
https://lnkd.in/dNugwtVW
https://lnkd.in/dn5m2NKQ
https://lnkd.in/dhknHJXp
https://lnkd.in/ddNxd8vU

12. CI/CD Tools: Jenkins, GitHub Actions, GitLab CI, Travis CI, AWS CodePipeline + AWS CodeBuild, Azure DevOps, etc

https://lnkd.in/dTmSXNzv
https://lnkd.in/dAnxpVTe
https://lnkd.in/daMFG3Hq
https://lnkd.in/dqf-zzrx
https://lnkd.in/diWP7Tm7
https://lnkd.in/dYDCSiiC

13. AWS:

https://lnkd.in/dmi-TMv9
https://lnkd.in/de3-dAB6
https://lnkd.in/dh2zXZAB
https://lnkd.in/dQMyCBWy

14. Learn how to SSH
SSH using MobaXterm:

https://lnkd.in/gx-T_FU8

15. SSH using PuTTY:

https://lnkd.in/gGgW7Ns9

Creating a Kubernetes Cluster Environment, But This Time on OCI

Let’s talk about DevOps, but this time on OCI, and one part of it in particular: Kubernetes.

There are different ways to do that, either through the CLI or the Console.

Using CLI

To create a Kubernetes cluster environment, run the create-oke-cluster-environment command:

oci devops deploy-environment create-oke-cluster-environment
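
On its own the command needs arguments that identify the DevOps project and the target cluster. A hedged sketch, assuming the --project-id, --cluster-id, and --display-name parameters (check the command's --help output for the exact names) and placeholder OCIDs:

oci devops deploy-environment create-oke-cluster-environment \
  --project-id ocid1.devopsproject.oc1..<unique-id> \
  --cluster-id ocid1.cluster.oc1..<unique-id> \
  --display-name "oke-production"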

Console

  1. Open the navigation menu and click Developer Services. Under DevOps, click Projects.
  2. Create a project for the Kubernetes environment.
  3. For Environment type, select Oracle Kubernetes Engine.
  4. Enter a name and optional description for the environment.
  5. (Optional) To add tags to the environment, click Show tagging options. Tagging is a metadata system that lets you organize and track the resources in your tenancy. If you have permissions to create a resource, you also have permissions to add free-form tags to that resource. To add a defined tag, you must have permissions to use the tag namespace.
  6. Click Next.
  7. Select the region where the cluster is located.
  8. Select the compartment in which the cluster is located.
  9. Select an OKE cluster. You can select either a public or a private cluster.
  10. Click Create environment.

Cheers

Osama

K8s Example

Create a Service Account

It’s a super simple command:

kubectl create sa webautomation -n web
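
You can confirm the service account exists with:

kubectl get sa webautomation -n web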

Create a ClusterRole That Provides Read Access to Pods

  1. Define the ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]

Bind the ClusterRole to the Service Account to Only Read Pods in the web Namespace

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rb-pod-reader
  namespace: web
subjects:
- kind: ServiceAccount
  name: webautomation
  namespace: web
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
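
Save the two manifests and apply them, then check that the binding behaves as expected by impersonating the service account. The file names below are just examples:

kubectl apply -f pod-reader-clusterrole.yml
kubectl apply -f rb-pod-reader.yml
kubectl auth can-i list pods -n web --as=system:serviceaccount:web:webautomation

The last command should print yes, while the same check against any other namespace should print no.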

Cheers

Osama

OCI Basics – Putting Data into Object Storage OCI

The Object Storage service provides reliable, secure, and scalable object storage. Object storage is a storage architecture that stores and manages data as objects. Some typical use cases include data backup, file sharing, and storing unstructured data like logs and sensor-generated data.

Creating a Bucket

  1. Open the navigation menu and click Storage. Under Object Storage, click Buckets. A list of the buckets in the compartment you’re viewing is displayed.
  2. Select a compartment from the Compartment list on the left side of the page. A list of existing buckets is displayed.
  3. Click Create Bucket.
    • Bucket Name: Enter a name for the bucket.
    • Default Storage Tier: Select the default tier in which you want to store your data.
      • Standard is the primary, default storage tier. Use the Standard tier for storing frequently accessed data that requires fast and immediate access.
      • Archive is the default storage tier used for archive storage. Use the Archive tier for storing rarely accessed data that requires long retention periods. Access to data in the Archive tier is not immediate. Archived data must be restored before the data is accessible.
    • Object Events: Select Emit Object Events if you want to enable the bucket to emit events for object state changes.
    • Encryption: Buckets are encrypted with keys managed by Oracle by default, but you can optionally encrypt the data in this bucket using your own Vault encryption key. To use Vault for your encryption needs, select Encrypt Using Customer-Managed Keys.
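
The same bucket can be created from the OCI CLI. A minimal sketch, with placeholder values for the compartment OCID and bucket name:

oci os bucket create \
  --compartment-id ocid1.compartment.oc1..<unique-id> \
  --name my-example-bucket \
  --storage-tier Standard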

Uploading Files to a Bucket

To upload files to your bucket using the Console:

  1. From the Object Storage Buckets screen, click the bucket name to view its details.
  2. Click Upload.
  3. In the Object Name Prefix field, optionally specify a file name prefix for the files that you plan to upload.
  4. If the Storage Tier field displays Standard, you can optionally change the storage tier that the objects are uploaded to.
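
Uploads can also be scripted. A quick sketch with the OCI CLI, using placeholder names:

oci os object put \
  --bucket-name my-example-bucket \
  --file ./logs/app.log \
  --name logs/app.log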

Cheers

Osama