AWS VPC Peering

A VPC peering connection is a networking connection between two VPCs that lets you route traffic between them privately.

Benefits of VPC peering

A VPC peering connection is highly available because it is neither a gateway nor a VPN connection and does not rely on a separate piece of physical hardware. There is no bandwidth bottleneck or single point of failure for communication, which makes a peering connection a dependable way to transfer data between VPCs.

You can establish peering relationships between VPCs across different AWS Regions. This is called inter-Region VPC peering. It permits VPC resources that run in different AWS Regions to communicate securely with each other. Examples of these resources include EC2 instances, Amazon Relational Database Service (Amazon RDS) databases, and AWS Lambda functions. This communication is accomplished using private IP addresses, without requiring gateways, VPN connections, or separate network appliances. All inter-Region traffic is encrypted with no single point of failure or bandwidth bottleneck. Traffic always stays on the global AWS backbone and never traverses the public internet, which reduces threats such as common exploits and distributed denial of service (DDoS) attacks. Inter-Region VPC peering provides an uncomplicated and cost-effective way to share resources between Regions or replicate data for geographic redundancy.

You can also create a VPC peering connection between VPCs in different AWS accounts.

Why you would set up a VPC peering connection

Full sharing of resources between all VPCs

Your organization has company services distributed across four VPCs and a single VPC dedicated to centralized IT services and logging. To facilitate data sharing, the IT department constructed a full mesh network design using VPC peering to connect each VPC to every other VPC in the organization.

Each VPC must have a one-to-one connection with each VPC it is approved to communicate with. This is because each VPC peering connection is nontransitive in nature and does not allow network traffic to pass from one peering connection to another.

For example, VPC 1 is peered with VPC 2, and VPC 2 is peered with VPC 4. You cannot route packets from VPC 1 to VPC 4 through VPC 2. To route packets directly between VPC 1 and VPC 4, you can create a separate VPC peering connection between them.
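
As a minimal sketch of that fix, the following boto3 snippet requests a peering connection between VPC 1 and VPC 4, accepts it, and adds a route on each side. All VPC, route table, and CIDR values are hypothetical placeholders.

import boto3

ec2 = boto3.client('ec2')

# Request a peering connection from VPC 1 to VPC 4 (IDs are placeholders)
response = ec2.create_vpc_peering_connection(
    VpcId='vpc-11111111',      # VPC 1
    PeerVpcId='vpc-44444444',  # VPC 4
)
pcx_id = response['VpcPeeringConnection']['VpcPeeringConnectionId']

# The owner of VPC 4 must accept the request before traffic can flow
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Point each VPC's route table at the other VPC through the peering connection
ec2.create_route(RouteTableId='rtb-11111111',         # VPC 1 route table
                 DestinationCidrBlock='10.4.0.0/16',  # VPC 4 CIDR
                 VpcPeeringConnectionId=pcx_id)
ec2.create_route(RouteTableId='rtb-44444444',         # VPC 4 route table
                 DestinationCidrBlock='10.1.0.0/16',  # VPC 1 CIDR
                 VpcPeeringConnectionId=pcx_id)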

Partial sharing of centralized resources

Your organization’s IT department maintains a central VPC for file sharing. Multiple VPCs require access to this resource but do not need to send traffic to each other. A peering connection is established to connect the VPCs solely to this resource.

Invalid peering configurations

Overlapping CIDR blocks

You cannot create a VPC peering connection between VPCs with matching or overlapping IPv4 Classless Inter-Domain Routing (CIDR) blocks. This applies even if the VPCs have nonoverlapping IPv6 CIDR blocks and you intend to use the peering connection for IPv6 communication only.
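
Whether two CIDR blocks overlap is easy to check before you attempt the peering. Here is a quick sketch using Python's ipaddress module (the example networks are made up):

import ipaddress

vpc_a = ipaddress.ip_network('10.0.0.0/16')
vpc_b = ipaddress.ip_network('10.0.128.0/17')  # sits inside 10.0.0.0/16
vpc_c = ipaddress.ip_network('172.16.0.0/16')  # disjoint range

# overlaps() is True when the networks share any addresses,
# which means a peering connection between them would be rejected
print(vpc_a.overlaps(vpc_b))  # True  -> peering not allowed
print(vpc_a.overlaps(vpc_c))  # False -> peering is possible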

Transitive peering

You have a VPC peering connection between VPC A and VPC B, and between VPC A and VPC C. There is no VPC peering connection between VPC B and VPC C. You cannot route packets directly from VPC B to VPC C through VPC A.

Edge-to-edge routing through a gateway or private connection

If either VPC in a peering relationship has one of the following connections, you cannot extend the peering relationship to that connection:

  • A VPN connection or a Direct Connect connection to a corporate network
  • An internet connection through an internet gateway
  • An internet connection in a private subnet through a NAT device
  • A gateway VPC endpoint to an AWS service, for example, an endpoint to Amazon S3

Cheers 🥂

Osama

Configuring Your Lambda Functions

When building and testing a function, you must specify three primary configuration settings: memory, timeout, and concurrency. These settings are important in defining how each function performs. Deciding how to configure memory, timeout, and concurrency comes down to testing your function in real-world scenarios and against peak volume. As you monitor your functions, you must adjust the settings to optimize costs and ensure the desired customer experience with your application.

Memory

You can allocate up to 10 GB of memory to a Lambda function. Lambda allocates CPU and other resources linearly in proportion to the amount of memory configured. Any increase in memory size triggers an equivalent increase in CPU available to your function. To find the right memory configuration for your functions, use the AWS Lambda Power Tuning tool.

Timeout

The AWS Lambda timeout value dictates how long a function can run before Lambda terminates it. At the time of this publication, the maximum timeout for a Lambda function is 900 seconds. This limit means that a single invocation of a Lambda function cannot run longer than 900 seconds (15 minutes).

It is important to analyze how long your function runs. When you analyze the duration, you can better identify any problems that might push the function beyond your expected duration. Load testing your Lambda function is the best way to determine the optimum timeout value.
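
If you manage functions programmatically, both memory and timeout live on the function configuration. Here is a minimal boto3 sketch, assuming a function named my-function (the values shown are arbitrary starting points, not recommendations):

import boto3

lambda_client = boto3.client('lambda')

# Set memory (MB) and timeout (seconds); CPU scales with the memory setting
lambda_client.update_function_configuration(
    FunctionName='my-function',
    MemorySize=1024,  # 128 MB up to 10,240 MB
    Timeout=30,       # up to 900 seconds (15 minutes)
)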

Lambda billing costs

With AWS Lambda, you pay only for what you use. You are charged based on the number of requests for your functions and the duration, the time it takes for your code to run. Lambda counts a request each time it starts running in response to an event notification or an invoke call, including test invokes from the console.

Duration is calculated from the time your code begins running until it returns or otherwise terminates, rounded up to the nearest 1 ms. Price depends on the amount of memory you allocate to your function, not the amount of memory your function uses. If you allocate 10 GB to a function and the function only uses 2 GB, you are charged for the 10 GB. This is another reason to test your functions using different memory allocations to determine which is the most beneficial for the function and your budget. 

In the AWS Lambda resource model, you can choose the amount of memory you want for your function and are allocated proportional CPU power and other resources. An increase in memory triggers an equivalent increase in CPU available to your function. The AWS Lambda Free Tier includes 1 million free requests per month and 400,000 GB-seconds of compute time per month.
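
To make the math concrete, here is a small worked example. The prices below are illustrative only (they vary by Region and architecture); the point is that cost follows allocated memory times billed duration, plus a per-request charge:

price_per_gb_second = 0.0000166667  # illustrative compute price
price_per_million_requests = 0.20   # illustrative request price

memory_gb = 10.0      # billed on allocation, even if the function uses 2 GB
duration_s = 0.120    # 120 ms per invocation, rounded up to the nearest 1 ms
requests = 5_000_000  # invocations per month

gb_seconds = memory_gb * duration_s * requests  # 6,000,000 GB-seconds
compute_cost = gb_seconds * price_per_gb_second                     # ~$100.00
request_cost = (requests / 1_000_000) * price_per_million_requests  # $1.00
# The free tier (1M requests, 400,000 GB-seconds) would be subtracted first
print(round(compute_cost + request_cost, 2))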

The balance between power and duration

Depending on the function, you might find that the higher memory level might actually cost less because the function can complete much more quickly than at a lower memory configuration.

You can use an open-source tool called Lambda Power Tuning to find the best configuration for a function. The tool helps you visualize and fine-tune the memory and power configurations of Lambda functions. The tool runs in your own AWS account (powered by AWS Step Functions) and supports three optimization strategies: cost, speed, and balanced. It's language-agnostic, so you can optimize Lambda functions written in any language.

Concurrency and scaling

Concurrency is the third major configuration that affects your function’s performance and its ability to scale on demand. Concurrency is the number of invocations your function runs at any given moment. When your function is invoked, Lambda launches an instance of the function to process the event. When the function code finishes running, it can handle another request. If the function is invoked again while the first request is still being processed, another instance is allocated. Having more than one invocation running at the same time is the function’s concurrency.

Concurrent invocations

As an analogy, you can think of concurrency as the total capacity of a restaurant for serving a certain number of diners at one time. If you have seats in the restaurant for 100 diners, only 100 people can sit at the same time. Anyone who comes while the restaurant is full must wait for a current diner to leave before a seat is available. If you use a reservation system, and a dinner party has called to reserve 20 seats, only 80 of those 100 seats are available for people without a reservation. Lambda functions also have a concurrency limit and a reservation system that can be used to set aside runtime for specific instances.

Concurrency types

Unreserved concurrency

The amount of concurrency that is not allocated to any specific set of functions. The minimum is 100. Unreserved concurrency allows functions that do not have any reserved concurrency to still run. If you reserve all your concurrency for one or two functions, no concurrency is left for any other function. Keeping at least 100 unreserved allows all your functions to run when they are invoked.

Reserved concurrency

Guarantees the maximum number of concurrent instances for the function. When a function has reserved concurrency, no other function can use that concurrency. No charge is incurred for configuring reserved concurrency for a function.

Provisioned concurrency

Initializes a requested number of runtime environments so that they are prepared to respond immediately to your function’s invocations. This option is used when you need high performance and low latency. 

You pay for the amount of provisioned concurrency that you configure and for the period of time that you have it configured. 

For example, you might want to increase provisioned concurrency when you are expecting a significant increase in traffic. To avoid paying for unnecessary warm environments, you scale back down when the event is over.
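
For reference, each concurrency type is set with a single API call. A minimal boto3 sketch, assuming a function named my-function with a published alias named live (both placeholders):

import boto3

lambda_client = boto3.client('lambda')

# Reserved concurrency: cap (and guarantee) this function at 100 concurrent
# instances; there is no charge for this setting
lambda_client.put_function_concurrency(
    FunctionName='my-function',
    ReservedConcurrentExecutions=100,
)

# Provisioned concurrency: keep 50 warm environments on the 'live' alias;
# you pay for these for as long as they are configured
lambda_client.put_provisioned_concurrency_config(
    FunctionName='my-function',
    Qualifier='live',
    ProvisionedConcurrentExecutions=50,
)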

Reasons for setting concurrency limits

Limit a function’s concurrency to achieve the following:

  • Limit costs
  • Regulate how long it takes you to process a batch of events
  • Match it with a downstream resource that cannot scale as quickly as Lambda

Reserve function concurrency to achieve the following: 

  • Ensure that you can handle peak expected volume for a critical function 
  • Address invocation errors

CloudWatch metrics for concurrency

When your function finishes processing an event, Lambda sends metrics about the invocation to Amazon CloudWatch. You can build graphs and dashboards with these metrics in the CloudWatch console. You can also set alarms to respond to changes in use, performance, or error rates.

CloudWatch includes two built-in metrics that help determine concurrency: ConcurrentExecutions and UnreservedConcurrentExecutions.

ConcurrentExecutions

Shows the sum of concurrent invocations for a given function at a given point in time. Provides historical data on how functions are performing. 

You can view all functions in the account or only the functions that have a custom concurrency limit specified.

UnreservedConcurrentExecutions

Shows the sum of the concurrency for the functions that do not have a custom concurrency limit specified.
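
You can pull these metrics outside the console too. Here is a minimal sketch that reads the peak ConcurrentExecutions for a single (hypothetical) function over the last hour:

from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client('cloudwatch')

stats = cloudwatch.get_metric_statistics(
    Namespace='AWS/Lambda',
    MetricName='ConcurrentExecutions',
    Dimensions=[{'Name': 'FunctionName', 'Value': 'my-function'}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=60,               # one datapoint per minute
    Statistics=['Maximum'],  # peak concurrency in each period
)
for point in sorted(stats['Datapoints'], key=lambda p: p['Timestamp']):
    print(point['Timestamp'], point['Maximum'])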

Enjoy the Cloud

Osama

Cheers

AWS ECS Project

This is another DevOps project. The idea of this project is as follows:

Deploy a sample Django web application with the following specs:

  • The app should be production ready, taking into consideration scalability, availability, and security.
  • The infrastructure to run this application is up to you, but it should be automated via Terraform or CloudFormation. The AWS Well-Architected Framework will be used to evaluate the infrastructure as a whole.
  • CI/CD pipeline
  • Harden the application for a production-ready environment.

The complete project is uploaded to my GitHub HERE.

Thank you

Enjoy the automation

Osama

DevOps Project – Complete Auto Deployment and IaC

The following project has these requirements:

  • I want a CI/CD pipeline for my application; the pipeline must build and test the application code base.
  • The pipeline must build and push a Docker container ready to use.
  • The pipeline must deploy the application across different environments on the target infrastructure.
  • Separate the backend and the frontend in different pipelines and containers.

Other things to add to this project include the following:

  • The infrastructure must be created on the cloud; for the purpose of the assignment, any public cloud can be used.
  • The deployment pipeline must use infrastructure as code with Terraform.
  • The delivered infrastructure must be monitored and audited.
  • The delivered infrastructure must allow multiple personal accounts.
  • The delivered infrastructure must be able to scale automatically.
  • Modify the application to use a real database running on the cloud instead of the in-memory database.

The link for the project is HERE.

Enjoy the automation

Osama

Ansible Provisioning for Instances (Cloud Version) Using Pipelines

Imagine you have multiple instances and you want to change something. Doing this manually takes time, so why not automate the process?

I uploaded one of my projects to automate the process. It automates even the simplest tasks; for example, when a new employee joins and you need to add their SSH key to your instances (you can even choose which VMs they can access), you just add the key in the roles and configure the pipeline on your repo, and the code runs automatically.

I uploaded the project to my GitHub HERE.

Regards

Enjoy the power of automation.

Osama

Docker & Kubernetes Example – Full Project for Free

Okay, I love to post free examples/projects on my GitHub from time to time. This time I chose Docker and Kubernetes; the project idea is very nice and easy to implement.

What does this project do?


  • Create an application that connects to a database, reads some data, and returns this data upon HTTP request. This can be a simple web app that reads a ‘hello world’ string from the MySQL database.
  • Run a database app. The data volume should be persistent.
  • The application from step 1 needs to discover the database from step 2 using Kubernetes native features, and database credentials should NOT be hardcoded in application or Helm chart code (see the sketch after this list).
  • The application should be accessible from outside of Kubernetes.
  • Create a Helm chart that implements all these steps.
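
The project itself is implemented in Java/Spring Boot (as noted below), but here is a short, language-neutral sketch in Python of the discovery and credentials idea: the database is reached through its Service name via cluster DNS, and the credentials arrive as environment variables injected from a Secret, so nothing is hardcoded. The Service, variable, and table names here are made up for illustration:

import os
import pymysql  # assumes a MySQL client library in the app image

conn = pymysql.connect(
    host=os.environ.get('DB_HOST', 'mysql'),  # 'mysql' = the Service name
    user=os.environ['DB_USER'],               # injected from a Secret
    password=os.environ['DB_PASSWORD'],       # injected from a Secret
    database=os.environ.get('DB_NAME', 'hello'),
)
with conn.cursor() as cur:
    cur.execute('SELECT message FROM greetings LIMIT 1')
    print(cur.fetchone()[0])  # expect 'hello world'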

I chose Java as the programming language because the Spring Boot framework is well established and easy to use.

Please follow the readme file and everything will work without any issues. If you have any questions, comment below and I will answer.

GitHub Link HERE

Enjoy the free learning

Osama

Setting up a Jenkins-Based Continuous Delivery Pipeline with Docker

As an important step in agile development, continuous integration is designed to maintain high quality while accelerating product iteration. Every time the code is updated, an automatic test is run to check the code and validate functionality. The code can only be delivered and deployed after it passes the automatic test. This post describes how to combine Jenkins, one of the most popular integration tools, with Alibaba Cloud Container Service to realize automatic testing and automatic image building and pushing.


Deploying Jenkins Applications and the Slave Nodes

1. Create a Jenkins orchestration template.

Create a new template and create the orchestration based on the following content.

jenkins:
  image: 'registry.aliyuncs.com/acs-sample/jenkins:latest'
  ports:
    - '8080:8080'
    - '50000:50000'
  volumes:
    - /var/lib/docker/jenkins:/var/jenkins_home
  privileged: true
  restart: always
  labels:
    aliyun.scale: '1'
    aliyun.probe.url: 'tcp://container:8080'
    aliyun.probe.initial_delay_seconds: '10'
    aliyun.routing.port_8080: jenkins
  links:
    - slave-nodejs
slave-nodejs:
  image: 'registry.aliyuncs.com/acs-sample/jenkins-slave-dind-nodejs'
  restart: always
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  labels:
    aliyun.scale: '1'

2. Use the template to create Jenkins applications and slave nodes.

You can also directly use a Jenkins sample template provided by Alibaba Cloud Container Service to create Jenkins applications and slave nodes.


3. After the successful creation, Jenkins applications and slave nodes will be displayed in the service list.


4. After opening the access endpoint provided by the Container Service, you can use the Jenkins application deployed just now.


Realizing Automatic Test and Automatic Build and Push of Image

Configure the slave container as the slave node of the Jenkins application.

Open the Jenkins application and enter the System Settings interface. Select Manage Node > Create Node, and configure corresponding parameters. See the figure below.


Note: The label is the unique identifier of the slave. The slave container and the Jenkins container run on the Alibaba Cloud platform at the same time. Therefore, you can fill in a container node IP address that is inaccessible from the Internet to isolate the test environment.


When adding the credential, use the jenkins account and password (the initial password is jenkins) from the Dockerfile used to create the slave-nodejs image. The image's Dockerfile address is HERE.

1. Create a project to implement the automatic test.

  1. Create an item and choose to build a free-style software project.
  2. Enter the project name and select a node for running the project. In this example, enter the slave-nodejs-ut node created above.

Configure the source code management and code branch. In this example, use GitHub to manage the source code.


Configure the trigger for building. In this example, automatically trigger project execution by combining GitHub Webhooks and services.


Add the Jenkins service hook to GitHub to implement automatic triggering.

Click the Settings tab on the GitHub project homepage, and click Webhooks & services > Add service and select Jenkins (Git plugin). Enter ${Jenkins IP}/github-webhook/ in the Jenkins hook URL dialog box.

http://jenkins.cd****************.cn-beijing.alicontainer.com/github-webhook/

Add a build step of the Execute shell type and write shell scripts to run the test.


The command in this example is as follows.

pwd
ls
cd chapter2
npm test

Create a project to automatically build and push images.

  1. Create an item and choose to build a free-style software project.
  2. Enter the project name and select a node for running the project. In this example, enter the slave-nodejs-ut node created above.
  3. Configure the source code management and code branch. In this example, use GitHub to manage the source code.
  4. Add the following trigger and set it to build the image automatically only after the unit test succeeds.

Write shell scripts for building and pushing images.


The command in this example is as follows.

cd chapter2
docker build -t registry.aliyuncs.com/qinyujia-test/nodejs-demo .
docker login -u ${yourAccount} -p ${yourPassword} registry.aliyuncs.com
docker push registry.aliyuncs.com/qinyujia-test/nodejs-demo

Automatically Redeploy the Application

Deploy the application for the first time

Use the orchestration template to deploy the image created above to the Container Service and create the nodejs-demo application.

Example

express:
  image: 'registry.aliyuncs.com/qinyujia-test/nodejs-demo'
  expose:
    - '22'
    - '3000'
  restart: always
  labels:
    aliyun.routing.port_3000: express

1. Select the application nodejs-demo just created, and create the trigger.


Add a line to the shell scripts you wrote in Realizing Automatic Test and Automatic Build and Push of Image. The address is the trigger link given by the trigger created above.

curl 'https://cs.console.aliyun.com/hook/trigger?triggerUrl=***==&secret=***'

Change the command in the example from Realizing Automatic Test and Automatic Build and Push of Image as follows.

cd chapter2
docker build -t registry.aliyuncs.com/qinyujia-test/nodejs-demo .
docker login -u ${yourAccount} -p ${yourPassword} registry.aliyuncs.com
docker push registry.aliyuncs.com/qinyujia-test/nodejs-demo
curl 'https://cs.console.aliyun.com/hook/trigger?triggerUrl=***==&secret=***'

After pushing the image, Jenkins automatically triggers redeployment of the nodejs-demo application.

Configure Email Notification for the Results

If you want to send the unit test or image building results to relevant developers or project execution initiators through email, perform the following configurations.

On the Jenkins homepage, click System Management > System Settings, and configure a Jenkins system administrator email.


Install the Extended Email Notification plugin, configure SMTP server and other relevant information, and set the default recipient list. See the figure below.


The above example shows the parameter settings of the Jenkins application system. The following example shows the relevant configurations for Jenkins projects whose results are to be pushed through email.

1. Add post-building operation steps in the Jenkins project, select Editable Email Notification, and enter a recipient list.


2. Add a mailing trigger.


Cheers

Osama

Create a Serverless Website with Alibaba Cloud Function Compute

According to Wikipedia, serverless computing is a cloud computing execution model in which the cloud provider runs the server and dynamically manages the allocation of machine resources. Pricing is based on the actual amount of resources consumed by an application, rather than on pre-purchased units of capacity.

Today I will show you an example of how to create a serverless website, this time using Alibaba Cloud rather than AWS, Azure, or OCI.

Create a Function Compute Service

Go to the console page and click through to Function Compute.

Click the add button beside Services.

In the Service slide out, give your service a name, an optional description, and then slide open the Advanced Settings.

In Advanced Settings you can grant access for Functions to the Internet, to VPC resources, and you can attach storage and a log service to a Function. You can also configure roles.

For our tutorial, we will need Internet access so make sure this configuration is on.

We will leave VPC and Log Configs as they are.

In the Role Config section, select Create New Role, and in the dropdown list pick AliyunOSSReadOnlyAccess as we will be accessing our static webpages from an Object Storage Service bucket.

Click Authorize.

You will see a summary of the Role you created.

Click Confirm Authorization Policy.

You have successfully added the Role to the Service.

Click OK.

You will see the details of the Function Compute Service you just created.

Now let’s create a Function in the Service. Click the add button next to Functions.

You will see the Create Function process. The first part of the process is Function Template.

There are many Function Templates available, including an empty Function for writing your own bespoke Functions.

Alibaba Cloud-supplied Template Functions are very useful as they have relevant method invocation and demo code for getting started quickly with Function Compute.

Let's choose the flask-web Function written in Python 2.7.

Click Select.
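
Before configuring triggers, here is a rough idea of what an HTTP-triggered Python function looks like in Function Compute. This is a minimal hand-written sketch, not the actual flask-web template (which wraps a full Flask app); HTTP triggers invoke a WSGI-style entry point:

# Minimal WSGI-style handler for a Function Compute HTTP trigger (sketch)
HELLO_HTML = b'<h1>Hello from Function Compute</h1>'

def handler(environ, start_response):
    # environ carries the request: method, path, headers, and body
    start_response('200 OK', [('Content-Type', 'text/html')])
    return [HELLO_HTML]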

We are now at the Configure Triggers section of creating a Function.

Select HTTP Trigger from the dropdown list. Give the Trigger a name and choose Authorization details (anonymous does not require authorization).

Choose your HTTP methods and click Next. We are going to build a simple web-form application so we will need both the GET and POST HTTP methods.

Now we arrive at the Configure Function Settings.

Give the Function a name then scroll down to Code details.

We’ll leave the supplied code for now. Scroll down to below the code sample.

You will see Environment Variable input options and Runtime Environment details.

Click Next.

Click Next at Configure Function Permissions.

Verify the Configuration details and click Create.

You will arrive at the Function’s IDE. Here you can enter new code, edit the code directly, upload code folders, run, test, and fix your code.

Scroll down.

Copy the URL as we will need to add this to our static webpages so they can connect to our Function Compute Service and Function.

Set Up and Configure an OSS Bucket

Click through to Object Storage Service on the Products page.

If you haven’t yet activated Object Storage Service, go ahead and activate it. In the OSS console, click Create Bucket.

Choose a name for the OSS Bucket and pick the region – you cannot change the region later. Select the Storage Class – you also cannot change this later.

We have selected Public Read for the Access Control List.

When you’re ready, click OK.

You will see the Overview page for your bucket. Make a note of the public Internet URL.

In the Files tab, upload your static web files.

I uploaded a simple index.html homepage and a background picture. The homepage contains the following script, which calls the Function URL copied earlier and renders the response on the page:

<script type="text/javascript">
    // Replace <<Function URL>> with the Function URL copied earlier
    const functionURL = '<<Function URL>>';
    // Call the Function Compute endpoint and render its response
    const doHome = new XMLHttpRequest();
    doHome.open('GET', functionURL, true);
    doHome.onload = function () {
        document.getElementById('home_message').innerHTML = doHome.responseText;
    };
    doHome.send();
</script>

In Basic Settings, click Configure to configure your Static Pages.

Add the homepage details and click Save.

Now go to a new browser window and access the OSS URL you saved earlier.

Back in the Function Compute console, you can now test the flask-app paths directly from the code.

We already tested index.html with no Path variable. Next, we test the app route signin with GET and check the Headers and status code.

The signin page code is working correctly. You can also check the Body to make sure the correct HTML will render on the page. Notice that because I entered the path variable, signin is appended to the URL.

Of course, any errors you encounter will show up in the Logs section for easy debugging.

Now, let’s test this page on the Internet.

If you get an error here, implement a soft link for the page in OSS. Go to the OSS bucket, click the More dropdown for the HTML file in question, and choose Set soft link.

Give the link a name and click OK.

A link file will appear in the list of static files and you will now be able to access the page online with the relevant soft link and it will render as above.

Back in Function Compute, we can test the POST method in the console with the correct username and password details in the same way.

Add the POST variables to the form upload section in the Body tab.

Now you can test this function online.

Cheers

Osama