Merry Christmas my friends all over the world.
#JCON2021 session now on YouTube!
My presentation, “Automation is simple now using DevOps”, is now live on YouTube here.
The best Oracle blogs from thousands of blogs on the web ranked by traffic, social media followers, domain authority & freshness.
Happy to share that my blog has been chosen for another year as one of the Top 100 Blogs around the world. The list contains talented, experienced, and professional people 🎉🎉🎉
Thank you all for the support.
All-at-once deployments instantly shift traffic from the original (old) Lambda function to the updated (new) Lambda function, all at one time. All-at-once deployments can be beneficial when the speed of your deployments matters. In this strategy, the new version of your code is released quickly, and all your users get to access it immediately.
In a canary deployment, you deploy your new version of your application code and shift a small percentage of production traffic to point to that new version. After you have validated that this version is safe and not causing errors, you direct all traffic to the new version of your code.
A linear deployment is similar to canary deployment. In this strategy, you direct a small amount of traffic to your new version of code at first. After a specified period of time, you automatically increment the amount of traffic that you send to the new version until you’re sending 100% of production traffic.
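On Lambda, both canary and linear strategies can be implemented by weighting traffic between two versions behind an alias. Here is a minimal boto3 sketch (the function name, alias, and version number are hypothetical placeholders, and the `linear_weights` helper just computes the phased schedule):

```python
import time

def linear_weights(step=0.1):
    """Fractions of traffic routed to the new version at each increment."""
    steps = round(1 / step)
    return [round(step * i, 10) for i in range(1, steps + 1)]

def linear_deploy(function_name, alias, new_version, step=0.1, interval=600):
    """Shift traffic to new_version in step-sized increments every `interval` seconds."""
    import boto3  # assumed available; requires AWS credentials when actually run
    client = boto3.client("lambda")
    for weight in linear_weights(step):
        if weight < 1.0:
            # Alias still points at the old version; route `weight` of traffic to the new one.
            client.update_alias(
                FunctionName=function_name,
                Name=alias,
                RoutingConfig={"AdditionalVersionWeights": {new_version: weight}},
            )
            time.sleep(interval)
        else:
            # Final step: point the alias fully at the new version.
            client.update_alias(
                FunctionName=function_name,
                Name=alias,
                FunctionVersion=new_version,
                RoutingConfig={"AdditionalVersionWeights": {}},
            )
```

A canary deployment is the same idea with only two steps: one small initial weight, then 100%. In practice you would let AWS CodeDeploy drive this schedule for you, as described below.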
Comparing deployment strategies
To help you decide which deployment strategy to use for your application, you’ll need to consider each option’s consumer impact, rollback, event model factors, and deployment speed. The comparison table below illustrates these points.
| Deployment | Consumer Impact | Rollback | Event Model Factors | Deployment Speed |
| --- | --- | --- | --- | --- |
| All-at-once | All at once | Redeploy older version | Any event model at low concurrency rate | Immediate |
| Canary/Linear | 1-10% typical initial traffic shift, then phased | Revert 100% of traffic to previous deployment | Better for high-concurrency workloads | Minutes to hours |
Traffic shifting with aliases is directly integrated into AWS SAM. If you’d like to use all-at-once, canary, or linear deployments with your Lambda functions, you can embed that directly into your AWS SAM templates. You can do this in the deployment preferences section of the template. AWS CodeDeploy uses the deployment preferences section to manage the function rollout as part of the AWS CloudFormation stack update. SAM has several pre-built deployment preferences you can use to deploy your code. See the table below for examples.
| Deployment Preferences Type | Description |
| --- | --- |
| Canary10Percent30Minutes | Shifts 10 percent of traffic in the first increment. The remaining 90 percent is deployed 30 minutes later. |
| Canary10Percent5Minutes | Shifts 10 percent of traffic in the first increment. The remaining 90 percent is deployed 5 minutes later. |
| Canary10Percent10Minutes | Shifts 10 percent of traffic in the first increment. The remaining 90 percent is deployed 10 minutes later. |
| Canary10Percent15Minutes | Shifts 10 percent of traffic in the first increment. The remaining 90 percent is deployed 15 minutes later. |
| Linear10PercentEvery10Minutes | Shifts 10 percent of traffic every 10 minutes until all traffic is shifted. |
| Linear10PercentEvery1Minute | Shifts 10 percent of traffic every minute until all traffic is shifted. |
| Linear10PercentEvery2Minutes | Shifts 10 percent of traffic every 2 minutes until all traffic is shifted. |
| Linear10PercentEvery3Minutes | Shifts 10 percent of traffic every 3 minutes until all traffic is shifted. |
| AllAtOnce | Shifts all traffic to the updated Lambda functions at once. |
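In a SAM template this looks roughly like the following sketch (the function name, handler, and alarm are hypothetical placeholders):

```yaml
Resources:
  MyFunction:                     # hypothetical function name
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler        # placeholder handler
      Runtime: python3.9
      AutoPublishAlias: live      # publishes a version and an alias used for traffic shifting
      DeploymentPreference:
        Type: Canary10Percent10Minutes
        Alarms:
          - !Ref MyErrorsAlarm    # hypothetical CloudWatch alarm; if it fires, CodeDeploy rolls back
```

With `AutoPublishAlias` set, each stack update publishes a new function version and CodeDeploy shifts alias traffic according to the chosen preference type.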
When you check a piece of code into source control, you don’t want to wait for a human to manually approve it or have each piece of code run through different quality checks. Using a CI/CD pipeline can help automate the steps required to release your software deployment and standardize on a core set of quality checks.
The idea of this project is the following:
You need to develop and deploy a Python app that writes a new file to S3 on every execution. These files need to be retained for only 24 hours.
The content of the file is not important, but add the date and time as a prefix for your file names.
The bucket names should be the following for QA and Staging, respectively:
The app will run as a Docker container in a Kubernetes cluster every 5 minutes. There is one Namespace for QA and a different Namespace for Staging in the cluster. You don’t need to provide tests, but you need to be sure the app will work.
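A minimal sketch of such an app (the `BUCKET_NAME` environment variable and the file body are assumptions, since the actual bucket names are not shown above; running it for real requires boto3 and AWS credentials):

```python
import os
from datetime import datetime, timezone

def object_key(now=None):
    """Build the object name with the date and time as a prefix."""
    now = now or datetime.now(timezone.utc)
    return now.strftime("%Y-%m-%d_%H-%M-%S") + "_heartbeat.txt"

def upload(bucket):
    import boto3  # assumed available in the container image
    s3 = boto3.client("s3")
    # Content doesn't matter per the task; only the timestamped name does.
    s3.put_object(Bucket=bucket, Key=object_key(), Body=b"ping")

if __name__ == "__main__":
    # Bucket is taken from the environment so the same image works in both namespaces.
    upload(os.environ["BUCKET_NAME"])
```

The 24-hour retention is best handled by an S3 lifecycle rule (expiration after 1 day) on each bucket rather than by the app itself, and the 5-minute cadence by a Kubernetes CronJob (`schedule: "*/5 * * * *"`) in each namespace.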
I will give two presentations about DevOps.
You can register here
The hashtag in use is #APACGBT2021
We’ll look at considerations for migrating existing applications to serverless and common ways to extend serverless within your architecture.
At a high level, there are three migration patterns that you might follow to move your legacy applications to a serverless model.
Leapfrog
As the name suggests, you bypass interim steps and go straight from an on-premises legacy architecture to a serverless cloud architecture.
Organic
You move on-premises applications to the cloud in more of a “lift and shift” model. In this model, existing applications are kept intact, either running on Amazon Elastic Compute Cloud (Amazon EC2) instances or with some limited rewrites to container services like Amazon Elastic Kubernetes Service (Amazon EKS)/Amazon Elastic Container Service (Amazon ECS) or AWS Fargate.
Developers experiment with Lambda in low-risk internal scenarios like log processing or cron jobs. As you gain more experience, you might use serverless components for tasks like data transformations and parallelization of processes.
At some point in the adoption curve, you take a more strategic look at how serverless and microservices might address business goals like market agility, developer innovation, and total cost of ownership.
You get buy-in for a more long-term commitment to invest in modernizing your applications and select a production workload as a pilot. With initial success and lessons learned, adoption accelerates, and more applications are migrated to microservices and serverless.
Strangler
With the strangler pattern, an organization incrementally and systematically decomposes monolithic applications by creating APIs and building event-driven components that gradually replace components of the legacy application.
Distinct API endpoints can point to old vs. new components, and safe deployment options (like canary deployments) let you point back to the legacy version with very little risk.
New feature branches can be “serverless first,” and legacy components can be decommissioned as they are replaced. This pattern represents a more systematic approach to adopting serverless, allowing you to move to critical improvements where you see benefit quickly but with less risk and upheaval than the leapfrog pattern.
Migration questions to answer:
Application Load Balancer vs. API Gateway for directing traffic to serverless targets
| Application Load Balancer | Amazon API Gateway |
| --- | --- |
| Easier to transition existing compute stack where you are already using an Application Load Balancer | Good for building REST APIs and integrating with other services and Lambda functions |
| Supports authorization via OIDC-capable providers, including Amazon Cognito user pools | Supports authorization via AWS Identity and Access Management (IAM), Amazon Cognito, and Lambda authorizers |
| Charged by the hour, based on Load Balancer Capacity Units | Charged based on requests served |
| May be more cost-effective for a steady stream of traffic | May be more cost-effective for spiky patterns |
|  | Additional features for API management: export SDK for clients; use throttling and usage plans to control access; maintain multiple versions of an API; canary deployments |
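As a rough illustration of how the two pricing models diverge (the prices below are illustrative assumptions, not current AWS list prices; check the pricing pages for your region):

```python
def alb_monthly_cost(hours=730, hourly=0.0225, lcu_hours=730, lcu_price=0.008):
    # ALB: billed per hour, plus per Load Balancer Capacity Unit hour (illustrative prices).
    return hours * hourly + lcu_hours * lcu_price

def api_gw_monthly_cost(requests, price_per_million=3.50):
    # API Gateway REST APIs: billed per request served (illustrative price).
    return requests / 1_000_000 * price_per_million

# A low-volume or spiky API favors per-request billing; a steady high-volume
# API can amortize the ALB's fixed hourly charge.
spiky = api_gw_monthly_cost(1_000_000)    # 1M requests/month
steady = api_gw_monthly_cost(50_000_000)  # 50M requests/month
```

With these assumed prices, 1M requests/month is far cheaper on API Gateway than the ALB's fixed hourly cost, while at 50M requests/month the ALB wins; the crossover point depends entirely on your region's actual prices and LCU consumption.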
Consider three factors when comparing costs of ownership:
AWS VPN comprises two services: AWS Site-to-Site VPN and AWS Client VPN.
Based on IPsec technology, AWS Site-to-Site VPN uses a VPN tunnel to pass data from the customer network to or from AWS.
One AWS Site-to-Site VPN connection consists of two tunnels. Each tunnel terminates in a different Availability Zone on the AWS side, but it must terminate on the same customer gateway on the customer side.
Customer gateway
A resource that you create and configure in AWS to represent your on-premises gateway device. The resource contains information about the type of routing used by the Site-to-Site VPN connection, the BGP ASN, and other optional configuration information.
Customer gateway device
A customer gateway device is a physical device or software application on your side of the AWS Site-to-Site VPN connection.
Virtual private gateway
A virtual private gateway is the VPN concentrator on the Amazon side of the AWS Site-to-Site VPN connection. You use a virtual private gateway or a transit gateway as the gateway for the Amazon side of the AWS Site-to-Site VPN connection.
Transit gateway
A transit gateway is a transit hub that can be used to interconnect your VPCs and on-premises networks. You use a transit gateway or virtual private gateway as the gateway for the Amazon side of the AWS Site-to-Site VPN connection.
In addition, when you connect your VPCs to a common on-premises network, we recommend that you use nonoverlapping CIDR blocks for your networks.
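You can check candidate CIDR blocks for overlap up front with Python's standard `ipaddress` module; the example ranges below are arbitrary:

```python
import ipaddress

def overlapping_pairs(cidrs):
    """Return the pairs of CIDR blocks that overlap each other."""
    nets = [ipaddress.ip_network(c) for c in cidrs]
    return [
        (str(a), str(b))
        for i, a in enumerate(nets)
        for b in nets[i + 1:]
        if a.overlaps(b)
    ]

# Example: two VPC ranges and an on-premises range
print(overlapping_pairs(["10.0.0.0/16", "10.1.0.0/16", "10.0.128.0/17"]))
# → [('10.0.0.0/16', '10.0.128.0/17')]
```

Any pair returned here would cause ambiguous routing once both networks are reachable over the VPN.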
Based on OpenVPN technology, Client VPN is a managed client-based VPN service that lets you securely access your AWS resources and resources in your on-premises network. With Client VPN, you can access your resources from any location using an OpenVPN-based VPN client.
Client VPN endpoint
Your Client VPN administrator creates and configures a Client VPN endpoint in AWS. Your administrator controls which networks and resources you can access when you establish a VPN connection.
VPN client application
This is the software application that you use to connect to the Client VPN endpoint and establish a secure VPN connection.
Client VPN endpoint configuration file
This is a configuration file that is provided to you by your Client VPN administrator. The file includes information about the Client VPN endpoint and the certificates required to establish a VPN connection. You load this file into your chosen VPN client application.
To learn more about this command, read the link here.
For example, I need to run the following sudo commands without a password prompt. First, create a drop-in sudoers file with visudo:
sudo visudo -f /etc/sudoers.d/shutdown
Then add the following lines to it:
user host = (root) NOPASSWD: /sbin/shutdown
user host = (root) NOPASSWD: /sbin/reboot
This allows the user user to run the listed commands on host without entering a password. All other sudo commands will still require a password.