The best Oracle blogs from thousands of blogs on the web ranked by traffic, social media followers, domain authority & freshness.
Happy to share that my blog has been chosen for another year as one of the Top 100 Blogs around the world; the list features talented, experienced, and professional people 🎉🎉🎉
You need to develop and deploy a Python app that writes a new file to S3 on every execution. These files need to be retained for only 24 hours.
The content of the file is not important, but add the date and time as a prefix to your file names.
The name of the buckets should be the following ones for QA and Staging respectively:
qa-FIRSTNAME-LASTNAME-platform-challenge
staging-FIRSTNAME-LASTNAME-platform-challenge
The app will run as a Docker container in a Kubernetes cluster every 5 minutes. There is a Namespace for QA and a different Namespace for Staging in the cluster. You don’t need to provide tests, but you need to be sure the app will work.
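A minimal sketch of such an app, assuming boto3 for the S3 call and that the bucket name (one of the two above, depending on the Namespace) is injected through an environment variable; the object key format and the file content are illustrative choices:

```python
# app.py - minimal sketch: write one timestamped object to S3 on each run.
# Assumes the bucket name (qa-... or staging-...) is injected via the
# BUCKET_NAME environment variable and that the pod's credentials allow
# s3:PutObject on that bucket.
import os
from datetime import datetime, timezone

import boto3


def main() -> None:
    bucket = os.environ["BUCKET_NAME"]  # differs per Namespace (QA vs. Staging)
    now = datetime.now(timezone.utc)

    # Date and time as a prefix for the file name, as the challenge requires.
    key = now.strftime("%Y-%m-%d-%H-%M-%S") + "-platform-challenge.txt"

    # The content itself is not important; a short marker string is enough.
    boto3.client("s3").put_object(
        Bucket=bucket,
        Key=key,
        Body=f"generated at {now.isoformat()}".encode(),
    )
    print(f"wrote s3://{bucket}/{key}")


if __name__ == "__main__":
    main()
```

The 5-minute schedule maps naturally to a Kubernetes CronJob per Namespace, and the 24-hour retention can be handled with an S3 lifecycle rule that expires objects after one day rather than with application code.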
We’ll look at considerations for migrating existing applications to serverless and common ways of extending a serverless architecture.
At a high level, there are three migration patterns that you might follow to move your legacy applications to a serverless model.
Leapfrog
As the name suggests, you bypass interim steps and go straight from an on-premises legacy architecture to a serverless cloud architecture.
Organic
You move on-premises applications to the cloud in more of a “lift and shift” model. In this model, existing applications are kept intact, either running on Amazon Elastic Compute Cloud (Amazon EC2) instances or with some limited rewrites to container services like Amazon Elastic Kubernetes Service (Amazon EKS)/Amazon Elastic Container Service (Amazon ECS) or AWS Fargate.
Developers experiment with Lambda in low-risk internal scenarios like log processing or cron jobs. As you gain more experience, you might use serverless components for tasks like data transformations and parallelization of processes.
At some point in the adoption curve, you take a more strategic look at how serverless and microservices might address business goals like market agility, developer innovation, and total cost of ownership.
You get buy-in for a more long-term commitment to invest in modernizing your applications and select a production workload as a pilot. With initial success and lessons learned, adoption accelerates, and more applications are migrated to microservices and serverless.
Strangler
With the strangler pattern, an organization incrementally and systematically decomposes monolithic applications by creating APIs and building event-driven components that gradually replace components of the legacy application.
Distinct API endpoints can point to old vs. new components, and safe deployment options (like canary deployments) let you point back to the legacy version with very little risk.
New feature branches can be “serverless first,” and legacy components can be decommissioned as they are replaced. This pattern represents a more systematic approach to adopting serverless, allowing you to make critical improvements quickly where you see benefit, but with less risk and upheaval than the leapfrog pattern.
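One concrete mechanism (among several) for that kind of gradual cut-over is a weighted Lambda alias: the API endpoint keeps targeting the alias while a small share of traffic is shifted to the new version, and pointing back to the legacy version is a single call. The function name, version numbers, and 10% weight below are assumptions for illustration:

```python
# Sketch: shift a small share of traffic to a new Lambda version while
# keeping an instant path back to the legacy one. The function name,
# version numbers, and 10% weight are illustrative assumptions.
import boto3

lam = boto3.client("lambda")

# Send 90% of invocations to the stable version (1) and 10% to the new version (2).
lam.update_alias(
    FunctionName="orders-service",  # hypothetical function
    Name="live",                    # the alias your API endpoint targets
    FunctionVersion="1",            # legacy/stable version
    RoutingConfig={"AdditionalVersionWeights": {"2": 0.10}},
)

# Rolling back is just dropping the additional weight:
# lam.update_alias(FunctionName="orders-service", Name="live",
#                  FunctionVersion="1", RoutingConfig={"AdditionalVersionWeights": {}})
```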
Migration questions to answer:
What does this application do, and how are its components organized?
How can you break up your data needs based on the command query responsibility segregation (CQRS) pattern?
How does the application scale, and what components drive the capacity you need?
Do you have schedule-based tasks?
Do you have workers listening to a queue? (See the sketch after this list.)
Where can you refactor or enhance functionality without impacting the current implementation?
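For example, a worker that today polls a queue often maps almost one-to-one onto a Lambda function behind an SQS event source mapping. A minimal sketch, where process_order is a hypothetical placeholder for the existing worker’s business logic:

```python
# Sketch of a queue worker rewritten as a Lambda handler behind an SQS
# event source mapping. process_order stands in for the existing
# worker's business logic (hypothetical).
import json


def process_order(order: dict) -> None:
    print(f"processing order {order.get('id')}")


def handler(event, context):
    # Lambda delivers SQS messages in batches under event["Records"].
    for record in event["Records"]:
        process_order(json.loads(record["body"]))
```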
Application Load Balancer vs. API Gateway for directing traffic to serverless targets
Application Load Balancer
Easier to transition an existing compute stack where you are already using an Application Load Balancer
Supports authorization via OIDC-capable providers, including Amazon Cognito user pools
Charged by the hour, based on Load Balancer Capacity Units
May be more cost-effective for a steady stream of traffic

Amazon API Gateway
Good for building REST APIs and integrating with other services and Lambda functions
Supports authorization via AWS Identity and Access Management (IAM), Amazon Cognito, and Lambda authorizers
Charged based on requests served
May be more cost-effective for spiky traffic patterns
Additional features for API management: export an SDK for clients, use throttling and usage plans to control access, maintain multiple versions of an API, and run canary deployments
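Whichever front door you choose, the Lambda function behind it receives a slightly different event and must return a slightly different response shape. A minimal sketch of a handler that serves both, following the documented ALB and API Gateway proxy formats (the JSON payload is just an illustration):

```python
# Sketch: one handler that can sit behind either an Application Load
# Balancer target group or an API Gateway proxy integration.
import json


def handler(event, context):
    # ALB events carry an "elb" key in requestContext; API Gateway events do not.
    behind_alb = "elb" in event.get("requestContext", {})

    response = {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": "hello", "via": "alb" if behind_alb else "api-gateway"}),
        "isBase64Encoded": False,
    }
    if behind_alb:
        # ALB responses also expect a statusDescription.
        response["statusDescription"] = "200 OK"
    return response
```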
Consider three factors when comparing costs of ownership:
The infrastructure cost to run your workload (for example, the costs for your provisioned EC2 capacity vs. the per-invocation cost of your Lambda functions; a rough comparison is sketched after this list)
The development effort to plan, architect, and provision resources on which the application will run
The costs of your team’s time to maintain the application once it is in production
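As a rough illustration of the first factor only, the back-of-the-envelope comparison below uses assumed prices and workload numbers; they are placeholders, so check current pricing for your region before drawing conclusions:

```python
# Back-of-the-envelope infrastructure cost comparison. All prices and
# workload numbers are assumptions for illustration only.
HOURS_PER_MONTH = 730

# Assumed always-on instance price (USD/hour).
ec2_hourly = 0.10
ec2_monthly = ec2_hourly * HOURS_PER_MONTH

# Assumed Lambda workload: 2 million invocations/month, 200 ms at 512 MB each.
invocations = 2_000_000
duration_s = 0.2
memory_gb = 0.5
price_per_request = 0.20 / 1_000_000   # assumed USD per request
price_per_gb_second = 0.0000166667     # assumed USD per GB-second

lambda_monthly = (invocations * price_per_request
                  + invocations * duration_s * memory_gb * price_per_gb_second)

print(f"EC2 (always on): ~${ec2_monthly:.2f}/month")
print(f"Lambda (pay per use): ~${lambda_monthly:.2f}/month")
```

The other two factors, the planning and architecture effort and the ongoing maintenance of the application, are not captured by a calculation like this and need to be estimated separately.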