Migrating to Serverless

We’ll look at considerations for migrating existing applications to serverless and common ways of extending a serverless architecture.

At a high level, there are three migration patterns that you might follow to migrate your legacy applications to a serverless model.

Leapfrog

As the name suggests, you bypass interim steps and go straight from an on-premises legacy architecture to a serverless cloud architecture.

Organic

You move on-premises applications to the cloud in more of a “lift and shift” model. In this model, existing applications are kept intact, either running on Amazon Elastic Compute Cloud (Amazon EC2) instances or with some limited rewrites to container services like Amazon Elastic Kubernetes Service (Amazon EKS)/Amazon Elastic Container Service (Amazon ECS) or AWS Fargate.

Developers experiment with Lambda in low-risk internal scenarios like log processing or cron jobs. As you gain more experience, you might use serverless components for tasks like data transformations and parallelization of processes.

At some point in the adoption curve, you take a more strategic look at how serverless and microservices might address business goals like market agility, developer innovation, and total cost of ownership.

You get buy-in for a more long-term commitment to invest in modernizing your applications and select a production workload as a pilot. With initial success and lessons learned, adoption accelerates, and more applications are migrated to microservices and serverless.

Strangler

With the strangler pattern, an organization incrementally and systematically decomposes monolithic applications by creating APIs and building event-driven components that gradually replace components of the legacy application.

Distinct API endpoints can point to old vs. new components, and safe deployment options (like canary deployments) let you point back to the legacy version with very little risk.

New feature branches can be “serverless first,” and legacy components can be decommissioned as they are replaced. This pattern represents a more systematic approach to adopting serverless, allowing you to move to critical improvements where you see benefit quickly but with less risk and upheaval than the leapfrog pattern.
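
For example, one way to do canary-style traffic shifting is with a Lambda weighted alias; the sketch below is illustrative only, and the function name, versions, and weight are hypothetical:

# Shift ~5% of invocations to version 2 while the "live" alias still points at version 1
aws lambda update-alias \
  --function-name orders-service \
  --name live \
  --function-version 1 \
  --routing-config 'AdditionalVersionWeights={"2"=0.05}'

If the canary misbehaves, removing the additional weight sends all traffic back to version 1.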

Migration questions to answer:

  • What does this application do, and how are its components organized?
  • How can you break your data needs up based on the command query responsibility segregation (CQRS) pattern?
  • How does the application scale, and what components drive the capacity you need?
  • Do you have schedule-based tasks?
  • Do you have workers listening to a queue?
  • Where can you refactor or enhance functionality without impacting the current implementation?

Application Load Balancer vs. API Gateway for directing traffic to serverless targets

Application Load Balancer

  • Easier to transition an existing compute stack where you are already using an Application Load Balancer
  • Supports authorization via OIDC-capable providers, including Amazon Cognito user pools
  • Charged by the hour, based on Load Balancer Capacity Units
  • May be more cost-effective for a steady stream of traffic

Amazon API Gateway

  • Good for building REST APIs and integrating with other services and Lambda functions
  • Supports authorization via AWS Identity and Access Management (IAM), Amazon Cognito, and Lambda authorizers
  • Charged based on requests served
  • May be more cost-effective for spiky traffic patterns
  • Additional features for API management: export SDKs for clients, throttling and usage plans to control access, multiple versions of an API, and canary deployments
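
As a quick illustration of the API Gateway route, the sketch below uses the HTTP API quick-create option to front a Lambda function; the API name, region, account ID, and function name are placeholders:

# Create an HTTP API whose default route invokes the Lambda function
aws apigatewayv2 create-api \
  --name orders-api \
  --protocol-type HTTP \
  --target arn:aws:lambda:us-east-1:123456789012:function:orders-service

Note that the function also needs a resource-based permission allowing API Gateway to invoke it (for example via aws lambda add-permission).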

Consider three factors when comparing costs of ownership:

  • The infrastructure cost to run your workload (for example, the costs for your provisioned EC2 capacity vs. the per-invocation cost of your Lambda functions)
  • The development effort to plan, architect, and provision resources on which the application will run
  • The costs of your team’s time to maintain the application once it is in production


Cheers

Osama

Alibaba Cloud – Small Introduction

I had the chance to work with and test Alibaba Cloud, so I thought it would be a good idea to write something about it. I have already used AWS, Azure, and OCI, so this will be my fourth cloud vendor.

Alibaba Cloud is the cloud subsidiary of the e-commerce hub Alibaba Group. The group launched its cloud services in 2009. Today, cloud is the most ambitious project of Alibaba Group, where they are investing heavily to compete with AWS.

The company has an extensive range of cloud computing products and services, divided into seven categories: Elastic Computing and Networking, Security and Management, Database, Application Services, Domains and Website, Storage and CDN, and Analytics. Alibaba Cloud customers get the benefits of cloud security, record-breaking computing power, data safeguards, and more.

I really like the cloud and the portal; it is very simple and easy to use. In addition to this, it has a lot of different features, similar to AWS; you can check them from here.

Alibaba Cloud is also known by another name, Aliyun. It has 19 regional data centres globally, including China North, China South, China East, US West, US East, Europe, the United Kingdom, the Middle East, Japan, Hong Kong, Singapore, Australia, Malaysia, India, and Indonesia. The data center in Germany is currently operated by Vodafone Germany.

Some of the clients using this cloud: Ford, AirAsia, Lazada, and more.

Some of the services provided by Alibaba Cloud:

  • Elastic Computing
  • Storage & CDN
  • Networking
  • Database Services
  • Security

I will discuss each one of them in a separate post; the next one will be about Alibaba services.

Cheers

Osama

Azure Resource quick guide

In general, a load balancer distributes traffic evenly among the systems in a pool. A load balancer can help you achieve both high availability and resiliency.

Say you start by adding additional VMs, each configured identically, to each tier. The idea is to have additional systems ready, in case one goes down, or is serving too many users at the same time.

Azure Load Balancer is a load balancer service that Microsoft provides that helps take care of the maintenance for you. Load Balancer supports inbound and outbound scenarios, provides low latency and high throughput, and scales up to millions of flows for all Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) applications. You can use Load Balancer with incoming internet traffic, internal traffic across Azure services, port forwarding for specific traffic, or outbound connectivity for VMs in your virtual network.

When you manually configure typical load balancer software on a virtual machine, there’s a downside: you now have an additional system that you need to maintain. If your load balancer goes down or needs routine maintenance, you’re back to your original problem.
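
As a rough sketch, creating a basic Azure Load Balancer with the Azure CLI could look like the following; the resource group, names, and SKU are placeholders, and flags can vary between CLI versions:

# Create a Standard load balancer with a frontend IP and an empty backend pool
az network lb create \
  --resource-group MyResourceGroup \
  --name MyLoadBalancer \
  --sku Standard \
  --public-ip-address MyPublicIP \
  --frontend-ip-name MyFrontEnd \
  --backend-pool-name MyBackEndPool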

Azure Application Gateway

If all your traffic is HTTP, a potentially better option is to use Azure Application Gateway. Application Gateway is a load balancer designed for web applications. It uses Azure Load Balancer at the transport level (TCP) and applies sophisticated URL-based routing rules to support several advanced scenarios.

Benefits

  • Cookie affinity. Useful when you want to keep a user session on the same backend server.
  • SSL termination. Application Gateway can manage your SSL certificates and pass unencrypted traffic to the backend servers to avoid encryption/decryption overhead. It also supports full end-to-end encryption for applications that require that.
  • Web application firewall. Application Gateway supports a sophisticated web application firewall (WAF) with detailed monitoring and logging to detect malicious attacks against your network infrastructure.
  • URL rule-based routes. Application Gateway allows you to route traffic based on URL patterns, source IP address and port to destination IP address and port. This is helpful when setting up a content delivery network.
  • Rewrite HTTP headers. You can add or remove information from the inbound and outbound HTTP headers of each request to enable important security scenarios, or scrub sensitive information such as server names.
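
A hedged sketch of creating an Application Gateway with cookie-based affinity enabled via the Azure CLI; every name and address below is a placeholder, and the exact flags differ between CLI versions:

# Application Gateway v2 with two backend servers and cookie affinity turned on
az network application-gateway create \
  --resource-group MyResourceGroup \
  --name MyAppGateway \
  --sku Standard_v2 \
  --capacity 2 \
  --vnet-name MyVNet \
  --subnet MyAppGwSubnet \
  --public-ip-address MyPublicIP \
  --servers 10.0.1.4 10.0.1.5 \
  --http-settings-cookie-based-affinity Enabled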

What is a Content Delivery Network (CDN)?

A content delivery network (CDN) is a distributed network of servers that can efficiently deliver web content to users. It is a way to get content to users in their local region to minimize latency. CDN can be hosted in Azure or any other location. You can cache content at strategically placed physical nodes across the world and provide better performance to end users. Typical usage scenarios include web applications containing multimedia content, a product launch event in a particular region, or any event where you expect a high-bandwidth requirement in a region.
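
For instance, a minimal sketch of standing up a CDN profile and endpoint with the Azure CLI; the profile name, endpoint name, and origin are placeholders:

# Create a CDN profile, then an endpoint that caches content from the origin
az cdn profile create \
  --resource-group MyResourceGroup \
  --name MyCdnProfile \
  --sku Standard_Microsoft

az cdn endpoint create \
  --resource-group MyResourceGroup \
  --profile-name MyCdnProfile \
  --name my-cdn-endpoint \
  --origin www.contoso.com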

DNS

DNS, or Domain Name System, is a way to map user-friendly names to their IP addresses. You can think of DNS as the phonebook of the internet.

How can you make your site, which is located in the United States, load faster for users located in Europe or Asia?

Network latency in Azure

Latency refers to the time it takes for data to travel over the network. Latency is typically measured in milliseconds.

Compare latency to bandwidth. Bandwidth refers to the amount of data that can fit on the connection. Latency refers to the time it takes for that data to reach its destination.

One way to reduce latency is to provide exact copies of your service in more than one region and route users to the closest endpoint. One answer here is Azure Traffic Manager. Traffic Manager uses the DNS server that’s closest to the user to direct user traffic to a globally distributed endpoint. Traffic Manager doesn’t see the traffic that’s passed between the client and the server; rather, it directs the client web browser to a preferred endpoint. Traffic Manager can route traffic in a few different ways, such as to the endpoint with the lowest latency.
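
A rough sketch of creating a performance-routed Traffic Manager profile and attaching an endpoint with the Azure CLI; the names, DNS prefix, and resource ID are placeholders:

# Profile that routes each user to the lowest-latency endpoint
az network traffic-manager profile create \
  --resource-group MyResourceGroup \
  --name MyTrafficProfile \
  --routing-method Performance \
  --unique-dns-name myapp-contoso

# Register one regional deployment as an endpoint
az network traffic-manager endpoint create \
  --resource-group MyResourceGroup \
  --profile-name MyTrafficProfile \
  --name WestEuropeEndpoint \
  --type azureEndpoints \
  --target-resource-id <resource ID of the regional web app or public IP>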

Cheers

Osama

Migrating from MongoDB to Azure Cosmos DB Using mongorestore and mongodump (Manual/Offline)

In this post, I will discuss how to migrate from MongoDB (in my case the database was hosted on AWS) to Azure Cosmos DB. I searched online for articles on how to do that; the problem I faced was that most of them discussed the same approach, which is online and uses third-party software, and that was not applicable for me due to security reasons. Therefore, I decided to post about it; maybe it will be useful for someone else.

Usually the easiest way is to use Azure Database Migration Service to perform an offline/online migration of databases from an on-premises or cloud instance of MongoDB to Azure Cosmos DB’s API for MongoDB.

There are some prerequisites before starting the migration; to learn more about them, read here. The same link explains the different migration options. However, before you start, you should create an instance of Azure Cosmos DB.

Preparation of target Cosmos DB account

Create an Azure Cosmos DB account and select MongoDB as the API. Pre-create your databases through the Azure portal.
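
If you prefer the CLI over the portal, a minimal sketch might look like this; the account name and resource group are placeholders:

# Cosmos DB account exposing the MongoDB API
az cosmosdb create \
  --resource-group MyResourceGroup \
  --name mycosmosaccount \
  --kind MongoDB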

The home page for azure Cloud

From the search bar, just search for “Azure Cosmos DB”.

Azure Cosmos DB

You have to add a new account for the migration. Since we are migrating from MongoDB, the API should be “Azure Cosmos DB for MongoDB API”.

Create cosmos db

The target is ready for migration, but we have to check the connection string so we can use it in our migration from AWS to Azure.

Get the MongoDB connection string to customize

  • In the Azure Cosmos DB blade, select the API.
  • In the left pane of the account blade, click Connection String.
  • The Connection String blade opens. It has all the information necessary to connect to the account by using a driver for MongoDB, including a preconstructed connection string.
Connection string
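
Alternatively, you can pull the same connection strings from the Azure CLI; the account name and resource group are placeholders:

az cosmosdb keys list \
  --resource-group MyResourceGroup \
  --name mycosmosaccount \
  --type connection-strings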

On the MongoDB (source) server, you have to take a backup of the database. Once the backup is completed, there is no need to move it to another server. Mongo provides two ways of taking a backup: mongodump (a binary dump) or mongoexport (which generates a JSON file).

For example, using mongodump:

mongodump --host <hostname:port> --db <database name> --collection <collection name> --gzip --out /u01/user/

For mongoexport:

mongoexport --host <hostname:port> --db <database name> --collection <collection name> --out <location for JSON file>

My advice is to run these commands in the background, especially if the database size is big, and generate a log for the background process so you can check it frequently until they finish.
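
For example, a sketch of running the dump in the background with a log file; the paths are placeholders:

nohup mongodump --host <hostname:port> --db <database name> --collection <collection name> \
  --gzip --out /u01/user/ > /u01/user/mongodump.log 2>&1 &

# Check on it from time to time
tail -f /u01/user/mongodump.log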

Run the restore/import command from the source server. Do you remember the connection string? Now we will use it to connect to Azure Cosmos DB. If you used mongodump, then to restore you have to use mongorestore, like the below:

mongorestore --host testserver.mongo.cosmos.azure.com --port 10255 -u testserver -p  w3KQ5ZtJbjPwTmxa8nDzWhVYRuSe0BEOF8dROH6IUXq7rJgiinM3DCDeSWeEdcOIgyDuo4EQbrSngFS7kzVWlg== --db test --collection test /u01/user/notifications_service/user_notifications.bson.gz  --gzip --ssl --sslAllowInvalidCertificates

Notice the following:

  • host: from the Azure portal/connection string.
  • port: from the Azure portal/connection string.
  • password: from the Azure portal/connection string.
  • db: the name of the database you want created in Azure Cosmos DB; it will be created during the migration to Azure.
  • collection: the name of the collection you want created in Azure Cosmos DB; it will be created during the migration to Azure.
  • The location of the backup.
  • --gzip, because I compressed the backup.
  • The migration requires SSL authentication; otherwise it will fail.

If you used mongoexport, then to import you have to use mongoimport, like the below:

mongoimport --host testserver.mongo.cosmos.azure.com:10255 -u testserver -p w3KQ5ZtJbjPwTmxa8nDzWhVYRuSe0BEOF8dROH6IUXq7rJgiinM3DCDeSWeEdcOIgyDuo4EQbrSngFS7kzVWlg== --db test --collection test --ssl --sslAllowInvalidCertificates --type json --file /u01/dump/users_notifications/service_notifications.json

Once you run the command, the restore/import starts loading the data into Azure Cosmos DB.

Note: if you are migrating huge databases, you need to increase the Cosmos DB throughput at the database/collection level; after the migration is finished, return everything to the normal settings because of the cost.
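
A hedged sketch of raising the database throughput from the Azure CLI before the load and lowering it again afterwards; the account, database name, and RU/s value are placeholders, and the exact subcommand can vary with the CLI version:

az cosmosdb mongodb database throughput update \
  --resource-group MyResourceGroup \
  --account-name mycosmosaccount \
  --name test \
  --throughput 10000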

Cheers

Osama

Encryption on Azure

What is encryption?

Encryption is the process of making data unreadable and unusable to unauthorized viewers. To use or read the encrypted data, it must be decrypted, which requires the use of a secret key. 

There are two different types (see the OpenSSL sketch after this list):

  • Symmetric encryption: you use the same key to encrypt and decrypt the data.
  • Asymmetric encryption: you use different keys, for example a public key and a private key.
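
A minimal sketch of both types using OpenSSL; the file names are only for illustration:

# Symmetric: the same passphrase-derived key encrypts and decrypts
openssl enc -aes-256-cbc -pbkdf2 -in secrets.txt -out secrets.enc
openssl enc -d -aes-256-cbc -pbkdf2 -in secrets.enc -out secrets.txt

# Asymmetric: encrypt with the public key, decrypt with the private key
openssl genrsa -out private.pem 2048
openssl rsa -in private.pem -pubout -out public.pem
openssl pkeyutl -encrypt -pubin -inkey public.pem -in note.txt -out note.enc
openssl pkeyutl -decrypt -inkey private.pem -in note.enc -out note.txt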

Both of these types can be applied in two different ways:

  • Encryption at rest, which protects stored data, such as data in a database or in a storage account.
  • Encryption in transit, which protects data actively moving from one location to another.

So, there are different types of encryption provided by Azure:

  • Encrypt raw storage
    • Azure Storage Service Encryption: encrypts your data before persisting it to Azure Managed Disks, Azure Blob storage, Azure Files, or Azure Queue storage, and decrypts the data before retrieval. It is low-level encryption protection for data written to physical disk.
  • Encrypt virtual machine disks
    • Azure Disk Encryption: this method helps you encrypt the actual Windows or Linux VM disks; the best way to manage the disk encryption keys is with Azure Key Vault.
  • Encrypt databases
    • Transparent data encryption: helps protect Azure SQL Database and Azure SQL Data Warehouse against the threat of malicious activity. It performs real-time encryption and decryption of the database.

The best way to manage these keys and secrets is Azure Key Vault, a cloud service for storing your application secrets. Key Vault helps you control your applications’ secrets by keeping them in a single, central location. Why should I use it? (A CLI sketch follows the list below.)

  • Centralizing the solutions.
  • Securely stored secrets and keys.
  • Monitor access and use.
  • Simplified administration of application secrets.
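
A quick sketch of creating a vault and storing a secret with the Azure CLI; the vault name, resource group, and secret are placeholders:

az keyvault create \
  --resource-group MyResourceGroup \
  --name my-unique-vault-name \
  --location eastus

az keyvault secret set \
  --vault-name my-unique-vault-name \
  --name DbPassword \
  --value 'S0me-Secret-Value'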

There are also two different kinds of certificates in Azure that help you encrypt, for example, a website or application. You need to know that certificates used in Azure are x.509 v3 and can be signed by a trusted certificate authority, or they can be self-signed.

Types of certificates

  • Service certificates are used for cloud services
  • Management certificates are used for authenticating with the management API

Service certificates

Service certificates are attached to cloud services and enable secure communication to and from the service. For example, if you deploy a website, you would want to supply a certificate that can authenticate an exposed HTTPS endpoint. Service certificates, which are defined in your service definition, are automatically deployed to the VM that is running an instance of your role.

Management certificates

Management certificates allow you to authenticate with the classic deployment model. Many programs and tools (such as Visual Studio or the Azure SDK) use these certificates to automate configuration and deployment of various Azure services. However, these types of certificates are not related to cloud services.

Note that you can use Azure Key Vault to store your certificates.

Cheers

Osama

Difference between OIM, OAM, and OID?

OAM: Oracle Access Manager

According to the Oracle documentation:

Oracle Access Management is a Java, Enterprise Edition (Java EE)-based enterprise-level security application that provides a full range of Web-perimeter security functions and Web single sign-on services including identity context, authentication and authorization; policy administration; testing; logging; auditing; and more. It leverages shared platform services including session management, Identity Context, risk analytic, and auditing, and provides restricted access to confidential information.

OAM provides a single point to control all resource grants in an enterprise where multiple applications exist on different platforms.

You can refer to the Oracle documentation here.

OAM provides:

  • Single Sign On (SSO)
  • Authentication
  • Authorization
  • Access Auditing
  • Policy Administration

There is more, but you can refer to the above documentation.

OIM: Oracle Identity Manager

OIM enables enterprises to manage the entire user life cycle across all enterprise resources, both within and beyond a firewall. An Oracle identity management solution provides a mechanism for implementing the user management aspects of a corporate policy. It can also be a means to audit users and their access privileges.

The best example to understand OIM is an employee: when a new employee joins the company, HR handles everything for them (emails, permissions, etc.). With OIM it is different, and all of this can be done automatically.

Refer to the Oracle documentation here.

Finally, OID: Oracle Internet Directory.

Simply put, it’s LDAP.

An online directory is a specialized database that stores and retrieves collections of information about objects. The information can represent any resources that require management, for example:

  • Employee names, titles, and security credentials
  • Information about partners
  • Information about shared resources such as conference rooms and printers

The information in the directory is available to different clients, such as single sign-on solutions, email clients, and database applications. Clients communicate with a directory server by means of the Lightweight Directory Access Protocol (LDAP). Oracle Internet Directory is an LDAP directory that uses an Oracle Database for storage.
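
As an illustration of how such clients query an LDAP directory like OID, here is a hedged ldapsearch sketch; the host, port, bind DN, and search base are hypothetical:

ldapsearch -h oid.example.com -p 3060 \
  -D "cn=orcladmin" -w <password> \
  -b "dc=example,dc=com" -s sub "(objectclass=inetOrgPerson)" cn mail
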
The OID Oracle documentation is here.
Thanks
Osama

/usr/ccs/bin/as: not found/No such file or directory on Solaris 11.2

While trying to install Oracle Database 12c on Solaris 11.2, I faced the following errors in the logs, and DBCA was unable to start:

INFO: sh[2]: /usr/ccs/bin/as: not found [No such file or directory]
INFO: make: Fatal error:
INFO: *** Error code 127

The assembler is expected to come with the default installation, but on Solaris 11 the developer/assembler package is not installed by default.

To install it:

pkg install developer/assembler

and try again.
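
To confirm the assembler is in place before rerunning the installer (a quick sanity check):

pkg list developer/assembler
ls -l /usr/ccs/bin/as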

Cheers
Osama