Again, but this time virtual. I remember the tour three years ago, one of the most fantastic trips; I met new people and made new friends. This time it will be virtual due to Coronavirus, with great topics and great geeks.
Register now and don’t miss it; there is always time to learn something new.
If you have ever worked with the cloud and configured subnets, you know there will be public and private subnets, each holding a different number of servers. Have you ever wondered how to access a VM in a private (or even public) subnet without associating it with a public IP? In this post I will show you how.
The figure shows a simple example of that. In this post you will learn how to connect to an instance that is hosted in a private subnet.
This blog post is one of those that took a lot of time and energy; it took me around ten days to complete, to make sure I covered most of the available services and made it readable. Be aware that the services can change while you are reading this post; if you have any comments, or anything to add to this post, please send me an email using the contact us page, or leave a comment below.
I am writing this post to share the services of different cloud providers and a comparison between them; it shows the name each provider uses for each service.
Earlier we used to store our data on hard drives or USB flash drives; cloud computing services have replaced such hard drive technology. Cloud computing is nothing but providing services like storage, databases, servers, networking, and software through the Internet.
Cloud computing is moving fast. In 2020 the cloud is more mature, going multi-cloud, and likely to become more focused on verticals, with a sales ground war as the leading vendors battle for market share.
Notes:
GCP: Google Cloud Platform
OCI: Oracle Cloud Infrastructure
None: does not necessarily mean the service is unavailable from that cloud provider, only that I didn’t look deeper into it or haven’t used it before.
Just a quick post to show and share the services of each cloud provider. Note that the services can change while we are talking now, and this is not a complete list of services; it only shows the basic ones.
This post is about an error when trying to integrate Jenkins with GitHub. I will definitely post about that integration as a video and a blog post, but for this error I would like to note the following:
This happens when changes made on the GitHub side are not reflected on the Jenkins side, simply because the webhook (payload) URL does not include the “/” at the end.
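For example, with the Jenkins GitHub webhook, the payload URL must end with a trailing slash (the hostname below is illustrative):

http://jenkins.example.com:8080/github-webhook     <-- changes on GitHub will NOT trigger Jenkins
http://jenkins.example.com:8080/github-webhook/    <-- correct, note the trailing "/"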
A load balancer distributes traffic evenly among each system in a pool. A load balancer can help you achieve both high availability and resiliency.
Say you start by adding additional VMs, each configured identically, to each tier. The idea is to have additional systems ready, in case one goes down, or is serving too many users at the same time.
Azure Load Balancer is a load balancer service that Microsoft provides that helps take care of the maintenance for you. Load Balancer supports inbound and outbound scenarios, provides low latency and high throughput, and scales up to millions of flows for all Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) applications. You can use Load Balancer with incoming internet traffic, internal traffic across Azure services, port forwarding for specific traffic, or outbound connectivity for VMs in your virtual network.
When you manually configure typical load balancer software on a virtual machine, there’s a downside: you now have an additional system that you need to maintain. If your load balancer goes down or needs routine maintenance, you’re back to your original problem.
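By contrast, a managed load balancer can be created with a single Azure CLI call. A rough sketch (resource names are illustrative, it assumes an existing public IP, and flags can vary between CLI versions):

# Create a managed Standard load balancer fronted by an existing public IP
az network lb create \
  --resource-group myResourceGroup \
  --name myLoadBalancer \
  --sku Standard \
  --public-ip-address myPublicIP \
  --frontend-ip-name myFrontEnd \
  --backend-pool-name myBackEndPool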
Azure Application Gateway
If all your traffic is HTTP, a potentially better option is to use Azure Application Gateway. Application Gateway is a load balancer designed for web applications. It uses Azure Load Balancer at the transport level (TCP) and applies sophisticated URL-based routing rules to support several advanced scenarios.
Benefits
Cookie affinity. Useful when you want to keep a user session on the same backend server.
SSL termination. Application Gateway can manage your SSL certificates and pass unencrypted traffic to the backend servers to avoid encryption/decryption overhead. It also supports full end-to-end encryption for applications that require that.
Web application firewall. Application Gateway supports a sophisticated web application firewall (WAF) with detailed monitoring and logging to detect malicious attacks against your network infrastructure.
URL rule-based routes. Application Gateway allows you to route traffic based on URL patterns, source IP address and port to destination IP address and port. This is helpful when setting up a content delivery network.
Rewrite HTTP headers. You can add or remove information from the inbound and outbound HTTP headers of each request to enable important security scenarios, or scrub sensitive information such as server names.
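As a hedged sketch, an Application Gateway can be created with the Azure CLI roughly as follows (names and addresses are illustrative; it assumes an existing VNet, subnet, and public IP, and exact flags vary between CLI versions):

# Create a v2 Application Gateway with cookie affinity and two backend servers
az network application-gateway create \
  --name myAppGateway \
  --resource-group myResourceGroup \
  --sku Standard_v2 \
  --capacity 2 \
  --priority 100 \
  --vnet-name myVNet \
  --subnet myAGSubnet \
  --public-ip-address myAGPublicIP \
  --http-settings-cookie-based-affinity Enabled \
  --servers 10.0.2.4 10.0.2.5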
What is a Content Delivery Network (CDN)?
A content delivery network (CDN) is a distributed network of servers that can efficiently deliver web content to users. It is a way to get content to users in their local region to minimize latency. CDN can be hosted in Azure or any other location. You can cache content at strategically placed physical nodes across the world and provide better performance to end users. Typical usage scenarios include web applications containing multimedia content, a product launch event in a particular region, or any event where you expect a high-bandwidth requirement in a region.
DNS
DNS, or Domain Name System, is a way to map user-friendly names to their IP addresses. You can think of DNS as the phonebook of the internet.
How can you make your site, which is located in the United States, load faster for users located in Europe or Asia?
Network latency in Azure
Latency refers to the time it takes for data to travel over the network. Latency is typically measured in milliseconds.
Compare latency to bandwidth. Bandwidth refers to the amount of data that can fit on the connection. Latency refers to the time it takes for that data to reach its destination.
One way to reduce latency is to provide exact copies of your service in more than one region and use Traffic Manager to route users to the closest endpoint. Azure Traffic Manager uses the DNS server that’s closest to the user to direct user traffic to a globally distributed endpoint. Traffic Manager doesn’t see the traffic that’s passed between the client and server; rather, it directs the client web browser to a preferred endpoint. Traffic Manager can route traffic in a few different ways, such as to the endpoint with the lowest latency.
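A minimal Azure CLI sketch (profile and DNS names are illustrative): create a Performance-routed profile, then attach one endpoint per regional deployment:

# Profile that routes each user to the lowest-latency endpoint
az network traffic-manager profile create \
  --name myTMProfile \
  --resource-group myResourceGroup \
  --routing-method Performance \
  --unique-dns-name myapp-tm-demo

# Register one regional deployment as an endpoint
az network traffic-manager endpoint create \
  --name myEastUSEndpoint \
  --resource-group myResourceGroup \
  --profile-name myTMProfile \
  --type azureEndpoints \
  --target-resource-id <resource-id-of-the-regional-deployment>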
In this post, I will discuss how to migrate from MongoDB (in my case the database was hosted on AWS) to Azure Cosmos DB. I searched through different articles online on how to do that; the problem was that most of them discussed the same approach, which is online and using third-party software, and which was not applicable for me due to security reasons. Therefore I decided to post about it; maybe it will be useful for someone else.
Usually the easiest way is to use Azure Database Migration Service to perform an offline/online migration of databases from an on-premises or cloud instance of MongoDB to Azure Cosmos DB’s API for MongoDB.
There are some prerequisites before starting the migration; to learn more about them, read here. The same link explains the different migration approaches. However, before you start, you should create an instance of Azure Cosmos DB.
Preparation of target Cosmos DB account
Create an Azure Cosmos DB account and select MongoDB as the API. Pre-create your databases through the Azure portal.
The home page for Azure Cloud
From the search bar, just search for “Azure Cosmos DB”.
Azure Cosmos DB
You have to add a new account for the migration. Since we are migrating from MongoDB, the API should be “Azure Cosmos DB for MongoDB API”.
Create Cosmos DB
The target is now ready for migration, but we have to note the connection string so we can use it in our migration from AWS to Azure.
Get the MongoDB connection string to customize
In the Azure Cosmos DB blade, select the API.
In the left pane of the account blade, click Connection String.
The Connection String blade opens. It has all the information necessary to connect to the account by using a driver for MongoDB, including a preconstructed connection string.
Connection string
On MongoDB (the source server) you have to take a backup of the database. Once the backup is complete, there is no need to move it to another server. Mongo provides two ways of taking a backup: mongodump (a binary dump) or mongoexport, which generates a JSON file.
For example, using mongodump:
mongodump --host <hostname:port> --db <database-to-back-up> --collection <collection-name> --gzip --out /u01/user/
For mongoexport:
mongoexport --host <hostname:port> --db <database-to-back-up> --collection <collection-name> --out=<location-for-JSON-file>
My advice is to run these commands in the background, especially if the database is big, and generate a log for the background process so you can check it frequently.
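For example, a sketch of backgrounding the dump with nohup and capturing a log (the paths are illustrative):

# Run the dump in the background and keep a log you can check on
nohup mongodump --host <hostname:port> --db <database-name> --collection <collection-name> --gzip --out /u01/user/ > /u01/user/mongodump.log 2>&1 &
tail -f /u01/user/mongodump.log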
Run the restore/import command from the source server. Remember the connection string? Now we will use it to connect to Azure Cosmos DB. If you used mongodump, then you have to restore with mongorestore, like the example below:
mongorestore --host testserver.mongo.cosmos.azure.com --port 10255 -u testserver -p w3KQ5ZtJbjPwTmxa8nDzWhVYRuSe0BEOF8dROH6IUXq7rJgiinM3DCDeSWeEdcOIgyDuo4EQbrSngFS7kzVWlg== --db test --collection test /u01/user/notifications_service/user_notifications.bson.gz --gzip --ssl --sslAllowInvalidCertificates
Notice the following:
host : From Azure portal/connection string.
Port : From Azure portal/connection string.
Password : From Azure portal/connection string.
DB: The name of the database you want in Azure Cosmos DB; it will be created during the migration.
Collection: The name of the collection you want in Azure Cosmos DB; it will be created during the migration.
Location for the backup.
--gzip, because I compressed the backup.
The migration requires SSL authentication; otherwise it will fail.
Or, using mongoimport:
mongoimport --host testserver.mongo.cosmos.azure.com:10255 -u testserver -p w3KQ5ZtJbjPwTmxa8nDzWhVYRuSe0BEOF8dROH6IUXq7rJgiinM3DCDeSWeEdcOIgyDuo4EQbrSngFS7kzVWlg== --db test --collection test --ssl --sslAllowInvalidCertificates --type json --file /u01/dump/users_notifications/service_notifications.json
Once you run the command:
Note: if you are migrating huge databases, you need to increase the Cosmos DB throughput at the database level; once the migration finishes, return everything to the normal setting because of the cost.
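As an illustration, assuming throughput is provisioned at the database level, something like the following Azure CLI call could raise it before the migration and lower it again afterwards (the account and database names are placeholders):

# Temporarily raise provisioned throughput (RU/s) for the migration
az cosmosdb mongodb database throughput update \
  --account-name myCosmosAccount \
  --resource-group myResourceGroup \
  --name test \
  --throughput 10000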
This post provides steps for downloading and installing both Terraform and the Oracle Cloud Infrastructure Terraform provider.
Terraform Overview
Terraform is “infrastructure-as-code” software that allows you to define your infrastructure resources in files that you can persist, version, and share. These files describe the steps required to provision your infrastructure and maintain its desired state; Terraform then executes these steps and builds out the described infrastructure.
Infrastructure as code is becoming very popular. It allows you to describe a complete blueprint of a datacentre using a high-level configuration syntax that can be versioned and script-automated. Terraform works seamlessly with the major cloud vendors, including Oracle, AWS, MS Azure, and Google.
Download and Install Terraform
In this section, I will show and explain how to download and install Terraform on your laptop/PC host operating system. You can download it using the link below:
After you download Terraform, unzip it to whatever location you want to run it from. Then add that location to your OS PATH.
Windows: add the location to Path under Environment Variables.
Linux: export the PATH in your profile, as shown below.
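For example, on Linux (the install path below is illustrative):

# Append the Terraform directory to PATH in ~/.profile or ~/.bashrc
export PATH=$PATH:/u01/terraform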
You can verify the installation by opening a terminal (CMD on Windows) and checking the version:
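terraform -v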
Check Terraform commands
Download the OCI Terraform Provider
Prerequisites:-
An OCI user with credentials that have sufficient permission to execute a Terraform plan.
Required keys and Oracle Cloud Infrastructure IDs (OCIDs).
The correct Terraform binary file for your operating system.
Installing and Configuring the Terraform Provider
In my personal opinion about this section (whose title is the same as in the Oracle documentation), I found it misleading. I have worked with Terraform on different cloud vendors (AWS, Azure, and OCI), and Terraform will recognize the provider and automatically install it for you.
To do that, all you have to do is create a folder, then create a file “variables.tf” that only contains:
provider "oci" {<br>}
and run the terraform command:
terraform init
Now let’s walk through small examples of OCI and Terraform. First, you have to read “Creating Module” to understand the rest of this post here.
I have uploaded a small sample of OCI Terraform to my GitHub here, to help you understand how to use it instead of the GUI and make things easier for you. In this example I create an autonomous database, but without using the GUI.
To work with Terraform, you have to understand what the OCI provider is and what its parameters are.
The Terraform configuration resides in two files: variables.tf (which defines the provider oci) and main.tf (which defines the resource).
Terraform configuration (.tf) files have specific requirements, depending on the components that are defined in the file. For example, you might have your Terraform provider defined in one file (provider.tf), your variables defined in another (variables.tf), your data sources defined in yet another.
The provider definition relies on variables so that the configuration file itself does not contain sensitive data. Including sensitive data creates a security risk when exchanging or sharing configuration files.
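A minimal sketch of such a variables.tf (Terraform 0.12+ syntax; the actual values would live in a terraform.tfvars file or in TF_VAR_* environment variables, never in the configuration itself):

# Credentials are declared as variables, so this file holds no sensitive data.
variable "tenancy_ocid" {}
variable "user_ocid" {}
variable "fingerprint" {}
variable "private_key_path" {}
variable "region" {}

provider "oci" {
  tenancy_ocid     = var.tenancy_ocid
  user_ocid        = var.user_ocid
  fingerprint      = var.fingerprint
  private_key_path = var.private_key_path
  region           = var.region
}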
Variables in Terraform represent parameters for Terraform modules. In variable definitions, each block configures a single input variable, and each definition can take any or all of three optional arguments:
Type (Optional): Defines the variable type as one of three allowed values: string, list, and map. If this argument is not used, the variable type is inferred from the default; if no default is provided, the type is assumed to be string.
Default (Optional) : Sets the default value for the variable. If no default value is provided, the caller must provide a value or Terraform throws an error.
Description (Optional) : A human-readable description of the variable.
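For example, a single input variable using all three optional arguments (the values are only an illustration):

variable "region" {
  type        = string                                  # one of: string, list, map
  description = "The OCI region to create resources in" # human-readable description
  default     = "us-ashburn-1"                          # used when the caller passes no value
}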
Output variables provide a means to support Terraform end-user queries. This allows users to extract meaningful data from among the potentially massive amount of data associated with a complex infrastructure.
output "InstancePublicIPs" {
value = ["${oci_core_instance.TFInstance.*.public_ip}"]
}
Resource Configuration
Resources are components of your Oracle Cloud Infrastructure. They include everything from low-level components such as physical and virtual servers to higher-level components such as email and database providers, or your DNS records.
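As a hedged sketch of a resource block in main.tf, here is roughly what an autonomous database resource looks like (the variable name, display names, and password below are placeholders, not values from my repository):

resource "oci_database_autonomous_database" "example_adb" {
  compartment_id           = var.compartment_ocid # OCID of the target compartment
  db_name                  = "exampledb"
  display_name             = "example-adb"
  cpu_core_count           = 1
  data_storage_size_in_tbs = 1
  admin_password           = "ChangeMe#1234"      # placeholder; supply securely
}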
Many of you know that I have been working on different cloud vendors: Oracle Cloud Infrastructure, Amazon AWS, and MS Azure. I have had the chance to work hands-on with all of them and implement projects on each.
Now I am working on my second book, which will include different topics about the three of them, DevOps, a comparison between all three cloud vendors, and more.
During the lockdown, I was working to sharpen my skills and test them in the cloud, so I decided to go for Azure first, and trust me when I say “it’s one of the hardest exams I have ever taken”.
The exam itself is totally different from what I was used to: real-case scenarios for which you should know the Azure features, all of them, and how to configure them.
To become an “Azure Solutions Architect Expert”, there are conditions you have to go through; first, you need to pass two exams, AZ-301 & AZ-300:
AZ-301 Microsoft Azure Architect Design
AZ-300 Microsoft Azure Architect Technologies
Both are part of the requirements for Microsoft Certified: Azure Solutions Architect Expert. The first exam, AZ-301, covers designing secure, scalable, and reliable solutions. Candidates should have advanced experience and knowledge across various aspects of IT operations, including networking, virtualization, identity, security, business continuity, disaster recovery, data management, budgeting, and governance. This role requires managing how decisions in each area affect an overall solution. Candidates must be proficient in Azure administration, Azure development, and DevOps, and have expert-level skills in at least one of those domains.
Learning Objectives
Determine workload requirements
Design for identity and security
Design a data platform solution
Design a business continuity strategy
Design for deployment, migration, and integration
Design an infrastructure strategy
For the AZ-300
Learning Objectives
Deploy and configure Azure infrastructure
Implement workloads and security on Azure
Create and deploy apps on Azure
Implement Azure authentication and secure data
Develop for the cloud
After you complete both exams successfully, you will receive your badge. The duration of each exam is around 3 hours, and trust me, you will need it.