Okay, I love to post free examples/projects on my GitHub from time to time. This time I chose Docker and Kubernetes; the project idea is very nice and easy to implement.
What does this project do?
1. Create an application that connects to a database, reads some data, and returns this data upon an HTTP request. This can be a simple web app that reads a ‘hello world’ string from the MySQL database.
2. Run a database app. The data volume should be persistent.
3. The application from step 1 needs to discover the database from step 2 using Kubernetes-native features. Database credentials should NOT be hardcoded in the application or Helm chart code (see the sketch after this list).
4. The application should be accessible from outside of Kubernetes.
5. Create a Helm chart which implements all these steps.
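As a quick sketch of how steps 3 and 5 can fit together: the credentials live in a Kubernetes Secret created outside the chart, and the app discovers MySQL through its Service DNS name. The secret, chart, and value names below are hypothetical, not the exact code from the repository:

# Create the database credentials out of band, so they are never hardcoded
# in the application or in the Helm chart:
kubectl create secret generic mysql-credentials \
  --from-literal=username=demo \
  --from-literal=password='S3cr3tPassw0rd'

# Install the chart; the deployment references the Secret via secretKeyRef,
# and the app reaches the database at its Service DNS name
# (for example mysql.default.svc.cluster.local):
helm install hello-app ./hello-chart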
I chose Java as the programming language because the Spring Boot framework is already well established and easy to use.
Please follow the README file and everything will work fine without any issues. If you have any questions, comment below and I will answer.
As an important step in agile development, continuous integration is designed to maintain high quality while accelerating product iteration. Every time the code is updated, an automatic test is performed to verify the code and its functionality. The code can only be delivered and deployed after it passes the automatic test. This post describes how to combine Jenkins, one of the most popular integration tools, with Alibaba Cloud Container Service to realize automatic testing and automatic building and pushing of images.
Deploying Jenkins Applications and the Slave Nodes
1. Create a Jenkins orchestration template.
Create a new template and create the orchestration based on the following content.
2. Use the template to create Jenkins applications and slave nodes.
You can also directly use a Jenkins sample template provided by Alibaba Cloud Container Service to create Jenkins applications and slave nodes.
3. After the successful creation, Jenkins applications and slave nodes will be displayed in the service list.
4. After opening the access endpoint provided by Container Service, you can use the Jenkins application you just deployed.
Realizing Automatic Test and Automatic Build and Push of Image
Configure the slave container as the slave node of the Jenkins application.
Open the Jenkins application and enter the System Settings interface. Select Manage Node > Create Node, and configure corresponding parameters. See the figure below.
Note: the Label is the unique identifier of the slave. The slave container and the Jenkins container both run on the Alibaba Cloud platform, so you can fill in a container node IP address that is inaccessible from the Internet to isolate the test environment.
When adding the Credential, use the jenkins account and password (the initial password is jenkins) from the Dockerfile used to create the slave-nodejs image. The image's Dockerfile address is HERE.
1. Create a project to implement the automatic test.
Create an item and choose to build a free-style software project.
Enter the project name and select a node for running the project. In this example, select the slave-nodejs-ut node created above.
Configure the source code management and the code branch. In this example, GitHub is used to manage the source code.
Configure the build trigger. In this example, project execution is triggered automatically by combining GitHub Webhooks and services.
Add the Jenkins service hook to GitHub to implement automatic triggering.
Click the Settings tab on the GitHub project homepage, then click Webhooks & services > Add service and select Jenkins (Git plugin). Enter ${Jenkins IP}/github-webhook/ in the Jenkins hook URL dialog box.
1. Select the nodejs-demo application just created, and create the trigger.
Add a line to the shell script you wrote in Realizing Automatic Test and Automatic Build and Push of Image. The address is the trigger link given by the trigger created above.
Change the Command in the example from Realizing Automatic Test and Automatic Build and Push of Image as follows:
cd chapter2
docker build -t registry.aliyuncs.com/qinyujia-test/nodejs-demo .
docker login -u ${yourAccount} -p ${yourPassword} registry.aliyuncs.com
docker push registry.aliyuncs.com/qinyujia-test/nodejs-demo
curl 'https://cs.console.aliyun.com/hook/trigger?triggerUrl=***==&secret=***'
After pushing the image, Jenkins automatically triggers redeployment of the nodejs-demo application.
Configure Email Notification for the Results
If you want to send the unit test or image configuration results to relevant developers or project execution initiators through email, perform the following configurations.
On the Jenkins homepage, click System Management > System Settings, and configure a Jenkins system administrator email.
Install the Extended Email Notification plugin, configure SMTP server and other relevant information, and set the default recipient list. See the figure below.
The above example shows the parameter settings of the Jenkins application system. The following example shows the relevant configurations for Jenkins projects whose results are to be pushed through email.
1. Add post-building operation steps in the Jenkins project, select Editable Email Notification, and enter a recipient list.
According to Wikipedia, serverless computing is a cloud computing execution model in which the cloud provider runs the server and dynamically manages the allocation of machine resources. Pricing is based on the actual amount of resources consumed by an application, rather than on pre-purchased units of capacity.
Today I will show you an example of how to create a serverless website, this time not using Amazon AWS, Azure, or OCI, but Alibaba Cloud.
Create a Function Compute Service
Go to the console page and click through to Function Compute.
Click the add button beside Services.
In the Service slide out, give your service a name, an optional description, and then slide open the Advanced Settings.
In Advanced Settings you can grant access for Functions to the Internet, to VPC resources, and you can attach storage and a log service to a Function. You can also configure roles.
For our tutorial, we will need Internet access so make sure this configuration is on.
We will leave VPC and Log Configs as they are.
In the Role Config section, select Create New Role, and in the dropdown list pick AliyunOSSReadOnlyAccess as we will be accessing our static webpages from an Object Storage Service bucket.
Click Authorize.
You will see a summary of the Role you created.
Click Confirm Authorization Policy.
You have successfully added the Role to the Service.
Click OK.
You will see the details of the Function Compute Service you just created.
Now let’s create a Function in the Service. Click the add button next to Functions.
You will see the Create Function process. The first part of the process is Function Template.
There are many Function Templates available, including an empty Function for writing your own bespoke Functions.
Alibaba Cloud-supplied Template Functions are very useful as they have relevant method invocation and demo code for getting started quickly with Function Compute.
Let's choose the flask-web Function written in Python 2.7.
Click Select.
We are now at the Configure Triggers section of creating a Function.
Select HTTP Trigger from the dropdown list. Give the Trigger a name and choose Authorization details (anonymous does not require authorization).
Choose your HTTP methods and click Next. We are going to build a simple web-form application so we will need both the GET and POST HTTP methods.
Now we arrive at the Configure Function Settings.
Give the Function a name then scroll down to Code details.
We’ll leave the supplied code for now. Scroll down to below the code sample.
You will see Environment Variable input options and Runtime Environment details.
Click Next.
Click Next at Configure Function Permissions.
Verify the Configuration details and click Create.
You will arrive at the Function’s IDE. Here you can enter new code, edit the code directly, upload code folders, run, test, and fix your code.
Scroll down.
Copy the URL as we will need to add this to our static webpages so they can connect to our Function Compute Service and Function.
Set Up and Configure an OSS Bucket
Click through to Object Storage Service on the Products page.
If you haven’t yet activated Object Storage Service, go ahead and activate it. In the OSS console, click Create Bucket.
Choose a name for the OSS Bucket and pick the region – you cannot change the region later. Select the Storage Class – you also cannot change this later.
We have selected Public Read for the Access Control List.
When you’re ready, click OK.
You will see the Overview page for your bucket. Make a note of the public Internet URL.
In the Files tab, upload your static web files.
I uploaded a simple index.html homepage and a background picture.
In Basic Settings, click Configure to configure your Static Pages.
Add the homepage details and click Save.
Now go to a new browser window and access the OSS URL you saved earlier.
Back in the Function Compute console, you can now test the flask-app paths directly from the code.
We already tested index.html with no Path variable. Next, we test the app route signin with GET and check the Headers and status code.
The signin page code is working correctly. You can also check the Body to make sure the correct HTML will render on the page. Notice that because I entered the path variable, signin is appended to the URL.
Of course, any errors you encounter will show up in the Logs section for easy debugging.
Now, let’s test this page on the Internet.
If you get an error here, implement a soft link for the page in OSS. Go to the OSS bucket, click the More dropdown for the HTML file in question, and choose Set soft link.
Give the link a name and click OK.
A link file will appear in the list of static files and you will now be able to access the page online with the relevant soft link and it will render as above.
Back in Function Compute, we can test the POST method in the console with the correct username and password details in the same way.
Add the POST variables to the form upload section in the Body tab.
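If you prefer the command line, the same GET and POST tests can be run with curl against the Trigger URL copied earlier. The URL below is a placeholder for your own endpoint, and the form values are just examples:

# GET the signin page through the HTTP Trigger:
curl 'https://<your-function-endpoint>/signin'

# POST the form fields, as entered in the console's Body tab:
curl -X POST -d 'username=admin&password=secret' 'https://<your-function-endpoint>/signin'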
If you have ever worked with the cloud and configured subnets, you know there are public and private subnets, each hosting a different number of servers. Have you ever wondered how to access the environment without associating the VM with a public IP? In this post I will show you how.
The figure shows a simple example of this. In this post you will learn how to connect to an instance that is hosted in a private subnet.
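The usual pattern is to hop through a bastion (jump) host that sits in the public subnet. A minimal sketch with OpenSSH, using hypothetical user names and documentation IP addresses:

# Reach the private instance by jumping through the bastion's public IP
# (-J is OpenSSH's ProxyJump option, available since OpenSSH 7.3):
ssh -i ~/.ssh/id_rsa -J user@203.0.113.10 user@10.0.2.15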
This blog post is one of those that took a long time and a lot of energy; completing it took me around ten days, to make sure that I covered most of the available services and made it readable. Be aware that the services can change while you are reading this post; if you have any comments, or want to add something to this post, please send me an email using the contact us page or comment below.
I am writing this post to share the services of different cloud providers and a comparison between them; it will show the various names each provider uses for its services.
Earlier we used to store our data on HDDs or USB flash drives. Cloud computing services have replaced such hard drive technology. A cloud computing service is nothing but the delivery of services like storage, databases, servers, networking, and software through the Internet.
Cloud computing is moving fast. In 2020 the cloud is more mature, going multi-cloud, and likely to become more focused on verticals, with a sales ground war as the leading vendors battle for market share.
Notes:
GCP: Google Cloud Platform
OCI: Oracle Cloud Infrastructure
None: does not necessarily mean the service is unavailable from that cloud provider, but that I did not look deeper into it or have not used it before.
Just a quick post to show and share which services each cloud provider offers. Note that the services can change even as we speak, and this is not a complete list of services; it only shows the basic ones.
A load balancer distributes traffic evenly among the systems in a pool. A load balancer can help you achieve both high availability and resiliency.
Say you start by adding additional VMs, each configured identically, to each tier. The idea is to have additional systems ready in case one goes down or is serving too many users at the same time.
When you manually configure typical load balancer software on a virtual machine, there's a downside: you now have an additional system that you need to maintain. If your load balancer goes down or needs routine maintenance, you're back to your original problem.
Azure Load Balancer is a load balancer service from Microsoft that takes care of that maintenance for you. Load Balancer supports inbound and outbound scenarios, provides low latency and high throughput, and scales up to millions of flows for all Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) applications. You can use Load Balancer with incoming internet traffic, internal traffic across Azure services, port forwarding for specific traffic, or outbound connectivity for VMs in your virtual network.
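For reference, a minimal sketch of creating one with the Azure CLI (the resource group and resource names are hypothetical):

# Create a Standard-SKU load balancer in an existing resource group:
az network lb create \
  --resource-group myResourceGroup \
  --name myLoadBalancer \
  --sku Standard \
  --frontend-ip-name myFrontEnd \
  --backend-pool-name myBackEndPool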
Azure Application Gateway
If all your traffic is HTTP, a potentially better option is to use Azure Application Gateway. Application Gateway is a load balancer designed for web applications. It uses Azure Load Balancer at the transport level (TCP) and applies sophisticated URL-based routing rules to support several advanced scenarios.
Benefits
Cookie affinity. Useful when you want to keep a user session on the same backend server.
SSL termination. Application Gateway can manage your SSL certificates and pass unencrypted traffic to the backend servers to avoid encryption/decryption overhead. It also supports full end-to-end encryption for applications that require that.
Web application firewall. Application Gateway supports a sophisticated web application firewall (WAF) with detailed monitoring and logging to detect malicious attacks against your network infrastructure.
URL rule-based routes. Application Gateway allows you to route traffic based on URL patterns, source IP address and port to destination IP address and port. This is helpful when setting up a content delivery network.
Rewrite HTTP headers. You can add or remove information from the inbound and outbound HTTP headers of each request to enable important security scenarios, or scrub sensitive information such as server names.
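As with Load Balancer, an Application Gateway can also be created from the Azure CLI. A minimal sketch with hypothetical names, assuming the virtual network, subnet, and public IP already exist:

az network application-gateway create \
  --resource-group myResourceGroup \
  --name myAppGateway \
  --sku Standard_v2 \
  --public-ip-address myPublicIP \
  --vnet-name myVNet \
  --subnet appGwSubnet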
What is a Content Delivery Network (CDN)?
A content delivery network (CDN) is a distributed network of servers that can efficiently deliver web content to users. It is a way to get content to users in their local region to minimize latency. CDN can be hosted in Azure or any other location. You can cache content at strategically placed physical nodes across the world and provide better performance to end users. Typical usage scenarios include web applications containing multimedia content, a product launch event in a particular region, or any event where you expect a high-bandwidth requirement in a region.
DNS
DNS, or Domain Name System, is a way to map user-friendly names to their IP addresses. You can think of DNS as the phonebook of the internet.
How can you make your site, which is located in the United States, load faster for users located in Europe or Asia?
Network Latency in Azure
Latency refers to the time it takes for data to travel over the network. Latency is typically measured in milliseconds.
Compare latency to bandwidth. Bandwidth refers to the amount of data that can fit on the connection. Latency refers to the time it takes for that data to reach its destination.
One way to reduce latency is to provide exact copies of your service in more than one region and route users to the closest endpoint. This is what Azure Traffic Manager does: it uses the DNS server that's closest to the user to direct user traffic to a globally distributed endpoint. Traffic Manager doesn't see the traffic that's passed between the client and server; rather, it directs the client web browser to a preferred endpoint. Traffic Manager can route traffic in a few different ways, such as to the endpoint with the lowest latency.
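A performance-routed profile can be sketched with the Azure CLI (the resource group, profile name, and DNS label are hypothetical):

# Create a Traffic Manager profile that routes users to the lowest-latency endpoint:
az network traffic-manager profile create \
  --resource-group myResourceGroup \
  --name myTrafficManagerProfile \
  --routing-method Performance \
  --unique-dns-name myapp-demo-dns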
This post provides steps for downloading and installing both Terraform and the Oracle Cloud Infrastructure Terraform provider.
Terraform Overview
Terraform is “infrastructure-as-code” software that allows you to define your infrastructure resources in files that you can persist, version, and share. These files describe the steps required to provision your infrastructure and maintain its desired state; Terraform then executes these steps and builds out the described infrastructure.
Infrastructure as Code is becoming very popular. It allows you to describe a complete blueprint of a datacentre using a high-level configuration syntax that can be versioned and script-automated. Terraform can seamlessly work with major cloud vendors, including Oracle, AWS, Microsoft Azure, and Google.
Download and Install Terraform
In this section, I will show and explain how to download and install Terraform on your laptop/PC host operating system. You can download it using the link below:
After you download Terraform, unzip it to whatever location you want to run it from. Then add that location to your OS PATH.
Windows: add the Terraform location to Path under environment variables.
Linux: export the PATH in your profile.
You can verify the installation by opening a terminal and checking the version:
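For example (the exact version output will differ on your machine):

terraform version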
Check Terraform commands
Download the OCI Terraform Provider
Prerequisites:-
OCI user credentials that have sufficient permissions to execute a Terraform plan.
Required keys and Oracle Cloud Infrastructure IDs (OCIDs).
The correct Terraform binary file for your operating system
Installing and Configuring the Terraform Provider
In my personal opinion, this section (the title is the same as in the Oracle documentation) is misleading. I have worked with Terraform on different cloud vendors (AWS, Azure, and OCI), and Terraform will recognize the provider and install it for you automatically.
To do that, all you have to do is create a folder, then create a file "variables.tf" that only contains:
provider "oci" {
}
and run the Terraform command:
terraform init
Now let's talk through small examples of OCI and Terraform. First, you have to read “Creating Module” to understand the rest of this post here.
I will upload a small sample of OCI Terraform to my GitHub here, so you can understand how we can use it instead of the GUI; I hope it makes things easy for you.
The example on my GitHub uses Terraform with the OCI provider. In this example I will create an autonomous database, but without using the GUI.
To work with Terraform, you have to understand what the OCI provider is and what its parameters are.
The Terraform configuration resides in two files: variables.tf (which defines the provider oci) and main.tf (which defines the resource).
Terraform configuration (.tf) files have specific requirements, depending on the components that are defined in the file. For example, you might have your Terraform provider defined in one file (provider.tf), your variables defined in another (variables.tf), and your data sources defined in yet another.
The provider definition relies on variables so that the configuration file itself does not contain sensitive data. Including sensitive data creates a security risk when exchanging or sharing configuration files.
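A sketch of what such a provider block typically looks like, assuming the standard OCI provider arguments and variables declared elsewhere (the values come from your tenancy, not from the configuration file):

provider "oci" {
  tenancy_ocid     = "${var.tenancy_ocid}"
  user_ocid        = "${var.user_ocid}"
  fingerprint      = "${var.fingerprint}"
  private_key_path = "${var.private_key_path}"
  region           = "${var.region}"
}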
Variables in Terraform represent parameters for Terraform modules. In variable definitions, each block configures a single input variable, and each definition can take any or all of three optional arguments:
Type (Optional): Defines the variable type as one of three allowed values: string, list, and map. If this argument is not used, the variable type is inferred based on default. If no default is provided, the type is assumed to be string.
Default (Optional): Sets the default value for the variable. If no default value is provided, the caller must provide a value or Terraform throws an error.
Description (Optional): A human-readable description of the variable.
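For example, a hypothetical region variable using all three arguments (written in the same older Terraform syntax as the other examples in this post):

variable "region" {
  type        = "string"
  default     = "us-ashburn-1"
  description = "The OCI region in which to create resources"
}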
Output variables provide a means to support Terraform end-user queries. This allows users to extract meaningful data from among the potentially massive amount of data associated with a complex infrastructure.
output "InstancePublicIPs" {
value = ["${oci_core_instance.TFInstance.*.public_ip}"]
}
Resource Configuration
Resources are components of your Oracle Cloud Infrastructure. These resources include everything from low-level components such as physical and virtual servers, to higher-level components such as email and database providers, and your DNS records.
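As a sketch of the resource syntax, here is roughly what the autonomous database from the GitHub example looks like; the argument values below are placeholders, not the exact code:

resource "oci_database_autonomous_database" "TFAutonomousDatabase" {
  compartment_id           = "${var.compartment_ocid}"
  db_name                  = "TFADB"
  admin_password           = "${var.adb_admin_password}"
  cpu_core_count           = 1
  data_storage_size_in_tbs = 1
}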
To create your first server/VM on the Azure cloud, you have several ways to do it:
Azure Resource Manager
Azure PowerShell
Azure CLI
Azure REST API
Azure Client SDK
Azure VM Extensions
Azure Automation Services
The Azure portal is the easiest way to create resources such as VMs. I will describe each of these options.
The first way is the Portal, and it's very simple:
Click on the Create a resource option in the top-left corner of the portal page.
Use the Search the Marketplace search bar to find “Ubuntu Server” for example.
Press Create, and a new page will open.
Configure the VM by entering the name, the region, the subscription, and the availability options.
There are several other tabs you can explore to see the settings you can influence during the VM creation. Once you’re finished exploring, click Review + create to review and validate the settings.
On the review screen, Azure will validate your settings. You might need to supply some additional information based on the requirements of the image creator.
This was the first way to create the VM, and it is also considered the easiest one.
Azure Resource Manager
Assuming you want to create a copy of a VM with the same settings, you could create a VM image, upload it to Azure, and reference it as the basis for your new VM. Alternatively, Azure provides you with the option to create a template from which to create an exact copy of a VM.
You can do this after creating the VM: Settings –> Export template.
Azure PowerShell
Azure PowerShell is ideal for one-off interactive tasks and/or the automation of repeated tasks. Note that PowerShell is a cross-platform shell that provides services like the shell window and command parsing.
The Azure CLI is Microsoft's cross-platform command-line tool for managing Azure resources such as virtual machines and disks from the command line. It's available for macOS, Linux, and Windows. Other cloud vendors have equivalents: for Amazon it's called the AWS CLI, for Oracle it's called the OCI CLI, and for Google it's called the gcloud CLI.
az vm create --resource-group TestResourceGroup --name test-wp1-eus-vm --image win2016datacenter --admin-username osama --admin-password anything
Programmatic (APIs)
This is not my area of expertise, so I will not go deep into it. But since we were talking about the Azure CLI and PowerShell: you can also use the Azure REST API and different programming languages to work with Azure. I did this with Python for AWS using the Boto3 module; I posted about it before here.
The same can be done for Azure or any Cloud vendor.
Azure VM Extensions
Azure VM extensions are small applications that allow you to configure and automate tasks on Azure VMs after initial deployment. Azure VM extensions can be run with the Azure CLI, PowerShell, Azure Resource Manager templates, and the Azure portal.