You need to develop and deploy a Python app that writes a new file to S3 on every execution. These files need to be retained for only 24 hours.
The content of the file is not important, but add the date and time as a prefix to your file names.
The bucket names should be the following for QA and Staging, respectively:
qa-FIRSTNAME-LASTNAME-platform-challenge
staging-FIRSTNAME-LASTNAME-platform-challenge
The app will run as a Docker container in a Kubernetes cluster every 5 minutes. There is one Namespace for QA and a different Namespace for Staging in the cluster. You don't need to provide tests, but you need to be sure the app will work.
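A minimal sketch of such an app, assuming boto3 with credentials provided by the environment; the BUCKET_NAME environment variable and the object key layout are my own choices, not part of the challenge text:

```python
import os
from datetime import datetime, timezone

import boto3


def main():
    # Bucket name is injected per environment (QA / Staging), e.g. from the
    # Kubernetes manifest: qa-FIRSTNAME-LASTNAME-platform-challenge
    bucket = os.environ["BUCKET_NAME"]

    # Prefix the object key with the current date and time, as required
    timestamp = datetime.now(timezone.utc).strftime("%Y-%m-%d_%H-%M-%S")
    key = f"{timestamp}_challenge.txt"

    s3 = boto3.client("s3")
    s3.put_object(Bucket=bucket, Key=key, Body=b"content is not important")
    print(f"Uploaded s3://{bucket}/{key}")


if __name__ == "__main__":
    main()
```

Running every 5 minutes maps naturally to a Kubernetes CronJob with `schedule: "*/5 * * * *"` in each namespace, and the 24-hour retention is usually handled by an S3 lifecycle rule that expires objects after one day, rather than by the app itself.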
Helm is the first application package manager running atop Kubernetes. It allows you to describe the application structure through convenient Helm charts and to manage it with simple commands. It represents a huge shift in the way server-side applications are defined, stored and managed.
Helm Charts provide “push button” deployment and deletion of apps, making adoption and development of Kubernetes apps easier for those with little container or microservices experience. Apps deployed from Helm Charts can then be leveraged together to meet a business need, such as CI/CD or blogging platforms.
Install Helm
Use curl to create a local copy of the Helm install script
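For example (a sketch assuming Helm 3; the script URL is the one from the Helm docs):

```sh
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
```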
After installing a chart with helm install, copy the commands listed under the NOTES section of the output, and then paste and run them. They should return the private IP address and port number of your application.
Flannel is an open-source virtual network project, originally developed by CoreOS, designed for Kubernetes. Each host in a flannel cluster runs an agent called flanneld. It assigns each host a subnet, which acts as the IP address pool for containers running on that host.
Continuing the Docker container blog post series, I chose to move on to Kubernetes and discuss it in more detail, starting with configuration and installation.
This configuration covers the on-premises case, and for that you need at least two servers:
| Server | Purpose / description |
| --- | --- |
| The Master | The node which controls and manages a set of worker nodes (workloads runtime) and resembles a cluster in Kubernetes. A master node has the following components to help manage worker nodes: … Kube-Controller-Manager, which runs a set of controllers for the running cluster. |
| The worker node | A Node is a worker machine in Kubernetes and may be either a virtual or a physical machine, depending on the cluster. … Each Node is managed by the Master. A Node can have multiple pods, and the Kubernetes master automatically handles scheduling the pods across the Nodes in the cluster. |
Configure the Kubernetes Cluster
On all nodes, add the Kubernetes repo to /etc/yum.repos.d:
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
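The body of that heredoc isn't shown above; a full version, assuming the community-owned package repository (the baseurl and the v1.30 version are my assumptions and change over time), would look roughly like this:

```sh
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
```

You would then install the tools on each node with `sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes` and enable the kubelet with `sudo systemctl enable --now kubelet`.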
After running kubeadm init on the master, copy the kubeadm join command from its output, then paste and run it in your worker nodes' terminal windows.
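For context, those steps usually look roughly like this (a sketch; the flannel manifest URL and the token/hash values are placeholders, not taken from the original post):

```sh
# On the master: initialize the control plane with a pod CIDR that matches flannel's default
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Still on the master: install the flannel network add-on
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# On each worker: the join command printed by kubeadm init looks like this
sudo kubeadm join <MASTER_IP>:6443 --token <TOKEN> \
    --discovery-token-ca-cert-hash sha256:<HASH>
```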
From the worker nodes, verify that the cluster components are running:
docker ps -a
From Node 1 (Master), check the status of the nodes
kubectl get nodes
Now Kubernetes is installed, but the cluster is empty, with no pods or services. What comes next is up to you and can change depending on your application type, but the following is just for testing, to show the reader how it goes.
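As a quick smoke test (my own example, not tied to any particular application), you could deploy nginx and expose it:

```sh
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --type=NodePort --port=80
kubectl get pods,svc
```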
Once you can access Grafana, do the following to connect it to Prometheus.
Adding a Data Source
In the Grafana Home Dashboard, click the Add data source icon. For Name, type "Prometheus". Click into the Type field and select Prometheus from the dropdown. Under URL, enter http://localhost:9090 (but we're going to change this in a moment). Copy the private IP address of your server, then replace "localhost" in the URL with the private IP address. It should look like this: http://PRIVATE_IP_ADDRESS:9090.
Add the Docker Dashboard to Grafana
Click the plus sign (+) on the left side of the Grafana interface, and click Import. Then open the JSON file uploaded to my GitHub here, and copy the contents of the file to your clipboard.
We now have our Grafana visualization. In the upper right corner, click on Refresh every 5m and select Last 5 minutes.
Using Prometheus, you can monitor application metrics like throughput (TPS) and response times of the Kafka load generator (Kafka producer), Kafka consumer, and Cassandra client. Node exporter can be used to monitor host hardware and kernel metrics.
Create a prometheus.yml File
In root’s home directory, create prometheus.yml
vi prometheus.yml
We've got to stick a few configuration lines in here. When we're done, it should look something like this:
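Something along these lines, assuming Prometheus scrapes itself and the cAdvisor container from the compose stack used in the next step (the job names and the `cadvisor:8080` target are my assumptions about the compose service names):

```yaml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'cadvisor'
    static_configs:
      - targets: ['cadvisor:8080']
```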
In order to stand up the environment, we’ll run this
docker-compose up -d
And to see if everything stood up properly, let’s run a quick docker ps. The output should show four containers: prometheus, cadvisor, nginx, and redis.
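The original compose file isn't shown in this post, but a sketch that would bring up those four containers might look like this (image tags and port mappings are my assumptions):

```yaml
version: '3'
services:
  prometheus:
    image: prom/prometheus:latest
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"

  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
    ports:
      - "8080:8080"

  nginx:
    image: nginx:latest
    ports:
      - "80:80"

  redis:
    image: redis:latest
    ports:
      - "6379:6379"
```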
Let's take a look in a web browser as well, browsing to it using the correct port number: http://<IP_ADDRESS>:9090/graph/
Investigating cAdvisor
In a browser, navigate to http://<IP_ADDRESS>:8080/containers/. Take a peek around, then change the URL to one of our container names (like nginx) so we're at http://<IP_ADDRESS>:8080/docker/nginx/.
If we run docker stats, we're going to get some output that looks a lot like docker ps, but this stays open and continuously reports what's going on with the various aspects (CPU and memory usage, etc.) of our containers.
Docker includes multiple logging mechanisms to help you get information from running containers and services. These mechanisms are called logging drivers. Each Docker daemon has a default logging driver, which each container uses unless you configure it to use a different logging driver, or “log-driver” for short.
Steps:
Configure Docker to use syslog
vim /etc/rsyslog.conf
In the file editor, uncomment the two lines under `Provides UDP syslog reception` by removing `#`.
#$ModLoad imudp
#$UDPServerRun 514
Then start rsyslog:
systemctl start rsyslog
Now that syslog is running, let’s configure Docker to use syslog as the default logging driver. We’ll do this by creating a file called daemon.json
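A minimal /etc/docker/daemon.json for this, assuming rsyslog is listening on UDP port 514 on the same host (the address is an assumption; adjust it for your setup):

```json
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "udp://127.0.0.1:514"
  }
}
```

After saving the file, restart the Docker daemon (for example with systemctl restart docker) so newly started containers pick up the syslog driver.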
I have been posting about Docker a lot recently because these blog posts have been sitting in my drafts for a long time, waiting for someone to finish them 😅 and I finally had the chance to do that.
I recently posted about storing container data in AWS S3; this time it's the same topic, but for Google Cloud and how to configure that.