Kubernetes: Orchestrate Your Containers
Kubernetes
Kubernetes is an open-source container management tool, also called a container orchestration tool, that automates container deployment, scaling, descaling, and load balancing.
Kubernetes can group any number of containers into one logical unit, making them easier to manage and deploy. It works well across public cloud, hybrid, and on-premises environments.
Kubernetes automates the operational tasks of container management and includes built-in commands for deploying applications, rolling out changes, scaling applications up and down to fit changing needs, monitoring applications, and more, all of which makes applications easier to manage.
Kubernetes provides a robust platform for managing containerized applications at scale. Its benefits include improved scalability, high availability, resource efficiency, self-healing, portability across environments, and support for DevOps and DevSecOps practices in the cloud.
Setting Up a Kubernetes Cluster
Step 1 - Cluster up and running
If you haven’t already, first install Minikube. Check that it is properly installed by running the minikube version command:
minikube version
Once Minikube is installed, start the cluster by running the minikube start command:
minikube start
Great! You now have a running Kubernetes cluster in your terminal. Minikube started a virtual environment for you, and a Kubernetes cluster is now running in that environment.
Step 2 - Cluster version
To interact with Kubernetes during this tutorial we’ll use the command-line interface, kubectl. We’ll explain kubectl in detail in the next modules; for now, we’re just going to look at some cluster information. To check whether kubectl is installed, run the kubectl version command:
kubectl version
OK, kubectl is configured and we can see the versions of both the client and the server. The client version is the kubectl version; the server version is the Kubernetes version installed on the control plane (master) node. You can also see details about the build.
Step 3 - Cluster details
Let’s view the cluster details. We’ll do that by running kubectl cluster-info:
kubectl cluster-info
During this tutorial, we’ll be focusing on the command line for deploying and exploring our application. To view the nodes in the cluster, run the kubectl get nodes command:
kubectl get nodes
This command shows all nodes that can be used to host our applications. Right now we have only one node, and we can see that its status is Ready (it is ready to accept applications for deployment).
Basic Kubernetes Objects
Pods
A Kubernetes pod is a collection of one or more Linux® containers and is the smallest unit of a Kubernetes application.
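As a rough sketch (the pod name and container image below are placeholders, not values from this tutorial), a minimal Pod manifest looks like this:

apiVersion: v1
kind: Pod
metadata:
  name: sample-pod        # placeholder name
spec:
  containers:
    - name: app
      image: nginx:1.25   # placeholder image

In practice you rarely create bare Pods; Deployments, described next, create and manage Pods for you.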
Deployments
A Kubernetes Deployment tells Kubernetes how to create or modify instances of the pods that hold a containerized application.
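Here is a sketch of what a deployment.yaml for this tutorial might contain. The sample-node-app name matches the deployment scaled later in this tutorial, while the image, container port, and starting replica count are assumptions to replace with your own values.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-node-app
spec:
  replicas: 2                      # assumed starting replica count
  selector:
    matchLabels:
      app: sample-node-app
  template:
    metadata:
      labels:
        app: sample-node-app       # must match the selector above
    spec:
      containers:
        - name: sample-node-app
          image: <your-app-image>  # replace with your application's image
          ports:
            - containerPort: 3000  # replace with the port your app listens on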
Services
A Service is a method for exposing a network application that is running as one or more Pods in your cluster.
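And a possible service.yaml that exposes the Deployment above; the service name and ports are assumptions, and type LoadBalancer matches the access step later in this tutorial.

apiVersion: v1
kind: Service
metadata:
  name: sample-node-app-service    # placeholder service name
spec:
  type: LoadBalancer
  selector:
    app: sample-node-app           # must match the pod labels in the Deployment
  ports:
    - port: 80                     # port the Service listens on
      targetPort: 3000             # must match the containerPort above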
Create the Pod, Deployment, and Service
Use kubectl apply to apply the above configurations and create the pods, deployment, and service.
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
Alternatively, you can put both of these configs in a single manifest.yaml file, separated by a --- line, and run the apply command just once.
kubectl apply -f manifest.yaml
Make sure you have executed the minikube start command before applying any files, as you need to start a cluster to run your pods.
You can also check the pod logs with the kubectl logs command.
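For example, list your pods to find a pod’s name, then pass that name to kubectl logs:

kubectl get pods
kubectl logs <pod-name>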
Minikube also provides a dashboard that shows all your pods, deployments, and services in a web UI. Run the minikube dashboard command and the dashboard will open in your browser.
Access Your Application
To access your service in the Minikube cluster, run the following command to get the endpoint at which your LoadBalancer service is exposed:
minikube service <service-name> --url
Open the same endpoint in a browser or access it via Postman.
Depending on your Minikube driver, the printed endpoint may use a localhost (127.0.0.1) address or the Minikube VM’s IP; use whichever URL the command prints.
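For example, assuming the service is named sample-node-app-service as in the sketch above, you can fetch the URL and request it with curl. On some drivers minikube service keeps a tunnel open; in that case, run it in a separate terminal and curl the URL it prints.

curl "$(minikube service sample-node-app-service --url)"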
Scaling and Managing Applications
Your application is up and running. With increasing demand, you may need to scale up your application to ensure optimal performance.
To scale up your application, increase the number of replicas of your deployment by running the following command:
kubectl scale deployment <deployment-name> --replicas=<desired-number>
Specify the desired number of replicas, and Kubernetes will scale your pods accordingly. For example:
kubectl scale deployment sample-node-app --replicas=4
After running the above command, run kubectl get deployments to see that the number of replicas has gone up.
Alternatively, you can modify the Deployment YAML file by changing the number of replicas and running the kubectl apply command again.
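For example, assuming the deployment.yaml sketched earlier, change the replicas field:

spec:
  replicas: 4   # was 2 in the sketch above

and then re-apply the file:

kubectl apply -f deployment.yaml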
If you no longer need a large number of pods and want to scale down your application, just run the same command and specify a smaller number of replicas.
kubectl scale deployment sample-node-app --replicas=2