Kubernetes (K8s) has won the container orchestrator battle. The victory is so complete that nobody talks about "container orchestrators" anymore. Just Kubernetes.
Even the competitors acknowledged the triumph: Docker announced K8s support alongside Swarm and in Docker for Windows, and Apache Mesos shipped its own Kubernetes integration.
Search interest in Kubernetes has grown steadily over the years, as more and more people try to understand exactly what K8s is about and, better yet, how they can take advantage of it.
So, what is Kubernetes? Simply put, K8s is a container orchestration platform capable of running different workloads in different environments with great flexibility. It lets an operator set a minimum and maximum number of instances for each application and define how the app should be upgraded. It enables zero-downtime deployments and quick recovery. And that's not all: Kubernetes is one of the most popular open source projects on GitHub, and it is full of hidden surprises you may not know about. Let's uncover some of them.
1. K8s Can Be Used on a Single Machine for Development Purposes
I know, Kubernetes is extremely complex, and many people are afraid to start with it. It is a huge project, with thousands of contributors and a new minor release every three months, and it runs production workloads at big companies around the world.
Despite all this, it's pretty easy to get started with Kubernetes on your local machine using a tool called Minikube. Minikube runs a single-node Kubernetes cluster designed for developers, and it comes with a basic implementation of a load balancer, Persistent Volumes, and even support for NVIDIA GPUs!
Minikube supports Windows 8 and up and can run on VirtualBox or Hyper-V. For an even more straightforward setup on Windows, Docker Desktop ships with its own single-node Kubernetes. For Linux and macOS, there are also easy installation packages available.
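As a quick sketch of what getting started looks like, once Minikube is installed, a local cluster is a couple of commands away (the driver name depends on your platform, and older Minikube releases used `--vm-driver` instead of `--driver`):

```shell
# Start a local single-node cluster (pick the driver for your platform)
minikube start --driver=virtualbox

# Verify the node came up
kubectl get nodes

# Open the Kubernetes dashboard in your browser
minikube dashboard

# Tear everything down when you are done
minikube delete
```

These commands assume Minikube, kubectl, and a supported hypervisor are already installed on your machine.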
2. You Can Run Serverless on Kubernetes and Forget About Vendor Lock-in
Is it possible to keep development as quick as possible without worrying about vendor lock-in? Using managed services doesn't necessarily doom you. But if you are willing to take control of your own databases and message brokers, you aren't bound to any vendor at all; just remember to have a good backup and upgrade policy to avoid problems down the road.
Kubernetes can be seen as a cloud operating system. It allocates cloud resources in the same structured manner no matter which cloud provider you use. Need a VM? Kubernetes will create one if your cluster needs more resources. Need to horizontally scale an application? K8s integrates deeply with your cloud provider and creates a load balancer for the service. In other words: you can migrate all the convenience provided by your cloud provider to a uniform platform that works across many other providers.
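As an illustration of that uniformity, asking for a cloud load balancer is a one-line declaration in a Service manifest; on a supported provider, K8s provisions the balancer for you. The names below are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web              # hypothetical service name
spec:
  type: LoadBalancer     # K8s asks the cloud provider for a load balancer
  selector:
    app: web             # routes traffic to pods labeled app=web
  ports:
    - port: 80           # external port
      targetPort: 8080   # port the application listens on
```

The same manifest works on AWS, Google Cloud, or Azure; only the load balancer provisioned behind the scenes differs.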
Managed services like databases and message brokers are an attempt by cloud providers to lure you into exclusive, proprietary services that are incompatible with other providers. Another trending example is serverless: every major provider offers its own solution (AWS, Google Cloud, Azure), each with a different technology stack and paradigm. Once you choose one, it may be hard to switch without rewriting much of your code.

That said, there are open source solutions, such as Kubeless, for deploying the serverless concept on Kubernetes. Kubeless builds on several K8s features — autoscaling, monitoring, and debugging capabilities — to provide serverless in an open source environment, and it supports many different technology stacks. Native serverless offerings are still better integrated with the rest of a vendor's cloud services, however, which can make development easier.
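To give a flavor of the portability, a Kubeless function is just another Kubernetes object described in YAML. The sketch below (function and handler names are illustrative) deploys a Python function that runs on any cluster with Kubeless installed:

```yaml
apiVersion: kubeless.io/v1beta1
kind: Function
metadata:
  name: hello                  # hypothetical function name
spec:
  runtime: python3.7           # one of several supported stacks
  handler: handler.hello       # file.function to invoke
  function: |
    def hello(event, context):
        # event carries the request payload; context, the runtime metadata
        return "Hello from Kubernetes!"
```

Because the function lives in your cluster rather than in a provider's proprietary runtime, moving it to another cloud means moving the cluster, not rewriting the code.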
3. You Can Set Up Hybrid Environments on K8s
A hybrid environment is one where computing resources are split between the cloud and on-premises (colocation) hardware sharing the same cluster. Generally, this offers two significant benefits: (1) you have extra resources for peak workloads, and (2) it eases cloud migration by reusing the company's existing computing resources.
Currently in alpha, Kubernetes Cluster Federation (KubeFed) is the K8s solution for this kind of environment. It lets you run a federation of Kubernetes clusters working together: each cluster is deployed in its own location and communicates with a host cluster that coordinates all of them, giving users a single, centralized API.
Just because it's available doesn't mean it's easy. First, it requires a Kubernetes cluster in your on-premises data center; then, you need to configure KubeFed in both clusters. Running a K8s cluster is not trivial, and it should be maintained by qualified professionals to avoid production outages: there are dozens of services involved, demanding advanced networking and Unix systems skills.
KubeFed can also be used to set up a multi-cloud environment. That can make your life a little easier, since you don't need to maintain an on-premises cluster. However, keep in mind the possible bandwidth costs, depending on your infrastructure topology.
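As a sketch of the setup, joining a member cluster to the federation is done with the `kubefedctl join` command. The kubeconfig context names below (`onprem` and `cloud`) are illustrative; use the ones from your own kubeconfig:

```shell
# Join the on-premises cluster (context "onprem") to the federation
# whose control plane runs in the cloud cluster (context "cloud")
kubefedctl join onprem \
  --cluster-context onprem \
  --host-cluster-context cloud
```

After the join, federated resources created through the host cluster's API can be propagated to the on-premises member as well.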
4. You Can Set Up Persistent Volumes for Stateful Applications
We can classify most applications into one of two categories: stateless and stateful. Stateless applications, such as web frontends and optimization algorithms, can be shut down at any time without losing important information. Stateful applications, such as databases and message brokers, require persistent storage for their data.
Stateless applications are easy to scale horizontally (increase the number of instances of the same software), as you can turn them on and off without worrying about stored information. Stateful applications, however, need special care to avoid data loss. Horizontal scaling of software that relies on storage is specific to each product: some commercial databases support it only in their most expensive editions (Microsoft SQL Server, Oracle Database), while open source software may offer it with extra configuration and hard work (PostgreSQL, Redis).
Persistent Volumes are the K8s solution for handling persistence. Volumes can be provisioned in two ways: statically or dynamically. Static volumes are created before the application and are harder to maintain, since the operator must know the application's needs in advance. With dynamic provisioning, the cluster allocates storage on demand. Dynamic volumes cover most cases, but a static volume can be useful when an application has specific I/O needs, like a relational database.

Persistent Volumes are backed by your existing physical infrastructure, so the options depend on the storage providers available: each cloud provider offers at least one persistence solution, and there are third-party solutions as well.
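As a minimal sketch of dynamic provisioning, an application only declares a PersistentVolumeClaim and the cluster creates the underlying disk. The claim name is hypothetical, and storage class names vary by provider:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data                 # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce             # mountable read-write by a single node
  resources:
    requests:
      storage: 10Gi             # size of the disk to provision
  storageClassName: standard    # provider-specific class; triggers dynamic provisioning
```

A pod then references the claim by name in its `volumes` section, and Kubernetes binds it to a freshly provisioned disk on whatever storage backend the cluster offers.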
5. You Can Work With Windows Nodes for Windows Containers
Introduced in Windows Server 2016, Windows containers are now also available for development on Windows 10 (since version 1607). They are like Linux containers but run Windows applications, even legacy software. They can work the Linux way, sharing the host kernel via namespaces, or in a more secure mode relying on Hyper-V (an isolated kernel).
Kubernetes has supported Windows containers since version 1.14, launched in March 2019. Beyond creating a new node pool with Windows machines, you don't need to configure anything. There are some caveats, however: Hyper-V isolation is not available, there are no privileged containers, you can't mount individual files (only volumes), among others.
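To sketch how scheduling onto the Windows node pool works, a Deployment uses a node selector on the well-known `kubernetes.io/os` label. The deployment name is hypothetical; the image is Microsoft's IIS-on-Server-Core container, used here as an example Windows workload:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-app                  # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: legacy-app
  template:
    metadata:
      labels:
        app: legacy-app
    spec:
      nodeSelector:
        kubernetes.io/os: windows   # schedule only onto Windows nodes
      containers:
        - name: legacy-app
          image: mcr.microsoft.com/windows/servercore/iis   # example Windows image
```

Linux workloads in the same cluster are unaffected; the selector simply keeps Windows pods off Linux nodes and vice versa.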
Currently, Azure, AWS, and Google Cloud offer this option in preview.
Kubernetes is vast and still holds many secrets. As an open source project, anyone can add new (and sometimes obscure) features. There are many other elements of K8s that may help you in your particular quest to embrace the container orchestrator. Just take some time, install it on your machine, and play around.