In anticipation of the KubeCon + CloudNativeCon conference that will take place in Valencia, Spain, on May 16-20 (and virtually), we wanted to share with you some key takeaways from six recent Kubernetes articles that we found particularly interesting.
1. The Top 5 Kubernetes Configuration Mistakes—and How to Avoid Them by Komodor
This article describes how to avoid five common syntax, provisioning, and resource management misconfigurations that can cause cluster-wide performance, availability, and stability issues. For example, poorly configured operators for third-party integrations can consume limited cluster resources unchecked, causing runtime errors such as OOM (out-of-memory) kills. Similarly, routing all ingress traffic through a single container can take down the cluster during a traffic spike.
Our main takeaway is that these and other configuration mistakes must be taken into account during the design, development, and testing stages in order to avoid runtime performance issues.
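One of the misconfigurations the article covers is missing resource requests and limits. As a minimal sketch of what explicit resource configuration looks like (the pod and container names here are illustrative, not from the article):

```yaml
# Illustrative pod spec with explicit resource requests and limits.
apiVersion: v1
kind: Pod
metadata:
  name: web-app
spec:
  containers:
    - name: app-container
      image: nginx:1.25
      resources:
        requests:          # what the scheduler reserves for this container
          cpu: "250m"
          memory: "128Mi"
        limits:            # hard caps; exceeding the memory limit triggers an OOM kill
          cpu: "500m"
          memory: "256Mi"
```

Setting requests and limits during development makes resource contention visible in testing rather than in production.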
2. The Ultimate Kubectl Commands Cheat Sheet by Komodor
This article is an invaluable resource on using the kubectl command line tool effectively to interact with Kubernetes clusters. The various kubectl options and filters are critical for getting or switching contexts, obtaining the names of containers in a running pod, creating or getting values from secrets, testing RBAC rules, and more.
Our main takeaway is that complete mastery of the kubectl command is an essential Kubernetes development skill. In addition to this article, be sure to reference the official kubectl page.
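A few of the staples from the categories above, as a hedged sample (the context, pod, secret, and user names are placeholders, not the article's own examples):

```shell
kubectl config get-contexts                    # list available contexts
kubectl config use-context my-cluster          # switch to another context
kubectl get pod my-pod -o jsonpath='{.spec.containers[*].name}'   # container names in a pod
kubectl create secret generic db-creds --from-literal=password=s3cr3t
kubectl get secret db-creds -o jsonpath='{.data.password}' | base64 --decode
kubectl auth can-i delete pods --as=dev-user   # test an RBAC rule
```

Each of these commands requires access to a running cluster and the appropriate permissions.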
3. Kubernetes Capacity Planning: How to Rightsize the Requests of Your Cluster by Sysdig
Too much capacity is wasteful and needlessly costly. Too little capacity can cause performance bottlenecks. This article provides important insights on the art and science of rightsizing Kubernetes capacity. Our main takeaways are:
- Make sure to have Prometheus as an add-on for tracking cluster resource usage metrics.
- Use Kubernetes limits and requests whenever you can.
- Size your clusters based on the resources your pods are expected to request and actually use.
- Utilize cloud-native autoscaling features if you’re deploying on public clouds.
Although not mentioned explicitly in the article, we would also add the importance of utilizing Kubernetes’ horizontal and vertical pod autoscaling features (HPA and VPA) to rightsize your clusters.
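As a sketch of what an HPA looks like in practice, a minimal autoscaling/v2 manifest (the target deployment name and thresholds are illustrative assumptions):

```yaml
# Illustrative HorizontalPodAutoscaler; "web-app" is a hypothetical deployment.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU crosses 70%
```

Note that the HPA relies on resource requests being set on the target pods, which ties back to the requests-and-limits advice above.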
4. Kubernetes 1.24 – What’s New? by Sysdig
Kubernetes 1.24 was released on May 3. This article summarizes the most notable new, evolving, and deprecated features across a number of key categories: APIs, apps, auth, network, nodes, scheduling, and storage.
Our main takeaway is that, as a Kubernetes developer, it’s important that you stay on top of where the Kubernetes project is headed and what its timeline is moving forward. In addition to this article, the official Kubernetes release notes and the project’s release blog are helpful resources.
5. Rancher vs. Kubernetes: It’s Not Either Or by Kubecost
Kubernetes and Rancher are both important open-source container management projects, each with a large community of users and contributors. This article starts by summarizing the key features of each project:
Kubernetes:
- Cloud provider-agnostic (easy migration)
- Easy scaling (versus VM-hosted apps)
- Configuration parameters to optimize resource usage
- Self-healing in case of node failure
- Environment consistency (private cloud, public cloud, on-premises, hybrid, etc.)

Rancher:
- Easy cluster provisioning and import
- “Projects” for better grouping of namespaces
- Extended RBAC control (per project, across clusters)
- Easy workload deployment, without updating YAML files
- Advanced monitoring and alerting, pushing cluster logs to different backends
- Extensive Kubernetes app catalog
The main takeaway is that the two are complementary. Kubernetes focuses on orchestrating resources within a single cluster, while Rancher eases Kubernetes cluster management at scale. So, for example, using Rancher to deploy Kubecost across a Rancher project provides end-to-end visibility into and more granular management of Kubernetes cluster costs, as well as cluster health and efficiency.
We would also like to point out that Rancher is being embraced by cloud providers for managing cloud-native Kubernetes clusters. See AWS’ reference deployment Rancher for Amazon EKS.
6. Kubernetes kOps: Step-By-Step Example & Alternatives by Kubecost
Kubernetes kOps is an open-source command line tool for automating:
- Configuration, maintenance, and management of Kubernetes clusters
- Provisioning of the cloud infrastructure to run them
Although the article points out that there are alternatives to kOps (Kubespray, eksctl, and kubeadm), kOps is the only tool that is both provider-agnostic (or at least will be soon) and able to support infrastructure provisioning. It then goes on to provide a hands-on example of how to use kOps to set up a Kubernetes cluster in AWS.
Our main takeaway is that tools like kOps are an important part of an organization’s Kubernetes stack, making it easier to manage and orchestrate Kubernetes clusters at scale.
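The cluster-creation flow the article walks through can be sketched roughly as follows; the S3 bucket, cluster name, and instance sizes below are placeholders, not the article's actual values:

```shell
# kOps keeps cluster state in an external store, typically an S3 bucket.
export KOPS_STATE_STORE=s3://my-kops-state-bucket

# Generate the cluster configuration.
kops create cluster \
  --name=dev.example.com \
  --zones=us-east-1a \
  --node-count=2 \
  --node-size=t3.medium

# Provision the AWS infrastructure, then wait until the cluster is healthy.
kops update cluster --name=dev.example.com --yes
kops validate cluster --wait 10m

# Tear everything down when finished.
kops delete cluster --name=dev.example.com --yes
```

Running this requires AWS credentials, an existing S3 state bucket, and a DNS setup (or a `.k8s.local` gossip-based cluster name).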
The Kubernetes ecosystem is continuously evolving, and we here at IOD make it our business to keep on top of emerging innovations, trends, and tips. In this article, we shared with you our key takeaways on how to: avoid common misconfigurations, fully leverage the kubectl command, rightsize Kubernetes capacity, and incorporate both kOps and Rancher into your Kubernetes stack. We also looked at what’s new (and what’s gone) in the latest version released earlier this month.
Tap into IOD’s extensive talent network of K8s, DevOps, and cloud experts to create content that speaks to devs. Get started today.