You Don't Need Kubernetes


I've been sitting on my homelab for over a year, constantly adding to it and expanding its scope. What started as a simple Raspberry Pi cluster for learning purposes during COVID expanded into a mix of x86 NUCs and RPi 4s with 8GB of RAM. The NAS morphed into an iSCSI target, then an NFS share, then was considered for MinIO or Ceph duty before falling back on the elegant simplicity of NFS. The NUCs, the Pis, they sat dormant as goalposts moved, perceived needs changed, and perfect became the enemy of good.


I recently had an interview for a Principal Engineer role with a small investment firm. $200k salary, hybrid work schedule, a small team to manage, outcomes to own. It was everything I wanted in my next role, all the career growth I could ask for with a technology estate that was suitable to the company's needs, if not necessarily cutting edge. I was incredibly okay with this, and made it clear this was exactly what I was looking for in my next opportunity.

I'm pretty sure I did not get the role.


Kubernetes was the guilty party in both of these recent scenarios. My insistence on remaining at the cutting edge of infrastructure engineering hindered my homelab deployment and likely cost me the role I'd wanted. In the case of the former, its release cadence, constant partner evolution, and lack of a firm spec meant I was always playing catch-up in my homelab, trying to find the right combination of OS, control plane, taints, ingress controllers, network overlays, and storage mediums to fucking work consistently. With regards to the job opportunity, it created the appearance that what I really wanted was to work on the cutting edge, and that anything less would be "boring" - despite the fact that Kubernetes is mentioned exactly once on my resume, and in the context of a Proof-of-Concept project for the private cloud at that.

Funny enough, it was the job interview that opened my eyes to what I'd lost by pursuing Kubernetes so voraciously in my personal time and professional career. Opportunities for personal growth, for enjoying the fruits of my labor, for experimentation, all lost because I was trying to untangle Cilium, or figure out why a pod that worked yesterday went TITSUP today with no other change. Weeks spent figuring out microk8s, and Talos, and k3s, and bare-metal K8s. Months studying for CKA and CKS, of reading Xe's (amazing) blog for homelab ideas and creative workarounds, of learning from their mistakes and successes both.

I wasn't enjoying the work anymore, and I didn't feel as if I was learning anything valuable beyond the reality that Kubernetes isn't meant for two NUCs, a smattering of Pis lacking RTCs, and a NAS that can't do PXE boots. My takeaway from two years digging into the guts of this thing and trying to make it work at home in a way that let me explore, and experiment, and learn, was that I didn't actually need it at all to do what I wanted.

I learned that Kubernetes is an amazing tool. It's also one I didn't need at home.

Going Fishing with Nuclear Weapons

Kubernetes is the prime example of an "enterprise-first" technology. If you're running anything more substantial than Minikube or kind, you're looking at an investment of three physical boxes just for HA Control Planes, plus another set of boxes for worker nodes.

"But you can just do a single Control Plane and a Worker Node! Heck, you can taint the workload to run on Control Plane nodes as well!"

Yeah, you could. Except then you've thrown out everything that makes Kubernetes, well, Kubernetes. You've created an overly complicated Docker node with manual failover capabilities. You've tripled the amount of YAML you have to write to accomplish the same thing as a Docker Compose file, and you get the "benefit" of wrangling features - namely High Availability - that a single control plane node can't actually deliver. Plus, now you have etcd to back up and maintain (with its own tooling, etcdctl, that isn't tied to kubectl for some - likely asinine - reason).
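For a sense of scale: the entire spec for a small self-hosted service fits in about a dozen lines of Compose. This is a hypothetical example - the nginx image and paths are placeholders, not anything from my actual homelab:

```yaml
# compose.yaml - the whole deployment spec for one service.
# The Kubernetes equivalent needs, at minimum, a Deployment, a
# Service, and an Ingress/HTTPRoute - each its own manifest.
services:
  web:
    image: nginx:1.27          # placeholder image
    ports:
      - "8080:80"
    restart: unless-stopped    # the daemon handles restarts; no operator required
    volumes:
      - web-data:/usr/share/nginx/html
volumes:
  web-data:
```

One `docker compose up -d` and the workload is running - no scheduler, control plane, or CNI in the loop.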

In the real world of Kubernetes, however, you need three Control Plane nodes for HA. You also need some sort of HA etcd deployment or an external database with its own backup and maintenance schedule. Then you have to define the network overlay to use, which is its own ball of wax, and then you have to define the storage layers for the nodes to use so that pods can spin up and access storage regardless of the worker node they run on. Only after that's done can you dig into the Gateway API, which supersedes Ingress controllers for L4 and L7 routing, and set up the necessary infrastructure to even begin serving workloads and services.
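To make that bootstrap work concrete: with kubeadm, an HA control plane starts from a ClusterConfiguration along these lines. The load balancer name, version, and subnets below are hypothetical stand-ins - this is the shape of the config, not a tested cluster build:

```yaml
# kubeadm ClusterConfiguration sketch for a 3-node HA control plane.
# cp-lb.example.lan is a hypothetical load balancer fronting all
# three control plane nodes - you get to build that part yourself.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.30.0
controlPlaneEndpoint: "cp-lb.example.lan:6443"
networking:
  podSubnet: "10.244.0.0/16"   # must match whichever CNI overlay you choose
etcd:
  local:
    dataDir: /var/lib/etcd     # stacked etcd; back it up separately via etcdctl
```

And that only stands up the control plane - the overlay, the storage classes, and the Gateway API routing are all still separate installs on top.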

It's a lot of fucking work! It has to be a lot of fucking work, because Kubernetes is built for gargantuan deployments. We're talking hundreds, thousands, millions of containers covering global datacenter infrastructure whose node counts are measured in the double or triple digits. It's the king of abstraction layers, letting users consume (almost) every single layer of the OSI model as code.

Again: Kubernetes is an amazing tool. I am not dissing the amazingly complex hoops this stuff has to jump through just to function properly, nor the incredibly talented folks who build and maintain these massive orchestration systems. I continue to aspire to join their ranks, because it absolutely is a key part of infrastructure going forward.

It's just the equivalent of going fishing with a thermonuclear device: total overkill for my needs.

My Needs, Your Needs

Before I was ready to completely relinquish my grip on the idea of Kubernetes at home, I decided to approach this like the professional infrastructure engineer (and aspiring architect) I am and conduct a personal case study. What did I want to accomplish, what tools were at my disposal, what were the pros and cons of each approach, and which would serve my interests the best over the long haul while minimizing further investment relative to returns? The key takeaways for my goals were:

  • Defining infrastructure as code, rather than bespoke builds
  • Creating infrastructure that's resilient, such that any outage or failure of the underlying hardware or operating systems doesn't tank the entire footprint with it or require long recovery times
  • Creating IaC that's portable, so I can deploy it in private clouds like my homelab or public clouds like AWS/Azure/GCP/Vultr/Hetzner/etc
  • Relying on open source components where possible
  • Minimizing time spent supporting the underlying infrastructure, so I can spend more time experimenting, writing code, or learning new skills

Once I had those down, I realized that I just wanted Docker. Well, not Docker per se, but a daemon-based container runtime that lets my workloads come up and down via a defined code spec (Compose, in this case), atop an Operating System I can extend to VMs if need be. This will be its own post at a later date (spoiler alert: I've cranked out two years of work in two weeks since I gave up my quest for at-home K8s, no AI required), but treating my own life like a professional case study worked brilliantly at removing distractions and focusing my intent.
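Part of what sold me on Compose for the portability goal is its file-merging behavior: a base spec stays environment-agnostic while per-environment differences live in override files. A minimal sketch, with hypothetical service and file names:

```yaml
# compose.yaml - portable base spec, identical on the homelab or a cloud VM
services:
  app:
    image: ghcr.io/example/app:latest   # placeholder image
    restart: unless-stopped
    environment:
      - DATA_DIR=/data
    volumes:
      - app-data:/data
volumes:
  app-data:
```

Running `docker compose -f compose.yaml -f compose.homelab.yaml up -d` merges the homelab-specific overrides (ports, bind mounts, resource limits) over the base, so the same spec deploys anywhere a compliant daemon runs.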

Which is also why I've generally recommended against Kubernetes in my professional career, I realized. It's not because it's a bad tool, but because organizations often want it implemented as a shiny object rather than a toolset. Not a single developer or product team has ever asked any of my IT teams over the years to build a Kubernetes cluster for them, because they can build their own atop standard VMs or with public cloud control planes. Kubernetes is already there; it's just that nobody really wants to use it enough to warrant the immense support requirements of its implementation.

And who can blame them? For corporate IT needs, Kubernetes is very much the same sort of overkill as it was in my homelab. The ERP servers run in precariously balanced VMs on their own dedicated cluster, blessed are they in the eyes of stakeholders; they need not Kubernetes. The directory controllers similarly gain nothing from a Kubernetes deployment, nor do print servers, application servers, or even the odd containerized deployment. The HA offered by the hypervisor management layer is enough for private clouds, and public clouds are...their own ball of wax, yeah, but it's still cheaper to deal with compute autoscaling groups than a full-fat K8s control plane. Hell, I've watched a telemetry engineer demonstrate a Grafana/Prometheus/OpenTelemetry metrics system running on ECS for $500 a month that managed nearly half a million endpoints, and read his case study on how moving it to Kubernetes would multiply that bill a hundred-fold for absolutely no technical benefit whatsoever.

Kubernetes is an amazing tool. Most folks will never need it.

And that's okay!

Learning to Let Go

Since I gave up my K8s quest, I've found my passion for infrastructure engineering reignited. Diving into the Compose spec has let me stand up ~80% of my service backlog in a matter of weeks, and I'm happier for it. If things go pear-shaped, I can re-run the compose files on another OCI-compliant daemon and be up and running in minutes - more than enough time when I'm not serving anything public or mission-critical from my home.

That's the reality of being an engineer that I think gets overlooked by folks constantly chasing the new shiny thing, who live release-to-release like adrenaline junkies looking for their next fix - or the organizations demanding that attitude in their engineers. Technology was never about constantly being on the cutting edge; it was about practical and efficient solutions to real problems. For 99% of organizations out there, VMs are enough. Kubernetes - amazingly powerful, incredibly portable, infinitely customizable - was never meant for those entities, at least not in its current form. It's still growing, evolving, and changing in ways that make it impractical for a lot of use cases.

If you've got to serve an app at scale, and have the support in funding and staff to make that happen? Kubernetes is the best tool at your disposal.

For everyone else, VMs and Containers are fine.