Comment by notnmeyer
i assume when people are talking about k8s complexity, it’s either more complicated scenarios, or they’re not talking about managed k8s.
even then though, it’s more that complex needs are complex and not so much that k8s is the thing driving the complexity.
if your primary complexity is k8s, you're either doing it wrong or chose the wrong tool.
> or they’re not talking about managed k8s
Bingo! Managed K8s on a hyperscaler is easy mode, and a godsend. I’m speaking from the cluster admin and bare metal perspectives, where it’s a frustrating exercise in micromanaging all these additional abstraction layers just to get the basic “managed” K8s functions in a reliable state.
If you’re using managed K8s, then don’t @ me about “It’S nOt CoMpLeX” because we’re not even in the same book, let alone the same chapter. Hypervisors can deploy to bare metal and shared storage without much additional configuration, but K8s requires defining PVs, storage classes, network layers, local DNS, local firewalls and routers, etc., most of which pre-1.20 K8s did not play nicely with out of the box. It’s gotten better these past two years for sure, but it’s still not as plug-and-play as something like ESXi+vSphere, RHEL+Cockpit, or PVE, and that’s a damn shame.
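To make the storage part concrete: on bare metal there’s no cloud provisioner handing out volumes, so even “give this pod some disk” means hand-authoring objects like the ones below. This is just a minimal sketch; the class name, node name, and path are made-up illustrations, not anything from a real cluster.

```yaml
# StorageClass with no dynamic provisioner -- you pre-create every PV yourself
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage                # hypothetical name
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
# One hand-made PersistentVolume, pinned to the node that physically owns the disk
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-node1-data                # hypothetical name
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/data            # hypothetical path on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node1              # hypothetical node name
```

And that’s before you’ve touched the CNI, local DNS, or the firewall rules mentioned above; a managed provider does all of this for you behind a checkbox.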
Which is why I’m always eager to test-drive something like Canine!
(EDIT: and unless you have an absolutely compelling reason to do bare-metal, self-hosted K8s from binaries, you should be on a managed K8s provider of some sort. Seriously, the cost savings aren’t worth the headaches for any org of size.)