Comment by stego-tech a day ago

I dig the concept! K8s is an amazing technology hampered by overwhelming complexity (flashback vibes to the early days of x86 virtualization), and thumbing through your literature it seems you’ve got a good grasp of the fundamentals everyone needs in order to leverage K8s in more scenarios - especially areas where PVE, Microcloud, or Cockpit might end up being more popular within (namely self-hosting).

I’ve got a spare N100 NUC at home that’s languishing with an unfinished Microcloud install; thinking of yanking that off and giving Canine a try instead!

czhu12 21 hours ago

The part I found to be a little unwieldy at times was helm. When you apply updates to the values.yaml file, it's unpredictable which ones will take effect on a live release and which ones only apply at install time. Also, some helm installations deploy a massive number of services, and it's confusing which ones are safe to restart when.

But, I've always found core kubernetes to be a delight to work with, especially for stateless jobs.
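To make the ambiguity concrete, here is a generic sketch of the upgrade workflow in question (the chart, release name, and values are placeholders, not from the thread):

```shell
# Install a chart with overrides from a values file (names are placeholders)
helm install my-release bitnami/postgresql -f values.yaml

# Later, edit values.yaml and upgrade. Some changes reconcile in place;
# others touch immutable fields (e.g. a StatefulSet's volumeClaimTemplates)
# and quietly require deleting and recreating the resource to take effect.
helm upgrade my-release bitnami/postgresql -f values.yaml

# Check which values are actually in effect for the release:
helm get values my-release
```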

  • jitl 7 hours ago

    Helm is annoying. I’m thankful it makes software easier to install but it’s like being thankful for npm.

cyberpunk 21 hours ago

i really don’t know where this complexity thing comes from anymore. maybe back in the day when a k8s cluster was a 2 hour kubespray run or something, but it’s now a single yaml file and an ssh key if you use something like rke.
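For reference, a minimal sketch of the setup being described, assuming RKE2 (the token and hostname are placeholders; the file path and keys follow the RKE2 docs):

```yaml
# /etc/rancher/rke2/config.yaml on the first server node
token: my-shared-secret        # placeholder shared secret
tls-san:
  - k8s.example.com            # placeholder hostname for the API cert
```

```shell
# Install and start the server (per the RKE2 quickstart)
curl -sfL https://get.rke2.io | sh -
systemctl enable --now rke2-server
```

Additional nodes join by pointing `server:` at the first node in their own config.yaml and reusing the token.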

  • hombre_fatal 19 hours ago

    You are so used to the idiosyncrasies of k8s that you are probably blind to them. And you are probably so experienced with the k8s stack that you can easily debug issues so you discount them.

    Not long ago, I was using Google Kubernetes Engine when DNS started failing inside the k8s cluster on a routine deploy that didn't touch the k8s config.

    I hacked on it for quite some time before I gave up and decided to start a whole new cluster. At which point I decided to migrate to Linode if I was going to go through the trouble. It was pretty sobering.

    Kubernetes has many moving parts that move inside your part of the stack. That's one of the things that makes it complex compared to things like Heroku or Google Cloud Run where the moving parts run in the provider's side of the stack.

    It's also complex because it does a lot compared to pushing a container somewhere. You might be used to it, but that doesn't mean it's not complex.

  • xp84 20 hours ago

    A few years ago, I set up a $40 k8s "cluster" which consisted of a couple of nodes, at DigitalOcean, and I set it up using this tutorial: https://www.digitalocean.com/community/tutorials/how-to-auto...

    I was able to create a new service and deploy it with a couple of simple, ~8-line ymls and the cluster takes care of setting up DNS on a subdomain of my main domain, wiring up Lets Encrypt, and deploying the container. Deploying the latest version of my built container image was one kubectl command. I loved it.
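    Roughly what one of those small manifests looks like (the app name, image, and registry are placeholders; the tutorial's ingress and cert-manager pieces are omitted):

    ```yaml
    # deployment.yaml -- placeholders throughout
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
            - name: myapp
              image: registry.example.com/myapp:v1
    ```

    And the one-command deploy of a newly built image:

    ```shell
    kubectl set image deployment/myapp myapp=registry.example.com/myapp:v2
    ```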

  • vanillax 21 hours ago

    I was gonna echo this. K8s is rather easy to set up. Certificates, domains, and CI/CD (flux/argo) are where some complexity comes in. If anyone wants to learn more, I do have a video with what I think is the most straightforward yet production-capable setup for hosting at home.

  • notnmeyer 21 hours ago

    i assume when people are talking about k8s complexity, it’s either more complicated scenarios, or they’re not talking about managed k8s.

    even then though, it’s more that complex needs are complex and not so much that k8s is the thing driving the complexity.

    if your primary complexity is k8s you either are doing it wrong or chose the wrong tool.

    • stego-tech 21 hours ago

      > or they’re not talking about managed k8s

      Bingo! Managed K8s on a hyperscaler is easy mode, and a godsend. I’m speaking from the cluster admin and bare metal perspectives, where it’s a frustrating exercise in micromanaging all these additional abstraction layers just to get the basic “managed” K8s functions in a reliable state.

      If you’re using managed K8s, then don’t @ me about “It’S nOt CoMpLeX” because we’re not even in the same book, let alone the same chapter. Hypervisors can deploy to bare metal and shared storage without much additional configuration, but K8s requires defining PVs, storage classes, network layers, local DNS, local firewalls and routers, etc., most of which K8s pre-1.20 did not play nicely with out of the box. It’s gotten better these past two years for sure, but it’s still not as plug-and-play as something like ESXi+vSphere/RHEL+Cockpit/PVE, and that’s a damn shame.

      Hence why I’m always eager to test-drive something like Canine!

      (EDIT: and unless you absolutely have a reason to do bare metal self-hosted K8s from binaries you should absolutely be on a managed K8s cluster provider of some sort. Seriously, the headaches aren’t worth the cost savings for any org of size)
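      As a concrete example of the extra plumbing bare metal demands: on managed K8s a default StorageClass just exists, while on bare metal you define the storage layer yourself. A sketch using static local volumes (names, paths, and the node name are placeholders):

      ```yaml
      # On bare metal there is no dynamic provisioner unless you install one,
      # so a StorageClass with static (no-op) provisioning is a common start.
      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: local-storage
      provisioner: kubernetes.io/no-provisioner
      volumeBindingMode: WaitForFirstConsumer
      ---
      # Every disk must then be declared as a PV by hand, pinned to its node.
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: pv-node1-data
      spec:
        capacity:
          storage: 10Gi
        accessModes:
          - ReadWriteOnce
        storageClassName: local-storage
        local:
          path: /mnt/disks/data        # placeholder path
        nodeAffinity:
          required:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                      - node1          # placeholder node name
      ```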

      • esseph 19 hours ago

        I agree with all of this except for your bottom edit.

        Nutanix and others are helping a lot in this area. Also really like Talos and hope they keep growing.