Comment by srcreigh 7 hours ago
Can somebody explain the whole proxmox thing? I haven’t used it, I use k3s.
I don’t get why people use VMs for stuff when there’s docker.
Thanks!
> Primarily, docker isn't isolation. Where isolation is important, VMs are just better.
Better how? What isolation are we talking about, home-lab? Multi-tenant environments for every family member?
> Some software only runs in VMs.
Like OS kernels and software not compiled for host OS?
> Passing through displays, USB devices, PCI devices, network interfaces etc. often works better with a VM than with Docker.
Insane take because we're talking about binding something from /dev/ to a namespace, which is much easier and faster than any VM pass-through even if your CPU has features for that pass-through.
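For what it's worth, the /dev binding really is a one-liner. A rough sketch with the Docker SDK for Python, where the image name, container name, and device path are just placeholders:

```python
import docker

client = docker.from_env()

# Bind a host serial device (e.g. a Zigbee dongle) straight into the container's
# namespace; no IOMMU groups or VM passthrough involved. The image, name, and
# device path below are placeholder examples.
client.containers.run(
    "ghcr.io/home-assistant/home-assistant:stable",
    devices=["/dev/ttyUSB0:/dev/ttyUSB0:rwm"],
    detach=True,
    name="homeassistant",
)
```

The plain CLI equivalent is `docker run --device /dev/ttyUSB0 ...`.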
> plex has a dedicated network port and storage device, which is simpler to set up this way.
Same, but my plex is just a systemd unit and my *arrs are in an nspawn container, also on its own port (only because I want to be able to access them without authentication on the overlay network).
> I don't want plex, home assistant, or the backup orchestrator to be affected by issues relating to my other services / k8s.
Hosting Plex in k8s is objectively wrong, so you're right there. I don't see what adding proxmox into the picture gets you over just running those services as systemd units. If they run on the same node you're not getting any fault tolerance, just adding another thing that can go wrong (proxmox).
Maybe my use case is abnormal, but I allocate the majority of my resources to a primary VM where I run everything, including containers, etc. But by running Proxmox I can now back up my entire server and even transfer it across the network. If I ever have some software to try out, I can do it in a new VM rather than on my main host. I can also ‘reboot’ my ‘server’ without actually rebooting the real computer, which meant less fan noise and interruption back when I used an actual rack-mounted server at home.
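For context, that whole-server backup can also be scripted against the Proxmox API instead of clicked through the GUI. A rough sketch using the proxmoxer Python library; the host, credentials, node name, VM ID, and storage name are all placeholders:

```python
from proxmoxer import ProxmoxAPI

# Placeholders: adjust host, credentials, node name, VM ID, and storage.
proxmox = ProxmoxAPI("pve.example.lan", user="root@pam",
                     password="changeme", verify_ssl=False)

# Kick off a snapshot-mode vzdump backup of VM 100 to the 'local' storage.
# The call returns a task ID (UPID) that can be polled for completion.
upid = proxmox.nodes("pve").vzdump.post(vmid=100, storage="local",
                                        mode="snapshot", compress="zstd")
print(upid)
```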
For my home archive NAS boxes, Proxmox is just a Debian distro with selective (mostly virtualization) things more up to date, and has ZFS and a web UI out of the box.
I disable the high-availability stuff I don’t use, which otherwise just grinds away at the disks because of all the syncing it does.
It has quirks to work through, but at this point dealing with it is fairly simple and repeatable for me, and most importantly it’s low enough effort/mental overhead for my few machines without having to go full orchestration, or worse, NixOS.
Proxmox is essentially a clustered hypervisor (a KVM wrapper, really). It has some overlap with K8s (via containers), but is simpler for what I do and has some very nice features, namely around backups, redundancy/HA, hardware passthrough, and the fact that it has a usable GUI.
I also use K8s at work, so this is a nice contrast to use something else for my home lab experiments. And tbh, I often find that if I want something done (or something breaks), muscle-memory-Linux-things come back to me a lot quicker than some obscure K8s incantation, but I suppose that's just my personal bias.
Several of my VMs (which are very different from containers, obviously - even though I believe VMs on K8s _can_ be a thing...) run (multiple) docker containers.
"Why would I need virtualization when I have Kubernetes".. sounds like a someone who has never had to update the K8s control plane and had everything go completely wrong. If it happens to you, you will be begging for an HVM with real snapshots.
Makes backups of the KVM VM running docker easy too, right?
Personally: Proxmox/VMs are great if you'd like to separate physical HW. In my case, virtualized TrueNAS means I can give it a whole SATA controller and keep this as an isolated storage machine.
Whatever uses that storage usually runs in Docker inside an LXC container.
If I need something more isolated (think public-facing cloudflare), that's a separate docker on another network routed through another OPNSense VM.
Desktop - a VM where I passed through a whole GPU and a USB hub.
Best part - it all runs on fairly low-power HW (<20W idle for the NAS plus whatever the hard drives take - generally ~5W / HDD).
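For anyone curious, the controller/GPU passthrough part is mostly one `qm set` per device on the Proxmox host (with IOMMU enabled). A small sketch shelling out from Python; the VM ID and PCI address are placeholders:

```python
import subprocess

# Placeholders: VM 101 is the TrueNAS guest, 0000:03:00.0 is the SATA
# controller's PCI address (look it up with `lspci -nn` on the host).
VMID = "101"
SATA_CONTROLLER = "0000:03:00.0"

# Hand the whole controller to the VM; TrueNAS then sees the raw disks directly.
subprocess.run(["qm", "set", VMID, "-hostpci0", SATA_CONTROLLER], check=True)
```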
Primarily, docker isn't isolation. Where isolation is important, VMs are just better.
Outside of that:
- Docker & k8s are great for sharing resources; VMs allow you to explicitly not share resources.
- VMs can be simpler to back up, restore, and migrate.
- Some software only runs in VMs.
- Passing through displays, USB devices, PCI devices, network interfaces etc. often works better with a VM than with Docker.
For my setup, I have a handful of VMs and dozens of containers. I have a proxmox cluster with the VMs, and some of the VMs are Talos nodes, which is my K8s cluster, which has my containers. Separately I have a zimaboard with the pfsense & reverse proxy for my cluster, and another machine with pfsense as my home router.
My primary networking is done on dedicated boxes for isolation (not performance).
My VMs run: plex, home assistant, my backup orchestrator, and a few windows test hosts. This is because:
- The windows test hosts don't containerise well; I'd rather containerise them.
- plex has a dedicated network port and storage device, which is simpler to set up this way.
- Home Assistant uses a specific USB port & device, which is simpler to set up this way (see the sketch after this list).
- I don't want plex, home assistant, or the backup orchestrator to be affected by issues relating to my other services / k8s. These are the services where small transient or temporary issues would impact the whole household.
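As a sketch of the Home Assistant case: on Proxmox, pinning a USB device to a VM is a single `qm set` with the device's vendor:product ID. The VM ID and USB ID below are placeholders (the real ID comes from `lsusb`):

```python
import subprocess

# Placeholders: VM 102 is the Home Assistant guest, 10c4:ea60 is a typical
# USB Zigbee/Z-Wave dongle's vendor:product ID as reported by `lsusb`.
VMID = "102"
USB_ID = "10c4:ea60"

# Pass the USB device through to the VM; identifying it by vendor:product ID
# means it follows the device even if it moves to a different physical port.
subprocess.run(["qm", "set", VMID, "-usb0", f"host={USB_ID}"], check=True)
```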
Also note, I don't use the proxmox container support (I use talos) for two reasons: 1 - I prefer k8s to manage services; 2 - the isolation boundary is better.