More random home lab things I've recently learned
(chollinger.com)
135 points by otter-in-a-suit 7 days ago
I just learned about the whole homelab thing a week ago; it's a much deeper rabbit hole than I expected. I'm planning to set up Proxmox today for the first time, in fact, and retire my Ubuntu Server setup running on a NUC that's been serving me well for the last couple of years.
I hadn't heard about Mealie yet, but it sounds like a great one to install.
You should definitely try Mealie, yes. On top of being a good way to host your own recipes, the entire thing just feels... really well put together?
I'm not even using the features beyond the recipes yet, but I'm already very happy that I can migrate my recipes from Google Docs over there.
Data hoarding is a bit more involved than just a homelab. You don't want your data hoard to go down or go missing while you're labbing new techs and protocols.
I have Proxmox running on top of a clean Debian install on my NUC. I wanted to allow Plex to use hardware decoding, and it got a bit funny trying to do that with Plex running in a VM, so it runs on the host and I use VMs for other stuff.
I have an Intel (12th Gen i5-12450H) mini PC and at first had issues getting the GPU firmware loaded and working in Debian 12. However, upgrading to Debian 13 (trixie) and doing an apt update and upgrade resolved the issue, and I was able to pass the onboard Intel GPU through Docker to a Jellyfin container just fine. I believe the issue is related to older Linux kernel and GPU firmware compatibility. Perhaps that's your issue.
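For anyone trying the same thing, the passthrough is just a device mapping - a minimal sketch (image and port are the Jellyfin defaults; the host paths are placeholders, and you may also need --group-add with your host's render group ID so the container user can open the device):

    # Expose the Intel iGPU's render node to the container for VAAPI/QSV transcoding
    docker run -d \
      --name jellyfin \
      --device /dev/dri/renderD128:/dev/dri/renderD128 \
      -v /srv/jellyfin/config:/config \
      -v /srv/media:/media \
      -p 8096:8096 \
      jellyfin/jellyfin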
Here's a setup in another league entirely, if you're curious about it: https://youtu.be/-b3t37SIyBs
It’s another league, but I don’t get the point of mixing enterprise rack-mounts with Raspberry Pis.
You'd be delighted (or terrified) to know that I just added an old gaming computer in a 4U case to the cluster, so I can play with PCI/GPU passthrough.
The Dell is essentially the main machine that runs everything we actually use - the other hardware is either used as redundancy or for experiments (or both). I got the Pi from a work thing and this has been a fun use case. Not that I necessarily recommend it...
My most recent learning - DDR4 ECC UDIMMs are comically expensive. To the point where I considered just replacing the entire platform with something RDIMM rather than swapping to ECC sticks.
>No space left on device.
>In other words, you can lock yourself out of PBS. That’s… a design.
Run PBS in an LXC with the base on a ZFS dataset with dedup & compression turned off. If it bombs, you can increase the disk size in Proxmox & reboot it. Unlike VMs, you don't need to do anything inside the container to resize the FS, so this generally works as a fix.
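Roughly, on the Proxmox host (the dataset name and container ID are placeholders):

    # Backing dataset for the PBS datastore, with dedup & compression off
    zfs create -o compression=off -o dedup=off tank/pbs

    # If PBS fills its disk and locks you out, grow the LXC's root disk from the host;
    # the filesystem inside the container is resized automatically
    pct resize 101 rootfs +16G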
>PiHole
AGH (AdGuard Home) is worth considering because it has built-in DoH.
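You'd normally configure this through the web UI, but for reference it lands in AdGuardHome.yaml roughly like this (the upstream choices are just examples):

    dns:
      upstream_dns:
        - https://dns.cloudflare.com/dns-query
        - https://dns.google/dns-query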
>Raspberry Pi 5, ARM64 Proxmox
Interesting. I'm leaning more towards k8s for integrating pis meaningfully
You seem knowledgeable so you may already know, but it's worth looking at the x86 mini PCs. Performance per watt has gotten pretty close on the newer low power CPUs (e.g. N150, unsure what AMD's line for that is), and performance per $ spent on hardware is way higher. I'm seeing 8GB Pi 5s with a power supply and no SD card for $100; you can get an N150 mini PC with 16GB of RAM and 500GB SSD pre-installed for like $160. Double the RAM, double the CPU performance, and comes with an SSD.
Imo, Raspberry Pis haven't been cost competitive general compute devices for a while now unless you want GPIO pins.
The first thing I thought when I read this article was how Raspberry Pis just make this kind of thing more difficult and annoying compared to a regular normal PC, whether new (e.g. a cheap mini PC) or used (e.g. a used business workstation or just a plain desktop PC).
And if you want GPIO pins, I'd imagine that for a lot of those applications you'd be better served with an ESP32, and that a Raspberry Pi is essentially overkill for many of those use cases.
Yeah, I have a collection of mini PCs - they are indeed great. This build was more NAS-focused: 9x SATA SSD and 6x NVMe... mini PCs just don't have the connectivity for that sort of thing.
>Imo, Raspberry Pis haven't been cost competitive general compute devices for a while now unless you want GPIO pins.
I have a bunch of Raspberry Pi 4Bs that I'll use for a k8s HA control plane, but yeah, outside of that they're not ideal. Especially with the fragility of SD cards instead of NVMe (unless you buy the silly HAT thing).
> My most recent learning - DDR4 ECC UDIMMs are comically expensive. To the point where I considered just replacing the entire platform with something RDIMM rather than swapping to ECC sticks.
DDR4 anything is becoming very expensive right now because manufacturers have been switching over to DDR5.
>AGH is worth considering because it has built in DoH
Technitium has all the bells and whistles along with being cross platform.
Can somebody explain the whole proxmox thing? I haven’t used it, I use k3s.
I don’t get why people use VMs for stuff when there’s docker.
Thanks!
Primarily, docker isn't isolation. Where isolation is important, VMs are just better.
Outside of that:
Docker & k8s are great for sharing resources, VMs allow you to explicitly not share resources.
VMs can be simpler to backup, restore, migrate.
Some software only runs in VMs.
Passing through displays, USB devices, PCI devices, network interfaces etc. often works better with a VM than with Docker.
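On Proxmox, for instance, that kind of passthrough is a one-liner per device (the VM ID and device addresses below are placeholders - take them from lsusb/lspci, and PCI passthrough also assumes IOMMU is enabled):

    # Pass a specific USB device (vendor:product from lsusb) to VM 100
    qm set 100 -usb0 host=10c4:ea60

    # Pass a PCI device (address from lspci) to the same VM
    qm set 100 -hostpci0 0000:01:00.0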
For my setup, I have a handful of VMs and dozens of containers. I have a proxmox cluster with the VMs, and some of the VMs are Talos nodes, which is my K8s cluster, which has my containers. Separately I have a zimaboard with the pfsense & reverse proxy for my cluster, and another machine with pfsense as my home router.
My primary networking is done on dedicated boxes for isolation (not performance).
My VMs run: plex, home assistant, my backup orchestrator, and a few windows test hosts. This is because:
- The Windows test hosts don't containerise well; I'd rather containerise them.
- Plex has a dedicated network port and storage device, which is simpler to set up this way.
- Home Assistant uses a specific USB port & device, which is simpler to set up this way.
- I don't want Plex, Home Assistant, or the backup orchestrator to be affected by issues relating to my other services / k8s. These are the services where small transient or temporary issues would impact the whole household.
Also note, I don't use the proxmox container support (I use talos) for two reasons. 1 - I prefer k8s to manage services. 2 - the isolation boundary is better.
Maybe my use case is abnormal, but I allocate the majority of my resources to a primary VM where I run everything, including containers, etc. But by running Proxmox, I can now back up my entire server and even transfer it across the network. If I ever have some software to try out, I can do it in a new VM rather than on my main host. I can also 'reboot' my 'server' without actually rebooting the real computer, which meant less fan noise and interruption back when I used an actual rack-mounted server at home.
Proxmox is essentially a clustered hypervisor (a KVM wrapper, really). It has some overlap with K8s (via containers), but is simpler for what I do and has some very nice features, namely around backups, redundancy/HA, hardware passthrough, and the fact that it has a usable GUI.
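To give a flavor of the backup side, a snapshot-mode backup is a one-liner (VM ID and storage name are placeholders):

    # Back up VM 100 without stopping it, compressed with zstd
    vzdump 100 --storage local --mode snapshot --compress zstd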
I also use K8s at work, so this is a nice contrast to use something else for my home lab experiments. And tbh, I often find that if I want something done (or something breaks), muscle-memory-Linux-things come back to me a lot quicker than some obscure K8s incantation, but I suppose that's just my personal bias.
Several of my VMs (which are very different than containers, obviously - even though I believe VMs on K8s _can_ be a thing...) run (multiple) docker containers.
Personally: Proxmox/VMs are great if you'd like to separate physical HW. In my case, a virtualized TrueNAS means I can give it a whole SATA controller and keep this as an isolated storage machine.
Whatever uses that storage usually runs in Docker inside an LXC container.
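For reference, the one bit of Proxmox config that makes Docker-in-LXC work (the container ID is a placeholder):

    # Allow nested containers; keyctl is needed for Docker in unprivileged LXCs
    pct set 101 --features nesting=1,keyctl=1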
If I need something more isolated (think public-facing, behind Cloudflare) - that's a separate Docker container on another network, routed through another OPNsense VM.
Desktop - a VM where I passed through a whole GPU and a USB hub.
Best part - it all runs on fairly low-power HW (<20W idle for the NAS, plus whatever the hard drives take - generally ~5W / HDD).
"Why would I need virtualization when I have Kubernetes".. sounds like a someone who has never had to update the K8s control plane and had everything go completely wrong. If it happens to you, you will be begging for an HVM with real snapshots.
Makes backups of the KVM VM running Docker easy too, right?
I second the shout out for Mealie, it's very useful. Importing from URLs works very well, and it gives you a centralised place for all your recipes, without ads or filler content and safe from linkrot.
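The URL import is also scriptable if you ever want to bulk-migrate - something like this against the v1 API (the token and URLs are placeholders, and the endpoint path has moved between versions, so check your instance's API docs):

    # Scrape a recipe into Mealie straight from a URL
    curl -X POST https://mealie.example.com/api/recipes/create/url \
      -H "Authorization: Bearer $MEALIE_TOKEN" \
      -H "Content-Type: application/json" \
      -d '{"url": "https://example.com/some-recipe"}'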
Not sure I understand the need to use a Raspberry Pi here. They're cool and all, but wouldn't a regular old PC be simpler to set up, maintain, and attach hardware to? It's a hobby - you can do whatever you want - but I wouldn't involve a Pi in a home server setup unless I specifically needed something it bought me, like the small form factor, low power usage, GPIO pins, and so on.
I always need lower power consumption. I'm in the UK and my power costs are $0.40 per kWh. Even running a Raspberry Pi 5 24/7 would cost me $25 per year (that works out if you assume a ~7W average draw: 7W x 8,760h is about 61 kWh, times $0.40 is about $25).
I just commented on this above, but I actually got the Pi for free and it's a very capable device. I wouldn't buy one for this use case (nor do I really recommend it, but it _does_ work).
One of my favorite CyberPower perks is their RMCARDs for network monitoring: It's a separate module that works in basically all of their rackmount UPSes. You can replace the entire UPS without having to pay for the little mini web server again, it'll just pop right into the new unit.
It is a little hefty for a homelab-level setup, but the impressive bit to me is that they've kept compatibility with it longer than we've replaced UPSes at work (it looks like the RMCARD205 and 305 were introduced in 2018), so instead of paying for that hardware built into each unit, the RMCARD has been a one-time purchase we can bring from unit to unit.
Look for APC UPSes that support their network management card add-in - the cards can frequently be purchased on eBay for $30-$100. You have to make sure the UPS supports the card, but they are excellent; model numbers like AP9631, if I recall. I run about 10 of them across different locations and they work great, some of them for 8+ years now. (About a year ago, when I got a new UPS, APC still offered the older firmware for download; however, after a certain FW version it started going to a cloud subscription model, so be sure to keep the old firmware. If you're worried about the firmware not being updated on the non-cloud version, just be sure to VLAN/firewall access to them, which you should be doing anyway.)
Good reminder for me to set up a UPS for my home lab before I go on vacation. . .
I've recently learned that "homelab" is a specific thing meaning you run certain software (like Proxmox), and not a generic term for running a 'server lab' at home.
Most “homelabs” are built by a developer LARPing as a sysadmin, with a user population of one (themselves) or zero for most of the features.
It’s the SUV that has off-road tires but never leaves the pavement, the beginner guitarist with an arena-ready amp, the occasional cook with a $5k knife. No judgment, everyone should do what they want, but the discussions get very serious even though the stakes are low.
LARPing as a sysadmin has a lot of benefits. It's taught me Ansible, Docker, Kubernetes, etc.
Which are all pretty useful considering my day job is a software engineer.
Many of these things have been directly applicable at work, e.g. when something weird happens in AWS, or we have a project using obscure Docker features.
I don't personally have a homelab, but I think that (unlike a giant amp or SUV) the homelab lets you learn interesting skills that would be hard to learn otherwise. It seems more defensible to me.
some people think it's not "homelabbing" unless you're doing things the way it's done at large scale. i think these people are aiming to enter IT as a career and consider a homelab to be a resume project.
but proxmox and kubernetes are overkill, imo, for most homelab setups. setting them up is a good learning experience but not necessarily an appropriate architecture for maintaining a few mini PCs in a closet long term.
you can ignore the gatekeeping.
Homelabbing is a hobby for most people involved in it, and like other hobbies, some people dip their toes in it while others go diving in the deep end. But would you say it’s “overkill” for a hobbyist fisher to have multiple fishing poles? Or for a hobbyist painter to try multiple sets of paintbrushes? Or a hobbyist programmer to know multiple programming languages?
There’s a lot of overlap between “I run a server to store my photos” and “I run a bunch of servers for fun”, which has resulted in annoying gatekeeping (or reverse gatekeeping) where people tell each other they are “doing it wrong”, but on Reddit at least it’s somewhat being self-organized into r/selfhosted and r/homelab, respectively.
> i think these people are aiming to enter IT as a career and consider a homelab to be a resume project.
It's funny. I did this (before it really became a more mainstream hobby, this was early 00s), but now that I work in ops I barely even want to touch a computer after work.
k8s is definitely overkill if your goal is not learning k8s.
proxmox is great, though. It's worth running it even if you treat it as nothing more than a BMC.
Run whatever you like!
I personally enjoy the big machines (I've also always enjoyed meaninglessly large benchmark numbers on gaming hardware) and enterprise features, redundancy etc. (in other words, over-engineering).
I know others really enjoy playing with K8s, which is its own rabbit hole.
My main goal - apart from the truly useful core services - is to learn something new. Sometimes it's applicable to work (I am indeed an SWE larping as a sysadmin, as another commenter called out :-) ), sometimes it's not.
Where’d we get this term? I hear “home lab” and I think of having equipment to accomplish something new, not… running ordinary server software in fairly ordinary ways. Like Tony Stark designing his suits has a “home lab”. People 3D printing Warhammer figures or with a couple little servers running PiHole and Wireguard and such… not so much?
I’ve had one or two machines running serving stuff at home for a couple decades [edit: oh god, closer to 2.5 decades…], including serving public web sites for a while, and at no point would I have thought the term “home lab” was a good label for what I was doing.
> ignore current warnings - I’m using a MacBook Pro charger + cable and still got the warning that I need a 5V/5A PSU.
You need to be careful with this one.
The USB spec goes up to 15W (3A) for its 5V PD profiles, and the standard way to get 25W would be to use the 9V profile. I assume the Pi 5 lacks the necessary hardware to convert a 9V input to 5V, and, instead, the Pi 5 and its official power supply support a custom, out-of-spec 25W (5A) mode.
Using an Apple charger gets you the standard 15W mode, and, on 15W, the Pi can only offer 600mA for accessories, which may or may not be enough to power your NVMe. Using the 25W supply, it can offer 1.6A instead, which gives you plenty more headroom.
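Both limits can be overridden if you know your supply can actually deliver the current - these are documented Raspberry Pi settings, but obviously at your own risk:

    # Option 1: in /boot/firmware/config.txt, lift the USB accessory current limit
    usb_max_current_enable=1

    # Option 2: tell the bootloader to assume a 5 A supply
    sudo rpi-eeprom-config --edit   # then add the line: PSU_MAX_CURRENT=5000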