Comment by LorenPechtel 3 days ago
RAID. Preferably RAID 6. Much, much better to build a system to survive failure than to prevent failure.
And 'softRAID', like what comes for free on Intel or AMD motherboards, sucks and should be avoided.
------
The best advice I can give is to use a real solution such as ZFS or Storage Spaces.
It's not sufficient to say 'use RAID', because the Venn diagram of things falling under RAID contains a whole bunch of shit solutions and awful experiences.
I haven't seen a machine shipped with firmware RAID in decades.
It's still enabled in the firmware of some vendors' laptops: ones deep in Microsoft's pockets, like Dell, which I personally would not touch unless the kit were free, though gullible IT managers keep buying the things.
My personal suspicion is that it's an anti-Linux measure. It's hard to convert such a machine to AHCI mode without reformatting unless you have more clue than the sort of person who buys Dell kit.
In real life it's easy: set Windows to start in Safe Mode, reboot, go into the firmware, change RAID mode to AHCI, reboot, and exit Safe Mode.
Result: Windows detects the new disk controller and boots normally, and then all you need to do is disable BitLocker and you can dual-boot happily.
However, that's more depth of knowledge than I've met in a Windows techie in a decade, too.
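For anyone who wants the exact incantation, here's a minimal sketch using bcdedit from an elevated Command Prompt (msconfig's Boot tab does the same job):

```
rem Step 1: force the next boot into Safe Mode (run elevated)
bcdedit /set {current} safeboot minimal

rem Step 2: reboot, enter the firmware setup, switch the SATA controller
rem from RAID to AHCI, save, and let Windows boot. In Safe Mode it loads
rem the generic AHCI driver instead of the firmware RAID one.

rem Step 3: turn Safe Mode back off and reboot normally
bcdedit /deletevalue {current} safeboot
```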
To be fair, your statement could be edited as follows to increase its accuracy:
> btrfs is quite infamous for eating your data.
This is the reason for the slogan on the bcachefs website:
"The COW filesystem for Linux that won't eat your data".
After over a decade of in-kernel development, Btrfs still can't give an accurate answer to `df -h`, nor reliably repair a damaged volume.
Because it can't tell a program how much space is actually free, it's trivially easy to fill a volume, and in my personal experience writing to a full volume corrupts it irretrievably every single time.
IMHO this is entirely unacceptable in an allegedly enterprise-ready filesystem.
The fact that its RAID is even more unstable merely seals the deal.
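For what it's worth, the usual workaround is to ask Btrfs itself rather than trusting `df`; a quick sketch, with /mnt/data standing in for wherever your btrfs volume is mounted:

```
# Plain df can report misleading free space on btrfs
df -h /mnt/data

# Ask btrfs for the real allocation picture (data vs. metadata, unallocated space)
sudo btrfs filesystem usage /mnt/data

# Older, coarser view of the same information
btrfs filesystem df /mnt/data
```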
This is incorrect; quoting the Linux 6.7 release notes (Jan 2024):
"This release introduces the [Btrfs] RAID stripe tree, a new tree for logical file extent mapping where the physical mapping may not match on multiple devices. This is now used in zoned mode to implement RAID0/RAID1* profiles, but can be used in non-zoned mode as well. The support for RAID56 is in development and will eventually fix the problems with the current implementation."
I've not kept up with more recent releases, but there has been progress on the issue.
I believe RAID5/6 is still considered experimental (although I believe the main issues were worked out in early 2024); I've seen reports of large arrays being stable since then. It's still recommended to run metadata as raid1/raid1c3.
RAID0/1/10 has been stable for a while.
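For illustration, a sketch of what that recommendation looks like in practice (device names are placeholders, and raid6 data remains use-at-your-own-risk):

```
# Four-disk array: raid6 for data, raid1c3 for metadata, so the metadata
# survives two device failures. Device names here are examples only.
mkfs.btrfs -d raid6 -m raid1c3 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Or convert an existing filesystem's metadata in place:
btrfs balance start -mconvert=raid1c3 /mnt/pool
```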
Software or hardware, it's still the same basic concept: redundancy rather than individual reliability.
Don't use hardware RAID these days. Software won rather drastically, likely because CPUs are finally powerful enough to run all the parity calculations without much of a hassle.
Software solutions like Windows Storage Spaces, ZFS, XFS, unRAID, etc. are "just better" than traditional RAID.
Yes, focus on dual-parity solutions, such as ZFS's "raidz2" or other "equivalent to RAID 6" setups. But stick with software solutions, which let you move hard drives around without tying them to motherboard slots or other such hardware issues.
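For example, a minimal raidz2 pool might look like this (a sketch: the pool name and disk IDs are made up, and using /dev/disk/by-id paths is exactly what keeps the drives independent of particular SATA ports):

```
# Dual-parity pool named "tank" across four disks; any two can fail.
# by-id paths keep the pool independent of which port each drive sits in.
zpool create tank raidz2 \
  /dev/disk/by-id/ata-EXAMPLE_DISK1 \
  /dev/disk/by-id/ata-EXAMPLE_DISK2 \
  /dev/disk/by-id/ata-EXAMPLE_DISK3 \
  /dev/disk/by-id/ata-EXAMPLE_DISK4

# Verify pool health
zpool status tank
```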