Comment by dragontamer 3 days ago

Don't RAID these days. Software won rather drastically, likely because CPUs are finally powerful enough to run all those calculations without much of a hassle.

Software solutions like Windows Storage Spaces, ZFS, XFS, unRAID, etc., are "just better" than traditional RAID.

Yes, aim for dual-parity setups such as ZFS's "raidz2" or other RAID6-equivalent schemes. But stick with software solutions that let you move hard drives around freely, without tying them to particular motherboard slots or other hardware constraints.
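
A minimal sketch of that kind of setup (pool name and device paths are placeholders, not a recommendation for any particular hardware):

```
# six-drive ZFS pool with two drives' worth of parity (roughly RAID6-equivalent)
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
zpool status tank

# the pool's identity is written to the disks themselves, so they can be
# shuffled between ports or machines and re-imported later
zpool export tank
zpool import tank
```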

lproven 3 days ago

> Don't RAID these days. Software won rather drastically

RAID does not mean or imply hardware RAID controllers, which you seem to incorrectly assume.

Software RAID is still 100% RAID.

  • dragontamer 2 days ago

    And 'softRAID', like the free firmware RAID bundled with Intel and AMD motherboards, sucks and should be avoided.

    ------

    The best advice I can give is to use a real solution like ZFS, Storage Spaces and the like.

    It's not sufficient to say 'use RAID', because the Venn diagram of things falling under RAID includes a whole bunch of shit solutions and awful experiences.

    • lproven 2 days ago

      I haven't seen a machine shipped with firmware RAID in decades.

      It's still enabled in the firmware of some vendors' laptops -- ones deep in Microsoft's pockets, like Dell, whose kit I personally wouldn't touch unless it were free, though gullible IT managers buy the things.

      My personal suspicion is that it's an anti-Linux measure. It's hard to convert such a machine to AHCI mode without reformatting unless you have more clue than the sort of person who buys Dell kit.

      In real life it's easy: set Windows to start in Safe Mode, reboot, go into the firmware, change RAID mode to AHCI, reboot, exit Safe Mode.

      Result: Windows detects a new disk controller and boots normally, and now all you need to do is disable BitLocker and you can dual-boot happily.
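
      The Safe Mode round trip is two bcdedit commands and a couple of reboots from an elevated prompt; a sketch (the RAID-to-AHCI switch itself still happens by hand in the firmware setup in between):

      ```
      :: boot into Safe Mode on the next restart
      bcdedit /set {current} safeboot minimal
      shutdown /r /t 0

      :: switch the controller from RAID to AHCI in the firmware setup,
      :: boot back into Safe Mode, then restore normal boot
      bcdedit /deletevalue {current} safeboot
      shutdown /r /t 0
      ```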

      However, that's more depth of knowledge than I've met in a Windows techie in a decade, too.

f_devd 3 days ago

FYI, XFS is not redundant; also, RAID usually refers to software RAID these days.

I like btrfs for this purpose since it's extremely easy to set up over the CLI, but any of the other options mentioned will work.
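
For a sense of what "easy over the CLI" looks like, a minimal two-disk mirror sketch (device names and mount point are placeholders):

```
# mirrored btrfs volume, both data and metadata as RAID1
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
mount /dev/sdb /mnt/pool

# growing it later is one device add plus a rebalance
btrfs device add /dev/sdd /mnt/pool
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool
```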

  • zozbot234 3 days ago

    btrfs RAID is quite infamous for eating your data. Has it been fixed recently?

    • lproven 2 days ago

      To be fair, your statement could be edited as follows to increase its accuracy:

      > btrfs is quite infamous for eating your data.

      This is the reason for the slogan on the bcachefs website:

      "The COW filesystem for Linux that won't eat your data".

      https://bcachefs.org/

      After over a decade of in-kernel development, Btrfs still can neither give an accurate answer to `df -h` nor repair a damaged volume.

      Because it can't tell a program how much space is free, it's trivially easy to fill a volume. In my personal experience, writing to a full volume corrupts it irretrievably 100% of the time, and then it cannot be repaired.

      IMHO this is entirely unacceptable in an allegedly enterprise-ready filesystem.

      The fact that its RAID is even more unstable merely seals the deal.

      • f_devd 2 days ago

        > Btrfs still can neither give an accurate answer to `df -h` nor repair a damaged volume.

        > In my personal experience, writing to a full volume corrupts it irretrievably 100% of the time, and then it cannot be repaired.

        While I get the frustration, I think you could probably have resolved both of these by reading the manual. Btrfs separates metadata and regular data, which means that if you create a lot of small files the filesystem may report itself as 'full' while still having space available for data; `btrfs f df -h <path>` gives you the breakdown. Since everything is journaled and CoW, it will disallow most actions to prevent actual damage. If you run into this, you can recover by adding an additional device for metadata (a loopback image is enough), rebalancing, taking steps to resolve the root cause, and finally removing the additional device.

        It may seem daunting, but it's actually only about 6 commands, roughly the sequence sketched below.
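
        A hedged sketch of that sequence (mount point, sizes and the loop device are placeholders; double-check against the btrfs documentation for your kernel):

        ```
        # see the real data/metadata breakdown rather than what df reports
        btrfs filesystem df /mnt/pool

        # attach a temporary scratch device; a loopback file is enough
        truncate -s 4G /tmp/scratch.img
        losetup /dev/loop0 /tmp/scratch.img
        btrfs device add /dev/loop0 /mnt/pool

        # rebalance so metadata has room again, then deal with the root cause
        btrfs balance start -dusage=10 /mnt/pool

        # once space has been freed, detach the scratch device
        btrfs device remove /dev/loop0 /mnt/pool
        ```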

    • cerved 2 days ago

      No. RAID 5/6 is still fundamentally broken and probably won't get fixed.

      • f_devd 2 days ago

        This is incorrect; quoting the Linux 6.7 release notes (Jan 2024):

        "This release introduces the [Btrfs] RAID stripe tree, a new tree for logical file extent mapping where the physical mapping may not match on multiple devices. This is now used in zoned mode to implement RAID0/RAID1* profiles, but can be used in non-zoned mode as well. The support for RAID56 is in development and will eventually fix the problems with the current implementation."

        I've not kept up with more recent releases, but there has been progress on the issue.

    • f_devd 3 days ago

      I believe RAID5/6 is still experimental (although the main issues were reportedly worked out in early 2024); I've seen reports of large arrays being stable since then. It's still recommended to run metadata in raid1/raid1c3 (see the sketch below).

      RAID0/1/10 has been stable for a while.
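
      A sketch of that layout at mkfs time (device names and mount point are placeholders; check the current raid5/6 status for your kernel before trusting it with real data):

      ```
      # parity RAID for data, three-copy mirror for metadata
      mkfs.btrfs -d raid6 -m raid1c3 /dev/sdb /dev/sdc /dev/sdd /dev/sde

      # or convert just the metadata profile on an existing filesystem
      btrfs balance start -mconvert=raid1c3 /mnt/pool
      ```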

LorenPechtel a day ago

Software or hardware, it's still the same basic concept.

Redundancy rather than individual reliability.