Comment by lproven
To be fair, your statement could be edited as follows to increase its accuracy:
> btrfs is quite infamous for eating your data.
This is the reason for the slogan on the bcachefs website:
"The COW filesystem for Linux that won't eat your data".
After over a decade of in-kernel development, Btrfs still can neither give an accurate answer to `df -h` nor repair a damaged volume.
Because it can't tell a program how much space is free, it's trivially easy to fill a volume. In my personal experience, writing to a full volume corrupts it irretrievably 100% of the time, and then it cannot be repaired.
IMHO this is entirely unacceptable in an allegedly enterprise-ready filesystem.
The fact that its RAID is even more unstable merely seals the deal.
> Btrfs still can neither give an accurate answer to `df -h` nor repair a damaged volume.
> In my personal experience, writing to a full volume corrupts it irretrievably 100% of the time, and then it cannot be repaired.
While I get the frustration, I think you could probably have resolved both of these by reading the manual. Btrfs allocates space separately for metadata & regular data, so if you create a lot of small files the filesystem may report itself 'full' (out of metadata space) while still having space free for data; `btrfs f df -h <path>` gives you the breakdown. Since everything is journaled & CoW, it will disallow most writes rather than cause actual damage. If you run into this, you can recover by adding an additional device for metadata (it can just be a loopback image), rebalancing, taking steps to resolve the root cause, and finally removing the additional device.
It may seem daunting, but it's actually only about six commands.
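For concreteness, here is a minimal sketch of that recovery sequence. The mount point `/mnt/data`, the image path `/tmp/btrfs-spare.img`, the image size, and the balance filter are all assumptions for illustration; adjust them to your setup.

```sh
# Inspect the real space breakdown (data vs. metadata vs. system).
btrfs filesystem df -h /mnt/data

# Create a small spare image and attach it as a loop device.
truncate -s 4G /tmp/btrfs-spare.img
LOOPDEV=$(losetup --find --show /tmp/btrfs-spare.img)

# Temporarily add the loop device to the filesystem, then rebalance
# so chunks can be shuffled and metadata gets room to allocate.
# -dusage=10 only touches data block groups that are <=10% used,
# which is usually enough to free space without a full rebalance.
btrfs device add "$LOOPDEV" /mnt/data
btrfs balance start -dusage=10 /mnt/data

# ...free up space / fix whatever filled the volume, then detach.
btrfs device remove "$LOOPDEV" /mnt/data
losetup -d "$LOOPDEV"
rm /tmp/btrfs-spare.img
```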