cross-posted from: https://lemm.ee/post/4274796
Just wanted to share some love for this filesystem.
I’ve been running a btrfs raid1 continuously for over ten years, on a motley assortment of near-garbage hard drives of all different shapes and sizes. None of the original drives are still in it, and that server is now on its fourth motherboard. The data has survived it all!
It’s grown to 6 drives now, and most recently survived the runtime failure of a SATA controller card that four of them were attached to. After replacing it, I was stunned to discover that the volume was uncorrupted and didn’t even require repair.
So knock on wood — I’m not trying to tempt fate here. I just want to say thank you to all the devs for their hard work, and add some positive feedback to the heap, since btrfs gets way more than its fair share of flak, which I personally find to be undeserved. Cheers!
Agreed, RAID 1 (and 10) are pretty stable.
Moderately fun fact: RAID 1 in BTRFS is not really RAID 1 in the traditional sense. Rather, it’s a guarantee that your data lives on two separate drives. You don’t know which ones, though. You could have one copy of everything on a 12TB drive, with the secondary copies distributed across three 4TB drives.
Traditional RAID 1 works ONLY with two drives, with the capacity of the smaller drive as the upper limit. The way to extend a traditional RAID 1 array is by adding two new drives and creating a RAID 10 with all four (multiple RAID 1 pairs, striped).
This right here is what has made it so flexible for me to reuse salvaged equipment. You can just chuck a bunch of randomly sized drives at it, and it will give you as much storage as it can while guaranteeing you can lose any one drive. Fantastic.
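As a rough illustration of the allocation rule described above (this is a simplified sketch, not btrfs’s actual chunk allocator, which works in 1 GiB chunks), the usable capacity of a btrfs raid1 array with mixed drive sizes comes out to roughly min(total/2, total − largest):

```python
def raid1_usable(sizes):
    """Approximate usable capacity of a btrfs raid1 array with mixed
    drive sizes: every chunk must have two copies on different drives."""
    total = sum(sizes)
    largest = max(sizes)
    # If one drive is bigger than all the others combined, its excess
    # can never be mirrored; otherwise capacity is half the total.
    return min(total // 2, total - largest)

# The 12TB-plus-three-4TB example from above (sizes in TB):
print(raid1_usable([12, 4, 4, 4]))  # -> 12
# A traditional two-drive mirror is limited by the smaller drive:
print(raid1_usable([8, 4]))         # -> 4
```

This matches what the official btrfs space calculator reports for the raid1 profile, up to chunk-granularity rounding.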
Any specific advice you would give to others to prevent corruption, or to keep drives healthy?
Schedule a monthly scrub (with the foreground option), and make sure you get notified if the exit code is non-zero.
https://btrfs.readthedocs.io/en/latest/btrfs-scrub.html
I also have a weekly balance scheduled to keep block groups compact, although if you don’t frequently delete files this may not be necessary IMO
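As a sketch of the scrub-plus-notification setup described above, a crontab entry could look like this. The mount point, schedule, and mail alert are placeholders, not something from the thread — adapt them to your system:

```shell
# Run a foreground scrub (-B) monthly so the exit code reflects the result;
# /mnt/pool and the mail notification are placeholders.
0 3 1 * *  /usr/bin/btrfs scrub start -B /mnt/pool || echo "btrfs scrub failed on /mnt/pool" | mail -s "btrfs scrub FAILED" root
```

The `-B` flag keeps `btrfs scrub start` in the foreground, which is what makes the non-zero exit code on errors usable in the `||` branch.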
I’m not sure I know enough to be giving out advice, but I can tell you what I do. I do have a cron job to run scrub, to keep the bitrot away. I also tend to replace my drives proactively when they get REALLY old — the flexibility of btrfs raid1 lets me do that one drive at a time instead of two, making it much more affordable. You can plan out your storage with the btrfs calculator.
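One way to do that one-drive-at-a-time swap is `btrfs replace`, which mirrors onto the new drive without degrading the array. This is a sketch — the device paths and mount point are placeholders:

```shell
# Swap an aging drive for a new one while the filesystem stays online.
# /dev/sdX (old), /dev/sdY (new), and /mnt/pool are placeholders.
btrfs replace start /dev/sdX /dev/sdY /mnt/pool
btrfs replace status /mnt/pool            # watch progress
# If the new drive is larger, grow that device to use the extra space
# (look up its devid with 'btrfs filesystem show' first):
btrfs filesystem resize <devid>:max /mnt/pool
```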
I’m glad it’s working well for you, but I don’t think it’s true to say that btrfs gets more than its fair share of flak. It gets exactly the correct amount of flak for what it is. Every place I have worked at that wanted to deploy a COW fs on, like, a NAS or server has always gone with zfs. btrfs is such a mess it never even enters the conversation. Even if it can have its bugs ironed out, the bcache dev was right in pointing out that its on-disk formats are poorly designed for their job, and cannot be revised except in a new version of the entire fs. I hope bcachefs gets merged into the kernel next year; that’s a filesystem I would actually trust with my data.
Btrfs does get a lot of flak based on hearsay or experiences that are out of date. It works well in a lot of scenarios and is used a lot now. ZFS is also a good fs for many use cases, especially in enterprise situations.
I can’t comment on the on-disk formats as I have no experience there, but Btrfs works well in a lot of use cases for a lot of users.
Bcachefs sounds promising, but it does have a long way to go and will need a lot of testing. It’s getting into the kernel to get more testing mileage and to encourage more developers; it only has one guy working on it (except for the casefolding submission), which is a big problem for both the present and the future. Hopefully it’ll get more devs interested.
Never trust any filesystem, or the storage media. Consider anything that holds your data to be fallible.
I’ve been using it in Fedora since they switched to it as the default FS. I have not done anything special, and I’m not trying anything fancy except compress-force=zstd:1. Seems good to me!
Why just :1? The default is :3, and looking at the timings for zstd compression speed vs. compression level (Google for it …), it only becomes slow at around level 7.
Doesn’t mean shit to me, but I’d suggest you reconsider.
Slow relative to what? Any zstd compression, while really fast, will be slower than native write speeds to my nvme. A tiny bit of ratio gain isn’t worthwhile to me.
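For reference, the compress-force=zstd:1 option discussed above is a btrfs mount option, so it would typically live in /etc/fstab. This is a sketch — the UUID and mount point are placeholders:

```shell
# /etc/fstab -- UUID and mount point are placeholders
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  btrfs  compress-force=zstd:1  0 0
```

It can also be tried out live with `mount -o remount,compress-force=zstd:1 /`; only data written after the remount gets compressed at the new level.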
I use it on my steam deck microsd to cram more shit in via compression. Main drive is left as ext4 though so case folding can be used for particularly janky windows games or mods.
I’ve been wanting to build a raid for a while, what raid controller do you use/would you recommend?
For a software RAID like this, you don’t want a hardware RAID controller, per se – you just want a bunch of ports. After my recent controller failure, I decided to try one of these. It’s slick as hell, sitting close to the motherboard, and seems rock solid so far. We’ll see!
Yeah BTRFS is way more reliable than Ext4. A simple power failure or other hardware fuckup with Ext4 and you can be sure all your data is gone, with BTRFS your data will survive a lot of shit.
That’s ridiculously exaggerated and you know it.
Yes it is :) but comparatively I’ve never lost a volume / disk to BTRFS in years of the same scenarios.
My experience says otherwise.
Ext4 is rock solid and will survive power loss without a problem.
I love btrfs for the compression and snapshot capabilities, but it still has a long way to go to reach ext4 maturity.
That’s not a shot at btrfs, it’s just that filesystem maturity and reliability take time.
I can’t share your enthusiasm about Ext4’s safety. I’ve had multiple disks lost to simple power failures at home and to more complex hardware failures at datacenters. At the time I migrated to XFS - which also always performed better than Ext4 when things failed - and then moved to BTRFS when it became mostly stable.
I’ve been using Ext4 for over 150 years now and I never had any issues with it. It not only survived multiple power failures but also a house fire, a couple of direct EMP hits and a zombie apocalypse.
I had the exact opposite experience. A power loss destroyed my btrfs boot drive; it couldn’t be mounted anymore.