As you can see in the chart above, btrfs-raid1 differed pretty drastically from its conventional analogue. To understand how, let's think about a hypothetical collection of "mutt" drives of mismatched sizes. If we have one 8T disk, three 4T disks, and a 2T disk, it's difficult to make a useful conventional RAID array from them—for example, a RAID5 or RAID6 would need to treat them all as 2T disks, producing only 10T of raw storage and a mere 8T (RAID5) or 6T (RAID6) after parity.
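To make that arithmetic concrete, here's a rough sketch of what a conventional mdraid build from those mutt drives would look like. The device names are hypothetical, and mdadm sizes every member down to the smallest disk in the set.

```sh
# Hypothetical devices: /dev/vdb = 8T, /dev/vdc-/dev/vde = 4T each, /dev/vdf = 2T.
# mdadm will warn about the size mismatch, then treat every member as a 2T disk.
mdadm --create /dev/md0 --level=5 --raid-devices=5 \
    /dev/vdb /dev/vdc /dev/vdd /dev/vde /dev/vdf

# 5 members x 2T = 10T raw; RAID5 spends one member's worth on parity,
# leaving roughly 8T usable (6T with --level=6). The extra 14T of capacity
# on the larger disks is simply wasted.
cat /proc/mdstat
```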
However, btrfs-raid1 offers a very interesting premise. Since it doesn't actually marry disks together in fixed pairs, it can use the entire collection of disks without waste. Any time a block is written to a btrfs-raid1 array, it's written identically to two separate disks—any two separate disks. Since there are no fixed pairings, btrfs-raid1 is free to simply fill all of the disks at a rate roughly proportional to their free capacity.
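As a rough sketch (again with hypothetical device names), building that same pile of mutt drives into a btrfs-raid1 looks like this:

```sh
# Hypothetical devices: /dev/vdb = 8T, /dev/vdc-/dev/vde = 4T each, /dev/vdf = 2T.
# Give both data and metadata the raid1 profile, so every chunk is mirrored
# on some pair of disks, whichever two currently have the most free space.
mkfs.btrfs -d raid1 -m raid1 /dev/vdb /dev/vdc /dev/vdd /dev/vde /dev/vdf
mount /dev/vdb /mnt

# Report raw vs. usable space. With 22T of raw disk and every chunk mirrored,
# roughly half of it (about 11T) is usable, versus mdraid's 8T or 6T.
btrfs filesystem usage /mnt
```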
As any storage administrator worth their salt will tell you, RAID is primarily about uptime. Although it may keep your data safe, that's not its real job—the job of RAID is to minimize the number of instances in which you have to take the system down for extended periods of time to restore from proper backup.
Once you understand that fact, the way btrfs-raid handles hardware failure looks downright nuts. What happens if we yank a disk from our btrfs-raid1 array above?
Nothing good, by default: the array refuses to mount at all until you explicitly ask for a degraded mount, and if the "missing" disk later comes back, btrfs will happily mount with its stale copy and make no attempt to resynchronize it on its own. Btrfs' refusal to mount degraded, automatic mounting of stale disks, and lack of automatic stale disk repair/recovery do not add up to a sane way to manage a "redundant" storage system.
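Recovery is possible, but it's entirely manual and easy to get wrong. A rough sketch of the dance, assuming the hypothetical array above lost /dev/vdf and a blank /dev/vdg is standing by as its replacement:

```sh
# The array will not mount on its own with a member missing; you must ask.
mount -o degraded /dev/vdb /mnt

# Replace the missing member with the new disk and let btrfs copy the
# surviving mirror halves onto it. (The devid "5" is an assumption here;
# check `btrfs filesystem show` for the real one.)
btrfs replace start 5 /dev/vdg /mnt

# If the "failed" disk merely dropped out and came back stale instead,
# nothing resyncs it automatically; a scrub is needed to rewrite the
# out-of-date copies from their good mirrors.
btrfs scrub start /mnt
```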
Believe it or not, we've still only scratched the surface of btrfs' problems. Similar problems and papercuts lurk in the way it manages snapshots, replication, compression, and more. Once we get through those, there's performance to talk about—btrfs can, in many cases, be orders of magnitude slower than either ZFS or mdraid under reasonable, common real-world conditions and configurations.