I have a very large external drive that I want to use for backups. Some of the backups are of Windows partitions that need to be accessible from Windows; others are backups of some Linux partitions.
...
By default, a single copy of user data is stored, and two copies of file system metadata are stored. By increasing copies, you adjust this behavior so that copies copies of user data (within that file system) are stored, and copies plus one copies of file system metadata (within that file system) are stored. For best effect, if you want to set copies to a value greater than one, you should do so when you create the pool, using zpool create -O copies=N, to ensure that additional copies of all root file system metadata are stored.
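For example, a minimal sketch (the pool name tank, the device /dev/sdb, and the dataset name are hypothetical). -O sets the property on the pool's root file system at creation time, so every dataset inherits it; zfs set on an existing dataset only affects blocks written after the change:

    # Create a pool whose root file system (and, by inheritance,
    # all descendant datasets) keeps two copies of user data:
    zpool create -O copies=2 tank /dev/sdb

    # Or raise copies on an existing dataset; only data written
    # from now on gets the extra copy, existing blocks are unaffected:
    zfs set copies=2 tank/backups
    zfs get copies tank/backups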
Under normal read operations, extra copies only consume storage space. When a read error occurs, if a redundant, valid copy of the data exists, that copy can be used to satisfy the read request, and the broken copy is transparently rewritten.
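You can watch this self-healing happen, as a rough sketch (again assuming a pool named tank): a scrub forces ZFS to read and verify every copy of every block, and zpool status reports how much data was repaired from redundant copies:

    zpool scrub tank
    zpool status -v tank   # the scan line reports how much the scrub repaired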
However, during writes, all copies must be updated to keep them in sync. Because ZFS aims to place copies far away from each other, this introduces additional seeking. Its Merkle-tree design adds to this: metadata blocks are placed some physical distance away from the data blocks they checksum (to guard against, for example, a single write failure corrupting both the checksum and the data). I believe ZFS aims to place copies at least 1/8 of the vdev away from each other, and the metadata block containing the checksum for a data block is always placed some distance away from that data block.
Consequently, setting copies greater than 1 does not significantly help or hurt read performance, but it reduces write performance roughly in proportion to the number of copies requested and to the IOPS (I/O operations per second) capability of the underlying storage.
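If you want to measure the write penalty yourself, a rough sketch (dataset names are hypothetical; /dev/urandom is CPU-bound and incompressible, so treat the numbers as indicative only):

    # Two sibling datasets, differing only in the copies property:
    zfs create -o copies=1 tank/c1
    zfs create -o copies=2 tank/c2

    # Write the same amount of data to each and compare throughput;
    # conv=fsync makes dd wait until the data has reached the disk:
    dd if=/dev/urandom of=/tank/c1/test bs=1M count=1024 conv=fsync
    dd if=/dev/urandom of=/tank/c2/test bs=1M count=1024 conv=fsync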