How to use Btrfs subvolumes: a complete and practical guide

Last update: 17/12/2025
Author: Isaac
  • Btrfs combines file system, RAID and volume manager, with COW, snapshots, compression and native checksums.
  • Subvolumes are logical file trees within a Btrfs file system that let you separate /, /home and /var and manage snapshots per area.
  • Multi-device support allows you to add/remove disks, change RAID profiles, and balance data online.
  • Tools like snapper or btrbk take advantage of Btrfs snapshots, but external copies are still necessary for real backups.


If you come from ext4 or XFS, looking at Btrfs with some puzzlement is perfectly normal: it isn't "just another file system," but a combination of file system, RAID layer, and volume manager. When you start seeing things like subvolumes, snapshots, balancing, or scrubbing, it all seems complicated, but they're actually pieces that fit together quite well.

In this article we will look at how to use Btrfs subvolumes and what they are for in typical installations (for example, Arch, Debian, openSUSE, or a NAS with Synology/QNAP). You'll see how they relate to snapshots, RAID, and compression, and what all this means for disk space. You'll also see real-world command examples for creating file systems, subvolumes, and snapshots and for handling multiple disks, as well as some typical problems (the famous ENOSPC with "free" space).

What is Btrfs and why is it different?

Btrfs is a next-generation file system for GNU/Linux designed to scale from a single SSD to multi-disk pools with redundancy, snapshots, and compression. It has been in the mainline kernel since version 2.6.29, and although its development is still active, the on-disk format is stable, so a Btrfs file system created today will still be readable by future kernels.

Unlike more traditional file systems such as ext4 or XFS, Btrfs integrates functions that were previously split between the file system, software RAID (mdadm), and the logical volume manager (LVM). With a single stack, you get management of multiple devices, RAID profiles, snapshots, subvolumes, integrity checks, and more.

Key Btrfs principles: COW, snapshots, RAID and checksums

The centerpiece of Btrfs is the Copy-on-Write (COW) model. When you modify data, the system doesn't overwrite it in place; instead, it writes the new blocks elsewhere and updates the references. This allows near-instantaneous cloning of files and subvolumes, as well as snapshots that are very cheap in both time and space.

This design relies on the fact that each block of data and metadata carries its own checksum (CRC32C by default). When reading a block, Btrfs verifies the checksum; if it doesn't match and another copy exists (for example, in RAID 1 or RAID 10), it reads the healthy replica and fixes the error on the fly. This protects against silent corruption, something ext4 alone cannot detect.
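The idea behind per-block checksums can be sketched with ordinary shell tools. This is purely conceptual: Btrfs computes CRC32C per block inside the kernel, while here the unrelated cksum utility (plain CRC32) just illustrates comparing a stored checksum against one recomputed on read:

```shell
# Conceptual sketch only: Btrfs does this per block with CRC32C in the
# kernel; cksum merely illustrates the principle.
printf 'hello world\n' > block.dat
stored=$(cksum block.dat | cut -d' ' -f1)   # checksum saved at write time

printf 'hellp world\n' > block.dat          # simulate silent corruption
now=$(cksum block.dat | cut -d' ' -f1)      # checksum recomputed on read

if [ "$stored" != "$now" ]; then
    # With RAID1, Btrfs would now read the healthy replica and rewrite
    # this block; without a second copy it returns a read error instead.
    echo "corruption detected"
fi
```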

In addition, Btrfs ships with built-in support for multiple devices, with RAID0, RAID1, RAID10, RAID5, and RAID6 profiles configurable separately for data (-d) and metadata (-m). Internally, it is not classic block-level RAID like mdadm, but replication and striping at the "chunk" level, managed by the file system itself.

Finally, Btrfs can apply transparent on-the-fly compression (zlib, lzo, zstd): data is compressed before being written and decompressed on read, completely transparently to applications. This reduces the space consumed and, in many cases, even speeds up reads on slow or mechanical disks.

Creating a Btrfs file system

To start playing with subvolumes you first need a Btrfs file system on a device. The basic creation process is very similar to any other mkfs tool:

mkfs.btrfs /dev/sdb

This command initializes /dev/sdb with Btrfs; no RAID or anything else is required. You can use a whole disk or a partition. Then you mount it at a mount point, for example:

mount /dev/sdb /mnt/btrfs

If you want a multi-device system from the start, you simply pass several devices to the creation command:

mkfs.btrfs /dev/sdc /dev/sdd

In this case, Btrfs creates a logical pool with the aggregated capacity of the devices (by default, data as "single" and metadata duplicated as RAID1, unless you specify otherwise). When mounting it, you only need to specify one of the devices, for example /dev/sdc.

Native RAID profiles and dynamic pool management

When creating the file system, you can already specify the redundancy profile for data and metadata with the -d and -m options: values such as raid0, raid1, raid10, raid5, raid6 or single. A typical RAID1 example in Btrfs would be:

mkfs.btrfs -d raid1 -m raid1 /dev/sdc /dev/sdd

Unlike a classic RAID1, in Btrfs you don't have identical block-by-block mirrors. What is guaranteed is that each block of data or metadata has at least two copies on different devices, but there are no "twin" disks as such, and disks of different sizes are also allowed.
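Because mixed disk sizes are allowed, the usable capacity of a RAID1 pool is not always total/2. Here is a small shell sketch of the usual approximation (an illustration, not an official btrfs utility): every block needs two copies on different devices, so the extra space on a disk larger than all the others combined goes unused:

```shell
# Rough usable-capacity estimate for a Btrfs RAID1 pool with mixed disk
# sizes (all sizes in GiB). RAID1 stores two copies of every block on
# different devices, so usable space is limited either by total/2 or by
# what the smaller disks can pair with the largest one.
# Illustrative approximation only, not an official btrfs tool.
raid1_usable() {
    total=0
    largest=0
    for size in "$@"; do
        total=$((total + size))
        if [ "$size" -gt "$largest" ]; then
            largest=$size
        fi
    done
    rest=$((total - largest))
    if [ "$largest" -gt "$rest" ]; then
        # The extra space on the big disk has nothing to pair with
        echo "$rest"
    else
        echo $((total / 2))
    fi
}

raid1_usable 1000 1000     # two 1 TB disks -> prints 1000
raid1_usable 2000 500 500  # 2 TB + 2x500 GB -> prints 1000
```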


Once the file system is mounted, you can add and remove disks from the pool on the fly, using commands such as:

btrfs device add /dev/sdb /mnt/btrfs
btrfs device delete /dev/sdc /mnt/btrfs

After adding devices, it's usually worth launching a balance to redistribute existing chunks across all disks:

btrfs filesystem balance /mnt/btrfs

This balance relocates blocks according to the chosen RAID profile, so that all disks pitch in and space is used more evenly. It is an online operation: the file system remains accessible while it runs.

How to switch from single to RAID1 without reinstalling

One of the great things about Btrfs is that you can convert a file system already in use to another RAID profile without unmounting or reinstalling. Imagine your root is on Btrfs on a single disk /dev/sda1 and you want to switch to RAID1 by adding /dev/sdb1.

The steps are to add the new device and run a balance with profile conversion:

btrfs device add /dev/sdb1 /
btrfs balance start -dconvert=raid1 -mconvert=raid1 /

During this process, Btrfs will relocate the chunks so that all data and metadata end up duplicated on both disks. Upon completion, "btrfs filesystem show /" will display two devices with similar usage, and "btrfs filesystem df /" will reflect the RAID1 profile for Data, Metadata, and System.

RAID degradation and repair with Btrfs

If one of the disks fails in a native RAID1 configuration, the system enters degraded mode. In Btrfs, to mount that file system you need the option:

mount -o degraded /dev/sdb /mnt/btrfs

While in this state, Btrfs will continue serving data from the healthy copies, but it will not have full redundancy. The correct procedure is to add a new disk, remove the "missing" device from the pool, and run a balance to re-replicate the blocks.

In cases of occasional corruption or suspect disks, you can also launch a scrub, which scans all blocks, verifies checksums, and repairs from the good copy where possible:

btrfs scrub start /mnt/btrfs
btrfs scrub status /mnt/btrfs

The scrub runs in the background while the file system is mounted, so it's advisable to schedule it for off-peak hours, especially on servers or NAS devices.
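One way to automate this is a cron entry; a sketch assuming the pool is mounted at /mnt/btrfs and that a monthly scrub at 03:00 suits your workload (the -B flag keeps scrub in the foreground so cron tracks the job until it finishes):

```
# /etc/cron.d/btrfs-scrub — illustrative schedule; adjust path and timing
0 3 1 * * root /usr/bin/btrfs scrub start -B /mnt/btrfs
```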

Transparent compression: zlib, lzo, zstd

Mounting the file system with the compress or compress= option activates automatic compression of data written from that point on. For example:

mount -o compress=zstd /dev/sdb /mnt/btrfs

The system stores the compressed blocks on disk, but when you run ls or stat you will still see the uncompressed logical size. The gain shows up in tools like df (less space used), in "btrfs filesystem df", or with utilities such as compsize.

The available algorithms offer a different balance between compression ratio and CPU consumption:

  • zlib: compresses a lot, but is slower; useful for very static data.
  • lzo: very fast, but compresses less; a good option for write-heavy systems.
  • zstd: a very interesting balance, with several configurable levels.

You can also adjust the compression level in the mount options: for example, `zstd:1` prioritizes speed, while `zstd:15` prioritizes maximum compression at the expense of CPU usage. These parameters are reflected in `/proc/mounts` and in `dmesg` messages when the file system is mounted.
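These options usually end up in /etc/fstab rather than being typed by hand. A minimal illustrative entry (the UUID is a placeholder, and noatime is an optional extra often paired with compression):

```
# /etc/fstab — illustrative; replace the placeholder UUID with your own
UUID=xxxx-placeholder  /data  btrfs  compress=zstd:3,noatime  0 0
```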

Copy-on-Write to files: instant clones with reflinks

The file-level COW mechanism is exposed, for example, through the cp option --reflink=always. When copying a large file within Btrfs using reflink, the data is not duplicated on disk; only additional references to the same blocks are created.

cp --reflink=always imagen.iso copia1.iso

The copy appears immediately and initially occupies no additional space. Only when one of the files is modified are the affected blocks physically copied (true copy-on-write). This is extremely useful for virtual machine templates, base images, and so on.


Subvolume snapshots follow the same logic: they are simply COW clones of an entire subvolume. When created, they barely occupy any metadata; the actual consumption comes later, depending on how much the contents of the original and the snapshot diverge.

Btrfs subvolumes: what they are and what they are used for

A subvolume in Btrfs is, in practice, a "mini file system" within the main one. A complete Btrfs always has at least one subvolume (the top level), and on top of it you can create others, which from the outside look like simple directories.

The beauty of it is that each subvolume can be mounted independently and have its own snapshots, quotas (with the quota system enabled), and backup policies, all while sharing the same storage pool and the same RAID configuration as the parent volume.

To create multiple subvolumes on top of a Btrfs already mounted in /mnt/btrfs, you would do something like this:

btrfs subvolume create /mnt/btrfs/subvolumen1
btrfs subvolume create /mnt/btrfs/subvolumen2
btrfs subvolume create /mnt/btrfs/subvolumen3

These subvolumes will appear as directories under /mnt/btrfs, but if you list them with btrfs you will see their IDs and internal paths:

btrfs subvolume list /mnt/btrfs

Once you have the ID (for example, 256 for subvolumen1), you can mount just that subvolume at another path:

mount -o subvolid=256 /dev/sdb /mnt/subvol1

This way, the data in that subvolume is accessible as /mnt/btrfs/subvolumen1 and /mnt/subvol1 simultaneously; it's the same content. This is very useful for separating /, /home, /var, or specific datasets without resorting to traditional partitions.
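To make such a mount permanent, you can reference the subvolume by name with subvol= (an alternative to subvolid=) in /etc/fstab; an illustrative sketch using the device from the examples above:

```
# /etc/fstab — illustrative entry; last field 0 disables boot-time fsck
/dev/sdb  /mnt/subvol1  btrfs  subvol=subvolumen1  0 0
```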

Subvolume design: practical examples (Arch, Debian, NAS)

A very typical structure on a desktop machine might be:

  • @ for the root (/), leaving out /home and /var/lib/docker.
  • @home for /home with your personal data.
  • @var or more specific subvolumes such as @var-log, @var-cache, etc.
  • @snapshots to store snapshots organized by date.

This allows a tool like Snapper, Timeshift, or btrbk to create snapshots of the system alone (subvolume @) to roll back updates, while user documents (in @home) are protected or synchronized with a different logic.

On a NAS with Synology or QNAP, what is actually done is to create shared folders on Btrfs subvolumes, so that Snapshot Replication can take snapshots of those folders, clone them, or replicate them to another NAS. The same idea, but packaged in a graphical interface.

Subvolume snapshots: rollback and copies in seconds

Snapshots in Btrfs are simply subvolumes created as a point-in-time image of another. To create a snapshot of /mnt/btrfs/subvolumen1 in another subvolume, subvolumen1_snapshot:

btrfs subvolume snapshot \
  /mnt/btrfs/subvolumen1 \
  /mnt/btrfs/subvolumen1_snapshot

If you then run `btrfs subvolume list`, you'll see a new ID and path for this snapshot. From that point on, both are logically independent: changes to one do not affect the other, even though internally they share blocks until they are modified.

This is used both for quick safety copies (for example, before a system upgrade) and for labs, testing, development environments, and so on. Restoring a snapshot usually boils down to mounting it and making it the new default subvolume, or copying the data back.

Using Btrfs on multiple disks with RAID

With Btrfs, multi-disk management is handled directly from the file system. When creating a volume, you can define separate profiles for data and metadata, for example data in RAID5 and metadata in RAID1 if you have three or more devices:

mkfs.btrfs -d raid5 -m raid1 /dev/sdb /dev/sdc /dev/sdd

This kind of profile offers striping and parity for data (better reads and fault tolerance) and strong redundancy for metadata, at the cost of some extra space. However, RAID 5/6 in Btrfs has historically had significant bugs; in critical environments, many administrators still prefer RAID 1/10.

If you initially created a RAID1 pool and later want to make better use of the space, you can always convert profiles on the fly with balance:

btrfs balance start -dconvert=raid5 -mconvert=raid1 /srv

Similarly, adding or removing devices is done with btrfs device add/remove and a subsequent balancing to relocate chunks. The command “btrfs filesystem show” gives you an overview of the status, size, and usage of each disk in the pool.


Space management: why du and df “lie” in Btrfs

One of the most confusing aspects of Btrfs is that the classic df and du tools do not accurately reflect reality. The combination of chunk-level RAID, COW, snapshots, and compression means the assumption "1 MiB written = 1 MiB occupied" no longer holds.

In Btrfs, space is reserved in large "slices" (chunks): typically 1 GiB for data and 256 MiB for metadata. Each chunk is assigned a profile: single, RAID1, RAID10, etc. Within a chunk, yes, 1 MiB written is 1 MiB used, but the way multiple chunks and block copies combine makes this simple calculation unreliable.

Additionally, with COW and snapshots, deleting a file does not guarantee that its blocks are released. If those blocks are shared with a snapshot or a reflink clone, they will keep occupying space until no reference to them remains. Hence the feeling of "I delete things and the space doesn't come back."

For this reason, Btrfs provides specific commands such as "btrfs filesystem df", which shows a breakdown of Data, Metadata, and System usage by profile, and "btrfs filesystem show", which shows how much of each disk is reserved and used in raw terms.
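If you need that breakdown in a script, the per-profile lines can be parsed with awk; a sketch where parse_btrfs_df is a made-up helper and the heredoc is illustrative sample output, not taken from a real system:

```shell
# Summarize "btrfs filesystem df"-style output: one line per block-group
# type with its profile and used space. parse_btrfs_df is a hypothetical
# helper written for this example.
parse_btrfs_df() {
    awk -F'[,:=]' '/total=/ {
        gsub(/ /, "", $2)                   # profile, e.g. RAID1
        print $1 " profile=" $2 " used=" $6
    }'
}

# Illustrative sample output fed through the parser:
parse_btrfs_df <<'EOF'
Data, RAID1: total=10.00GiB, used=4.23GiB
System, RAID1: total=32.00MiB, used=16.00KiB
Metadata, RAID1: total=1.00GiB, used=512.00MiB
GlobalReserve, single: total=16.00MiB, used=0.00B
EOF
```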

Best practices with subvolumes and snapshots in everyday use

If you're setting up, for example, Arch Linux with Btrfs on a single SSD, a highly recommended strategy is:

  • Create a single Btrfs file system on the root partition.
  • Define separate subvolumes for / (the system), /home, /var/log, and perhaps /var/lib/libvirt or /var/lib/docker.
  • Mount each subvolume with appropriate compression options (zstd or lzo) and, if desired, autodefrag for workloads with many small files.
  • Use Snapper, Timeshift, or btrbk to automate snapshots of the system subvolume before and after updates.

On a physical machine with multiple SSDs, you can opt for a single multi-device Btrfs with RAID1/10 and logical subvolumes, or for several separate Btrfs file systems. For subvolumes, not much changes: they remain logical entities within the Btrfs volume they live in; the important thing is to define from the outset which data you want to be able to snapshot and revert independently.

Regarding full backups, keep in mind that snapshots are not external backups. Snapper or Timeshift are usually used for local snapshots (mainly of the system), and for real backups of everything (including /home) it is advisable to rely on tools such as borg, restic, rsync, remote btrbk, or the NAS's own snapshot/replication solutions.

Basic administration and useful commands of Btrfs

The Btrfs CLI may seem a bit verbose, but it lets you abbreviate subcommands with unique prefixes. For example, instead of "btrfs filesystem defragment /" you can use "btrfs fi de /" if there is no ambiguity.

Some essential commands that you will use frequently are:

  • mkfs.btrfs: create file systems, with RAID options and labels.
  • btrfs filesystem show/df: view devices, usage, and profiles.
  • btrfs device add/delete: add or remove disks from the pool.
  • btrfs balance start: rebalance data and convert RAID profiles.
  • btrfs scrub start/status: verify and repair blocks with checksums.
  • btrfs subvolume create/list/delete/snapshot: manage subvolumes and snapshots.
  • btrfs check: check the file system offline (with caution).

It is also important to remember that a traditional fsck should not be run on a Btrfs file system at every boot. In fstab, the last field of Btrfs entries should be 0 to avoid improper automatic checks.

With this set of concepts and commands, Btrfs stops being that "weird black box" and becomes a very flexible tool: you can organize your system logically with subvolumes, take advantage of genuinely useful snapshots, and combine all of this with redundancy and compression, on a simple laptop or on a multi-disk pool.

Related article:
Btrfs Snapshots with Snapper: Recover Your System and Data on Linux