{{Short description|Copy-on-write file system}}
{{Use dmy dates|date=January 2025}}
{{Infobox filesystem
| full_name = B-tree file system
| name = Btrfs
| license = GNU GPL
| developer = SUSE, Meta, Western Digital, Oracle Corporation, Fujitsu, Fusion-io, Intel, The Linux Foundation, Red Hat, and Strato AG{{cite web |url=https://btrfs.readthedocs.io/en/latest/Contributors.html |title=Contributors at BTRFS documentation |date=15 June 2022 |website=Btrfs.ReadTheDocs.io |access-date=5 December 2022 }}
| introduction_os = Linux kernel 2.6.29
| introduction_date = {{Start date and age|2009|03|23}}
| partition_id = {{unbulleted list
| MBR: 0x83: Linux native filesystem
| GPT: 0FC63DAF-8483-4772-8E79-3D69D8477DE4: Linux native filesystem{{Cite web|url=https://wiki.archlinux.org/index.php/GPT_fdisk|title = GPT fdisk - ArchWiki}}
}}
| directory_struct = B-tree
| file_struct = Extents
| bad_blocks_struct = None recorded
| max_filename_size = 255 ASCII characters (fewer for multibyte character encodings such as Unicode)
| max_files_no = 2<sup>64</sup>{{Efn|name="maximum-files"}}
| max_volume_size = 16 EiB{{Efn|name="kernel-limits"}}
| max_file_size = 16 EiB{{Efn|name="kernel-limits"}}
| filename_character_set = All except '/' and NUL ('\0')
| dates_recorded = Creation (otime),{{cite web |date=26 July 2010 |url=https://lwn.net/Articles/397442/ |title=File Creation Times |access-date=15 August 2015 |first=Jonathan |last=Corbet |website=LWN.net }} modification (mtime), attribute modification (ctime), and access (atime)
| date_range = 64-bit signed int offset from 1970-01-01T00:00:00Z{{cite web
| url = https://btrfs.wiki.kernel.org/index.php/On-disk_Format#Basic_Structures
| title = On-disk Format - btrfs Wiki
| website = btrfs.wiki.kernel.org}}
| date_resolution = Nanosecond
| forks_streams =
| attributes = POSIX and extended attributes
| file_system_permissions = Unix permissions, POSIX ACLs
| compression = Yes (zlib, LZO{{cite web
| url = https://btrfs.wiki.kernel.org
| title = btrfs Wiki
| work = kernel.org
| access-date = 19 April 2015}} and (since 4.14) ZSTD{{cite web
| url = https://kernelnewbies.org/Linux_4.14
| title = Linux_4.14 - Linux Kernel Newbies
| website = kernelnewbies.org}})
| single_instance_storage = Yes{{cite web
| url = https://btrfs.readthedocs.io/en/latest/Deduplication.html
| title = Deduplication
| work = Btrfs.ReadTheDocs.io
| access-date = 19 April 2015}}
| copy_on_write = Yes
| OS = Linux, Windows,{{cite web
| url = https://github.com/maharmstone/btrfs
| title = Windows Driver on GitHub.com
| website = GitHub
| access-date = 10 January 2023}} ReactOS{{cite web
| url = https://reactos.org/project-news/reactos-041-released
| title = ReactOS 0.4.1 Released
| work = reactos.org
| access-date = 11 August 2016}}
| website = {{Official URL}}
}}
Btrfs (pronounced as "better F S", "butter F S",{{Cite web |url=http://streaming.oracle.com/ebn/podcasts/media/20209545_Oracle-Linux-7.mp4 |title=Oracle Linux 7 Q&A with Wim Coekaerts |website=Oracle |access-date=6 February 2016 |archive-date=18 August 2016 |archive-url=https://web.archive.org/web/20160818163705/http://streaming.oracle.com/ebn/podcasts/media/20209545_Oracle-Linux-7.mp4 | time = 1m 15s |url-status=dead }}{{cite video |first=Valerie |last=Henson |title=Chunkfs: Fast File System Check and Repair |date=31 January 2008 |url=http://mirror.linux.org.au/pub/linux.conf.au/2008/Thu/mel8-262.ogg |time=18m 49s |quote=It's called Butter FS or B-tree FS, but all the cool kids say Butter FS |location=Melbourne, Australia |access-date=5 February 2008}} "b-tree F S", or "B.T.R.F.S.") is a computer storage format that combines a file system based on the copy-on-write (COW) principle with a logical volume manager (distinct from Linux's LVM), developed together. It was created by Chris Mason in 2007{{cite web|first=Jim |last=Salter|access-date=11 June 2023|title=Examining btrfs, Linux's perpetually half-finished filesystem|url=https://arstechnica.com/gadgets/2021/09/examining-btrfs-linuxs-perpetually-half-finished-filesystem/|date=24 September 2021|website=Ars Technica|quote=Chris Mason is the founding developer of btrfs, which he began working on in 2007 while working at Oracle. This leads many people to believe that btrfs is an Oracle project—it is not. The project belonged to Mason, not to his employer, and it remains a community project unencumbered by corporate ownership to this day.}} for use in Linux, and since November 2013, the file system's on-disk format has been declared stable in the Linux kernel.{{cite web |title=Linux kernel commit changing stability status in fs/btrfs/Kconfig |url=https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=4204617d142c0887e45fda2562cb5c58097b918e |access-date=8 February 2019 }}
Btrfs is intended to address the lack of pooling, snapshots, integrity checking, data scrubbing, and integral multi-device spanning in Linux file systems. Mason, the principal Btrfs author, stated that its goal was "to let [Linux] scale for the storage that will be available. Scaling is not just about addressing the storage but also means being able to administer and to manage it with a clean interface that lets people see what's being used and makes it more reliable".{{cite web |title=A Better File System for Linux? |first=Sean Michael |last=Kerner |date=30 October 2008 |access-date=27 August 2020 |website=InternetNews.com |url=http://www.internetnews.com/dev-news/article.php/3781676/A+Better+File+System+for+Linux.htm |archive-url=https://web.archive.org/web/20110408185904/http://www.internetnews.com/dev-news/article.php/3781676/A%20Better%20File%20System%20for%20Linux.htm |archive-date=8 April 2011 |url-status=live }}
History
The core data structure of Btrfs{{mdashb}}the copy-on-write B-tree{{mdashb}}was originally proposed by IBM researcher Ohad Rodeh at a USENIX conference in 2007. Mason, an engineer working on ReiserFS for SUSE at the time, joined Oracle later that year and began work on a new file system based on these B-trees.
In 2008, the principal developer of the ext3 and ext4 file systems, Theodore Ts'o, stated that although ext4 has improved features, it is not a major advance; it uses old technology and is a stop-gap. Ts'o said that Btrfs is the better direction because "it offers improvements in scalability, reliability, and ease of management".{{cite web |last=Paul |first=Ryan |date=13 April 2009 |url=https://arstechnica.com/open-source/news/2009/04/linux-collaboration-summit-the-kernel-panel.ars |title=Panelists Ponder the Kernel at Linux Collaboration Summit |access-date=22 August 2009 |website=Ars Technica|archive-url=https://web.archive.org/web/20120617204105/http://arstechnica.com/information-technology/2009/04/linux-collaboration-summit-the-kernel-panel/ |archive-date=17 June 2012 |url-status=dead}} Btrfs also has "a number of the same design ideas that reiser3/4 had".{{cite mailing list | title = Re: reiser4 for 2.6.27-rc1 | first = Theodore |last=Ts'o | url = https://lkml.org/lkml/2008/8/1/217 | date = 1 August 2008 | access-date = 31 December 2010 | mailing-list = linux-kernel }}
Btrfs 1.0, with finalized on-disk format, was originally slated for a late-2008 release,{{cite web | url = http://btrfs.wiki.kernel.org/index.php/Development_timeline |url-status= dead | title = Development timeline | work= Btrfs wiki | date = 11 December 2008 | access-date = 5 November 2011 | archive-date= 20 December 2008 |archive-url= https://web.archive.org/web/20081220083235/http://btrfs.wiki.kernel.org/index.php/Development_timeline }} and was finally accepted into the Linux kernel mainline in 2009.{{cite news | url = http://www.linux-magazine.com/Online/News/Kernel-2.6.29-Corbet-Says-Btrfs-Next-Generation-Filesystem | title = Kernel 2.6.29: Corbet Says Btrfs Next Generation Filesystem |first= Britta |last=Wuelfing |work= Linux Magazine | date = 12 January 2009 | access-date= 5 November 2011 }} Several Linux distributions began offering Btrfs as an experimental choice of root file system during installation.{{cite web |title=Red Hat Enterprise Linux 6 documentation: Technology Previews |url=http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Technical_Notes/storage.html#id4452791 |access-date=21 January 2011 |archive-url=https://web.archive.org/web/20110528160211/http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Technical_Notes/storage.html |archive-date=28 May 2011 }}{{cite web |title=Fedora Weekly News Issue 276 |url=http://fedoraproject.org/wiki/FWN/LatestIssue#What.27s_new_in_Fedora_15_.28Lovelock.29.3F |date=25 May 2011}}{{cite press release |url=http://www.debian.org/News/2011/20110205a.en.html |title=Debian 6.0 "Squeeze" released |date=6 February 2011 |publisher=Debian |quote=Support has also been added for the ext4 and Btrfs filesystems... |access-date=8 February 2011 }}
In July 2011, Btrfs automatic defragmentation and scrubbing features were merged into version 3.0 of the Linux kernel mainline. Besides Mason at Oracle, Miao Xie at Fujitsu contributed performance improvements.{{cite news |title=Kernel Log: Coming in 3.0 (Part 2) - Filesystems |first=Thorsten |last=Leemhuis |work=The H Open |date=21 June 2011 |url=http://www.h-online.com/open/features/Kernel-Log-Coming-in-3-0-Part-2-Filesystems-1263681.html |access-date=8 November 2011 }} In June 2012, Mason left Oracle for Fusion-io, which he left a year later with Josef Bacik to join Facebook. While at both companies, Mason continued his work on Btrfs.{{cite web |url=http://www.itwire.com/business-it-news/open-source/62417-faecbook-lures-top-btrfs-hackers |title=iTWire |first=Sam |last=Varghese |website=ITWire.com |access-date=19 April 2015 }}
In 2012, two Linux distributions moved Btrfs from experimental to production or supported status: Oracle Linux in March,{{cite web |url=https://blogs.oracle.com/linux/unbreakable-enterprise-kernel-release-2-has-been-released |title=Unbreakable Enterprise Kernel Release 2 has been released |access-date=8 May 2019 }} followed by SUSE Linux Enterprise in August.{{cite web |url=http://www.novell.com/linux/releasenotes/x86_64/SUSE-SLES/11-SP2/#fate-306585 |title=SLES 11 SP2 Release Notes |date=21 August 2012 |access-date=29 August 2012 }}
In 2015, Btrfs was adopted as the default filesystem for SUSE Linux Enterprise Server (SLE) 12.{{cite web|url=https://www.suse.com/releasenotes/x86_64/SUSE-SLES/12/#fate-317221|title=SUSE Linux Enterprise Server 12 Release Notes|date=5 November 2015|access-date=20 January 2016}}
In August 2017, Red Hat announced in the release notes for Red Hat Enterprise Linux (RHEL) 7.4 that it no longer planned to move Btrfs to fully supported status (it had been included as a "technology preview" since the RHEL 6 beta), noting that it would remain available in the RHEL 7 release series.{{cite web | url=https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/7.4_Release_Notes/chap-Red_Hat_Enterprise_Linux-7.4_Release_Notes-Deprecated_Functionality.html | title=Red Hat Enterprise Linux 7.4 Release Notes, Chapter 53: Deprecated Functionality | date=1 August 2017 | access-date=15 August 2017 | archive-url=https://web.archive.org/web/20170808013554/https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/7.4_Release_Notes/chap-Red_Hat_Enterprise_Linux-7.4_Release_Notes-Deprecated_Functionality.html|archive-date=8 August 2017|url-status=dead}} Btrfs was removed from RHEL 8 in May 2019.{{cite web |url=https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/considerations_in_adopting_rhel_8/file-systems-and-storage_considerations-in-adopting-rhel-8#btrfs-has-been-removed_file-systems-and-storage |title=Considerations in Adopting RHEL 8 |access-date=9 May 2019 |work=Product Documentation for Red Hat Enterprise Linux 8 |publisher=Red Hat }} RHEL moved from ext4 in RHEL 6 to XFS in RHEL 7.{{cite web |url=https://access.redhat.com/articles/3129891 |access-date=3 January 2022 |title=How to Choose Your Red Hat Enterprise Linux File System |date=4 September 2020 }}
In 2020, Btrfs was selected as the default file system for Fedora 33 for desktop variants.{{Cite web |date=24 August 2020 |title=Btrfs Coming to Fedora 33 |url=https://fedoramagazine.org/btrfs-coming-to-fedora-33/ |access-date=25 August 2020 |website=Fedora Magazine }}
Features
=List of features=
==Implemented==
As of version 6.0 of the Linux kernel, Btrfs implements the following features:{{cite web |title = Btrfs Wiki: Changelog |date=29 May 2019 |access-date=27 November 2013 |url=https://btrfs.wiki.kernel.org/index.php/Changelog |website=btrfs.wiki.kernel.org}}{{Cite web |title=Status — BTRFS documentation |url=https://btrfs.readthedocs.io/en/latest/Status.html |access-date=2025-01-12 |website=btrfs.readthedocs.io}}
- Mostly self-healing in some configurations due to the nature of copy-on-write
- Online defragmentation and an autodefrag mount option
- Online volume growth and shrinking
- Online block device addition and removal
- Online balancing (movement of objects between block devices to balance load)
- Offline filesystem check{{cite web |url=https://btrfs.readthedocs.io/en/latest/btrfs-check.html |title=Manpage btrfs-check |website=Btrfs.ReadTheDocs.io }}
- Online data scrubbing for finding errors and automatically fixing them for files with redundant copies
- RAID 0, RAID 1, and RAID 10{{cite web
| url = https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices
| title = Using Btrfs with Multiple Devices
| date = 7 November 2013
| access-date = 20 November 2013
| website = kernel.org}}
- Subvolumes (one or more separately mountable filesystem roots within each disk partition)
- Transparent compression via zlib, LZO and (since 4.14) ZSTD, configurable per file or volume{{cite web
| url = https://btrfs.wiki.kernel.org/index.php/Compression
| title = Compression
| date = 25 June 2013
| access-date = 1 April 2014
| website = kernel.org}}{{cite web
| url = https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=63541927c8d11d2686778b1e8ec71c14b4fd53e4
| title = Btrfs: add support for inode properties
| date = 28 January 2014
| access-date = 1 April 2014
| website = kernel.org}}
- Atomic writable (via copy-on-write) or read-only{{cite web | url = https://lwn.net/Articles/417617/ | title= btrfs: Readonly snapshots | access-date = 12 December 2011 }} snapshots of subvolumes
- File cloning (reflink, copy-on-write) via cp --reflink <source file> <destination file>{{cite web | url = https://blogs.oracle.com/otn/save-disk-space-on-linux-by-cloning-files-on-btrfs-and-ocfs2 | title= Save disk space on Linux by cloning files on Btrfs and OCFS2 | access-date = 1 August 2017 }}
- Checksums on data and metadata (CRC-32C{{cite web | url = http://btrfs.wiki.kernel.org/index.php/FAQ#What_checksum_function_does_Btrfs_use.3F | work = Btrfs wiki | title = Wiki FAQ: What checksum function does Btrfs use? | access-date = 15 June 2009 }}). New hash functions have been implemented since 5.5:{{cite web | url = https://kdave.github.io/btrfs-hilights-5.5-new-hashes/ | title = Btrfs hilights in 5.5: new hashes | access-date = 29 August 2020 }} xxHash, SHA256, BLAKE2B.
- In-place conversion from ext3/4 to Btrfs (with rollback). This feature regressed around btrfs-progs version 4.0 and was rewritten from scratch in 4.6.{{cite web | url = https://www.spinics.net/lists/linux-btrfs/msg56040.html | title = Btrfs progs release 4.6 | access-date = 1 August 2017 }}
- Union mounting of read-only storage, known as file system seeding (read-only storage used as a copy-on-write backing for a writable Btrfs){{cite web | title = Btrfs changelog | first = Chris | last = Mason | date = 12 January 2009 | access-date = 12 February 2012 | url = http://btrfs.ipv5.de/index.php?title=Changelog#Seed_Device_support | archive-url = https://web.archive.org/web/20120229050222/http://btrfs.ipv5.de/index.php?title=Changelog#Seed_Device_support | archive-date = 29 February 2012 | url-status = dead }}
- Block discard (reclaims space on some virtualized setups and improves wear leveling on SSDs with TRIM)
- Send/receive (saving diffs between snapshots to a binary stream)
- Incremental backup{{cite web|title = Btrfs Wiki: Incremental Backup|date = 27 May 2013|access-date = 27 November 2013|url = https://btrfs.wiki.kernel.org/index.php/Incremental_Backup}}
- Out-of-band data deduplication (requires userspace tools)
- Ability to handle swap files and swap partitions
==Implemented but not recommended for production use==
- Hierarchical per-subvolume quotas
- RAID 5, RAID 6 (fail to guard against write holes){{cite web
| url = https://btrfs.readthedocs.io/en/latest/btrfs-man5.html#raid56-status-and-recommended-practices
| title = RAID 5/6
| date = 16 July 2016
| access-date =7 January 2025
|website=Btrfs.ReadTheDocs.io }}{{Cite web
|title=How to use btrfs raid5 successfully(ish)
|url=https://lore.kernel.org/linux-btrfs/20200627032414.GX10769@hungrycats.org/ |access-date=26 June 2022 |first=Zygo |last=Blaxell
|website=lore.kernel.org}}{{Cite web
|title=Current bugs with operational impact on btrfs raid5
| url=https://lore.kernel.org/linux-btrfs/20200627030614.GW10769@hungrycats.org/
|first=Zygo |last=Blaxell
|access-date=26 June 2022 |website=lore.kernel.org }}
=Cloning=
Btrfs provides a clone operation that atomically creates a copy-on-write snapshot of a file. Such cloned files are sometimes referred to as reflinks, in light of the proposed associated Linux kernel system call.{{cite web
| url = https://lwn.net/Articles/331808/
| title = The two sides of reflink()
| date = 5 May 2009
| access-date = 17 October 2013
| first = Jonathan | last = Corbet
|website=LWN.net }}
By cloning, the file system does not create a new link pointing to an existing inode; instead, it creates a new inode that initially shares the same disk blocks with the original file. As a result, cloning works only within the boundaries of the same Btrfs file system, but since version 3.6 of the Linux kernel it may cross the boundaries of subvolumes under certain circumstances.{{cite web
| url = https://github.com/torvalds/linux/commit/362a20c5e27614739c46707d1c5f55c214d164ce
| title = btrfs: allow cross-subvolume file clone
| access-date = 4 November 2013
| website = github.com}} The actual data blocks are not duplicated; at the same time, due to the copy-on-write (CoW) nature of Btrfs, modifications to any of the cloned files are not visible in the original file and vice versa.
Cloning should not be confused with hard links, which are directory entries that associate multiple file names with a single file. While hard links can be taken as different names for the same file, cloning in Btrfs provides independent files that initially share all their disk blocks.{{cite web
| url = http://www.pixelbeat.org/docs/unix_links.html
| title = Symlinks reference names, hardlinks reference meta-data and reflinks reference data
| date = 27 October 2010
| access-date = 17 October 2013
| website = pixelbeat.org}}
Support for this Btrfs feature was added in version 7.5 of the GNU coreutils, via the --reflink option to the cp command.{{cite web
| url = http://git.savannah.gnu.org/gitweb/?p=coreutils.git;a=blob;f=NEWS;h=601d73251c49ce36b39f9838aa818c740cf3a10a;hb=af1996dde2d0089117a9e5e7aa543c6e55474b77
| title = GNU coreutils NEWS: Noteworthy changes in release 7.5
| first = Jim
| last = Meyering
| date = 20 August 2009
| access-date = 30 August 2009
| website = savannah.gnu.org}}{{cite web
| url = http://git.savannah.gnu.org/gitweb/?p=coreutils.git;a=commit;h=a1d7469835371ded0ad8e3496bc5a5bebf94ccef
| title = cp: accept the --reflink option
| first = Giuseppe
| last = Scrivano
| date = 1 August 2009
| access-date = 2 November 2009
| website = savannah.gnu.org}}
In addition to data cloning ({{tt|FICLONE}}), Btrfs also supports out-of-band deduplication via {{tt|FIDEDUPERANGE}}. This functionality allows two files with (even partially) identical data to share storage.{{man|2|ioctl_fideduperange|Linux}}
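The clone operation is exposed to user space as an ioctl. The following Python sketch (an illustration, not part of btrfs-progs) derives the {{tt|FICLONE}} request number from Linux's _IOW macro layout and issues it; the call only succeeds when both files reside on the same reflink-capable filesystem such as Btrfs.

```python
import fcntl

# FICLONE is _IOW(0x94, 9, int); the helper below mirrors Linux's _IOW
# bit layout (direction, size, type, number) on common architectures.
def _IOW(ioc_type: int, nr: int, size: int) -> int:
    IOC_WRITE = 1
    return (IOC_WRITE << 30) | (size << 16) | (ioc_type << 8) | nr

FICLONE = _IOW(0x94, 9, 4)  # 0x40049409

def clone_file(src_path: str, dst_path: str) -> None:
    """Make dst a reflink clone of src, sharing its disk blocks until modified.

    Fails (e.g. EOPNOTSUPP or EXDEV) unless both paths are on the same
    reflink-capable filesystem such as Btrfs.
    """
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        fcntl.ioctl(dst.fileno(), FICLONE, src.fileno())
```

cp --reflink performs the same ioctl internally, which is why it degrades to an ordinary copy (or fails, with --reflink=always) on filesystems without cloning support.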
=Subvolumes and snapshots=
A Btrfs subvolume can be thought of as a separate POSIX file namespace, mountable separately by passing subvol or subvolid options to the {{man|8|mount|man.cx||inline}} utility. It can also be accessed by mounting the top-level subvolume, in which case subvolumes are visible and accessible as its subdirectories.
Subvolumes can be created at any place within the file system hierarchy, and they can also be nested. Nested subvolumes appear as subdirectories within their parent subvolumes, similarly to the way a top-level subvolume presents its subvolumes as subdirectories. Deleting a subvolume is not possible until all subvolumes below it in the nesting hierarchy are deleted; as a result, top-level subvolumes cannot be deleted.
Any Btrfs file system always has a default subvolume, which is initially set to be the top-level subvolume, and is mounted by default if no subvolume selection option is passed to mount. The default subvolume can be changed as required.
A Btrfs snapshot is a subvolume that shares its data (and metadata) with some other subvolume, using Btrfs' copy-on-write capabilities, and modifications to a snapshot are not visible in the original subvolume. Once a writable snapshot is made, it can be treated as an alternate version of the original file system. For example, to roll back to a snapshot, a modified original subvolume needs to be unmounted and the snapshot needs to be mounted in its place. At that point, the original subvolume may also be deleted.
The copy-on-write (CoW) nature of Btrfs means that snapshots are quickly created, while initially consuming very little disk space. Since a snapshot is a subvolume, creating nested snapshots is also possible. Taking snapshots of a subvolume is not a recursive process; thus, if a snapshot of a subvolume is created, every subvolume or snapshot that the subvolume already contains is mapped to an empty directory of the same name inside the snapshot.
Taking snapshots of a directory is not possible, as only subvolumes can have snapshots. However, there is a workaround that involves reflinks spread across subvolumes: a new subvolume is created, containing cross-subvolume reflinks to the content of the targeted directory. Having that available, a snapshot of this new volume can be created.
A subvolume in Btrfs is quite different from a traditional Logical Volume Manager (LVM) logical volume. With LVM, a logical volume is a separate block device, while a Btrfs subvolume is not and it cannot be treated or used that way. Making dd or LVM snapshots of btrfs leads to data loss if either the original or the copy is mounted while both are on the same computer.{{Cite web |url=https://btrfs.wiki.kernel.org/index.php/Gotchas#Block-level_copies_of_devices |title=Gotchas - btrfs Wiki |website=btrfs.wiki.kernel.org }}
=Send–receive=
Given any pair of subvolumes (or snapshots), Btrfs can generate a binary diff between them (by using the btrfs send command) that can be replayed later (by using btrfs receive), possibly on a different Btrfs file system. The send–receive feature effectively creates (and applies) a set of data modifications required for converting one subvolume into another.
The send/receive feature can be used with regularly scheduled snapshots for implementing a simple form of file system replication, or for the purpose of performing incremental backups.
=Quota groups=
A quota group (or qgroup) imposes an upper limit to the space a subvolume or snapshot may consume. A new snapshot initially consumes no quota because its data is shared with its parent, but thereafter incurs a charge for new files and copy-on-write operations on existing files. When quotas are active, a quota group is automatically created with each new subvolume or snapshot. These initial quota groups are building blocks which can be grouped (with the btrfs qgroup command) into hierarchies to implement quota pools.
Quota groups only apply to subvolumes and snapshots, while having quotas enforced on individual subdirectories, users, or user groups is not possible. However, workarounds are possible by using different subvolumes for all users or user groups that require a quota to be enforced.
=In-place conversion from ext2/3/4 and ReiserFS=
As a result of having very little metadata anchored in fixed locations, Btrfs can warp to fit unusual spatial layouts of the backend storage devices. The btrfs-convert tool exploits this ability to do an in-place conversion of an ext2/3/4 or ReiserFS file system, by nesting the equivalent Btrfs metadata in its unallocated space—while preserving an unmodified copy of the original file system.
The conversion involves creating a copy of the whole ext2/3/4 metadata, while the Btrfs files simply point to the same blocks used by the ext2/3/4 files. This makes the bulk of the blocks shared between the two filesystems before the conversion becomes permanent. Thanks to the copy-on-write nature of Btrfs, the original versions of the file data blocks are preserved during all file modifications. Until the conversion becomes permanent, only the blocks that were marked as free in ext2/3/4 are used to hold new Btrfs modifications, meaning that the conversion can be undone at any time (although doing so will erase any changes made after the conversion to Btrfs).
All converted files are available and writable in the default subvolume of the Btrfs. A sparse file holding all of the references to the original ext2/3/4 filesystem is created in a separate subvolume, which is mountable on its own as a read-only disk image, allowing both original and converted file systems to be accessed at the same time. Deleting this sparse file frees up the space and makes the conversion permanent.
In 4.x versions of the mainline Linux kernel, the in-place ext3/4 conversion was considered untested and rarely used. However, the feature was rewritten from scratch in 2016 for btrfs-progs 4.6 and has been considered stable since then.
In-place conversion from ReiserFS was introduced in September 2017 with kernel 4.13.{{cite web | url = https://btrfs.readthedocs.io/en/latest/btrfs-convert.html | title = btrfs-convert(8) — BTRFS Documentation |website=Btrfs.ReadTheDocs.io |access-date=16 October 2022 }}
=Union mounting / seed devices=
When creating a new Btrfs, an existing Btrfs can be used as a read-only "seed" file system.{{cite web | url=https://btrfs.wiki.kernel.org/index.php/Seed-device | title=Seed device | access-date=1 August 2017 | archive-date=12 June 2017 | archive-url=https://web.archive.org/web/20170612105214/https://btrfs.wiki.kernel.org/index.php/Seed-device | url-status=dead }} The new file system will then act as a copy-on-write overlay on the seed, as a form of union mounting. The seed can be later detached from the Btrfs, at which point the rebalancer will simply copy over any seed data still referenced by the new file system before detaching. Mason has suggested this may be useful for a Live CD installer, which might boot from a read-only Btrfs seed on an optical disc, rebalance itself to the target partition on the install disk in the background while the user continues to work, then eject the disc to complete the installation without rebooting.
=Encryption=
In a 2009 interview, Mason stated that support for encryption was planned for Btrfs. In the meantime, a workaround for combining encryption with Btrfs is to use a full-disk encryption mechanism such as dm-crypt / LUKS on the underlying devices and to create the Btrfs filesystem on top of that layer.
{{As of|2020|post=,}} the developers were working to add keyed hash like HMAC (SHA256).{{cite web |last=Sterba |first=David |title=authenticated file systems using HMAC(SHA256) |url=https://lore.kernel.org/linux-btrfs/3ca669b7-7447-5793-f231-32d5417bd8ee@suse.com/T/#m949c14afbe4485faf61bd6a568abfe21163bf5bd |website=Lore.Kernel.org |access-date=25 April 2020 }}
=Checking and recovery=
Unix systems traditionally rely on "fsck" programs to check and repair filesystems. This functionality is implemented via the btrfs check program. Since version 4.0 this functionality has been deemed relatively stable. However, as of December 2022, the btrfs documentation suggests that its --repair option be used only on the advice of "a developer or an experienced user". As of August 2022, the SLE documentation recommends using a Live CD, performing a backup and only using the repair option as a last resort.{{Cite web |title=How to recover from BTRFS errors {{!}} Support {{!}} SUSE |url=https://www.suse.com/support/kb/doc/?id=000018769 |access-date=28 January 2023 |website=www.suse.com}}
There is another tool, named btrfs-restore, that can be used to recover files from an unmountable filesystem, without modifying the broken filesystem itself (i.e., non-destructively).{{Cite web|url=https://btrfs.wiki.kernel.org/index.php/Restore|title=Restore - btrfs Wiki|website=btrfs.wiki.kernel.org}}{{Cite web |title=btrfs-restore(8) - Linux manual page |url=https://man7.org/linux/man-pages/man8/btrfs-restore.8.html |access-date=28 January 2023 |website=man7.org}}
In normal use, Btrfs is mostly self-healing and can recover from broken root trees at mount time, thanks to making periodic data flushes to permanent storage, by default every 30 seconds. Thus, isolated errors will cause a maximum of 30 seconds of filesystem changes to be lost at the next mount.{{cite web|url=https://btrfs.wiki.kernel.org/index.php/Problem_FAQ |title=Problem FAQ - btrfs Wiki |website=kernel.org |date=31 July 2013 |access-date=16 January 2014}} This period can be changed by specifying a desired value (in seconds) with the commit mount option.{{cite web
| url = https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=906c176e541f89ed3c04d0e9af1c7cf7b3cc1adb
| title = kernel/git/torvalds/linux.git: Documentation: filesystems: add new btrfs mount options (Linux kernel source tree)
| date = 21 November 2013
| access-date = 6 February 2014
|website=kernel.org }}{{cite web
| url = https://btrfs.readthedocs.io/en/latest/btrfs-man5.html
| title = Mount options - btrfs Wiki
| website =Btrfs.ReadTheDocs.io
| date = 12 November 2013
| access-date = 16 January 2014}}
Design
Ohad Rodeh's original proposal at USENIX 2007 noted that B+ trees, which are widely used as on-disk data structures for databases, could not efficiently allow copy-on-write-based snapshots because their leaf nodes are linked together: if a leaf were copied on write, its siblings and parents would have to be copied as well, as would their siblings and parents, and so on until the entire tree was copied. He suggested instead a modified B-tree (which has no leaf linkage), with a refcount associated with each tree node but stored in an ad hoc free map structure, and certain relaxations to the tree's balancing algorithms to make them copy-on-write friendly. The result would be a data structure suitable for a high-performance object store that could perform copy-on-write snapshots, while maintaining good concurrency.
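The path-copying idea can be illustrated with a toy persistent search tree (a plain binary tree standing in for the B-tree, with Rodeh's reference counting and rebalancing omitted): an update copies only the nodes on the root-to-leaf path, so any old root remains a valid, cheap snapshot and unmodified subtrees stay shared.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Node:
    key: int
    value: str
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def insert(root: Optional[Node], key: int, value: str) -> Node:
    """Return a new root; only nodes on the search path are copied."""
    if root is None:
        return Node(key, value)
    if key < root.key:
        return Node(root.key, root.value, insert(root.left, key, value), root.right)
    if key > root.key:
        return Node(root.key, root.value, root.left, insert(root.right, key, value))
    return Node(key, value, root.left, root.right)  # overwrite in a fresh node

def lookup(root: Optional[Node], key: int) -> Optional[str]:
    while root is not None:
        if key == root.key:
            return root.value
        root = root.left if key < root.key else root.right
    return None
```

Holding on to an old root is all a snapshot is: after `new_root = insert(old_root, ...)`, the subtree not on the modified path is the very same object in both trees.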
At Oracle later that year, Mason began work on a snapshot-capable file system that would use this data structure almost exclusively—not just for metadata and file data, but also recursively to track space allocation of the trees themselves. This allowed all traversal and modifications to be funneled through a single code path, against which features such as copy on write, checksumming and mirroring needed to be implemented only once to benefit the entire file system.
Btrfs is structured as several layers of such trees, all using the same B-tree implementation. The trees store generic items sorted by a 136-bit key. The most significant 64 bits of the key are a unique object id. The middle eight bits are an item type field: its use is hardwired into code as an item filter in tree lookups. Objects can have multiple items of multiple types. The remaining (least significant) 64 bits are used in type-specific ways. Therefore, items for the same object end up adjacent to each other in the tree, grouped by type. By choosing certain key values, objects can further put items of the same type in a particular order.
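The grouping property of the 136-bit key can be sketched as follows. The item-type values used (0x01 for an inode item, 0x54 for a directory item) follow Btrfs's published constants, but the object ids and offsets are made up for illustration.

```python
# A Btrfs tree key packs (objectid: 64 bits, item type: 8 bits, offset: 64 bits)
# most-significant-first, so an ordinary numeric sort groups items first by
# object, then by type.
def make_key(objectid: int, item_type: int, offset: int) -> int:
    assert 0 <= objectid < 2**64 and 0 <= item_type < 2**8 and 0 <= offset < 2**64
    return (objectid << 72) | (item_type << 64) | offset

keys = [
    make_key(257, 0x54, 0),   # directory item for object 257
    make_key(256, 0x01, 0),   # inode item for object 256
    make_key(257, 0x01, 0),   # inode item for object 257
    make_key(256, 0x54, 10),  # directory item for object 256
]
keys.sort()  # items for the same object become adjacent, grouped by type
```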
Interior tree nodes are simply flat lists of key-pointer pairs, where the pointer is the logical block number of a child node. Leaf nodes contain item keys packed into the front of the node and item data packed into the end, with the two growing toward each other as the leaf fills up.
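The two-ended leaf layout can be modelled as follows (a toy sketch; the 25-byte figure matches the on-disk item header, but the fixed node header at the start of a real leaf is ignored here):

```python
ITEM_HEADER = 25   # 17-byte key plus 4-byte data offset and 4-byte data size

class Leaf:
    def __init__(self, size=4096):
        self.size = size
        self.items = []        # (key, data_offset, data_len), packed at the front
        self.data_end = size   # item data grows downward from the end

    def free_space(self):
        return self.data_end - len(self.items) * ITEM_HEADER

    def insert(self, key, data):
        if len(data) + ITEM_HEADER > self.free_space():
            return False       # leaf full; a real tree would split it
        self.data_end -= len(data)
        self.items.append((key, self.data_end, len(data)))
        return True
```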
=File system tree=
Within each directory, directory entries appear as directory items, whose least significant key bits are a CRC32C hash of their filename. Their data is a location key: the key of the inode item they point to. Directory items can thus act together as an index for path-to-inode lookups, but they are not used for iteration, because they are sorted by their hash, effectively permuting them randomly. User applications iterating over and opening files in a large directory would therefore generate many more disk seeks between non-adjacent files—a notable performance drain in other file systems with hash-ordered directories such as ReiserFS,{{cite web|url = http://lkml.indiana.edu/hypermail/linux/kernel/0112.0/2019.html|title = Re: Ext2 directory index: ALS paper and benchmarks|work = ReiserFS developers mailing list|first = Hans|last = Reiser|date = 7 December 2001|access-date = 28 August 2009}} ext3 (with Htree-indexes enabled{{cite web |url = http://oss.oracle.com/~mason/acp/ |first = Chris |last = Mason |title = Acp |work = Oracle personal web page |access-date = 5 November 2011 |archive-date = 16 May 2021 |archive-url = https://web.archive.org/web/20210516204043/https://oss.oracle.com/~mason/acp/ |url-status = dead }}) and ext4, all of which have TEA-hashed filenames. To avoid this, each directory entry also has a directory index item, whose key value is set to a per-directory counter that increments with each new directory entry. Iterating over these index items returns entries in roughly the same order as they are stored on disk.
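The two orderings can be demonstrated with a bitwise CRC-32C (an illustrative pure-Python version; the kernel uses a table-driven or hardware-accelerated implementation):

```python
def crc32c(data: bytes) -> int:
    """CRC-32C (Castagnoli polynomial, reflected), computed bit by bit."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

names = ["alpha", "beta", "gamma"]                           # creation order
dir_items = sorted(names, key=lambda n: crc32c(n.encode()))  # hash order (lookups)
dir_index_items = list(names)                                # counter order (iteration)
```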
Files with hard links in multiple directories have multiple reference items, one for each parent directory. Files with multiple hard links in the same directory pack all of the links' filenames into the same reference item. This was a design flaw that limited the number of same-directory hard links to however many could fit in a single tree block. (With the default block size of 4 KiB, an average filename length of 8 bytes, and a per-filename header of 4 bytes, this came to fewer than 350.) Applications that made heavy use of multiple same-directory hard links, such as git, GNUS, GMame and BackupPC, were observed to fail at this limit. The limit was eventually removed{{cite web|title = btrfs: extended inode refs|first = Mark|last = Fasheh|date = 9 October 2012|access-date = 7 November 2012|url = https://git.kernel.org/?p=linux/kernel/git/mason/linux-btrfs.git;a=commit;h=f186373fef005cee948a4a39e6a14c2e5f517298|archive-url = https://archive.today/20130415062145/http://git.kernel.org/?p=linux/kernel/git/mason/linux-btrfs.git;a=commit;h=f186373fef005cee948a4a39e6a14c2e5f517298|url-status = dead|archive-date = 15 April 2013}} (and as of October 2012 had been merged{{cite web|title = Pull btrfs update from Chris Mason|first = Linus|last = Torvalds|date = 10 October 2012|access-date = 7 November 2012|url = https://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=72055425e53540d9d0e59a57ac8c9b8ce77b62d5|archive-url = https://archive.today/20130415043758/http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=72055425e53540d9d0e59a57ac8c9b8ce77b62d5|url-status = dead|archive-date = 15 April 2013|work = git.kernel.org}} pending release in Linux 3.7) by introducing spillover extended reference items to hold hard-link filenames which do not otherwise fit.
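The quoted bound follows from simple arithmetic on the figures above:

```python
block_size = 4096       # default tree block, bytes
avg_filename = 8        # assumed average filename length, bytes
per_name_header = 4     # per-filename header, bytes

max_same_dir_links = block_size // (avg_filename + per_name_header)
# 4096 // 12 == 341, i.e. "fewer than 350"
```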
==Extents==
{{More citations needed section|date=January 2017}}
File data is kept outside the tree in extents, which are contiguous runs of disk data blocks. Extent blocks default to 4 KiB in size, do not have headers and contain only (possibly compressed) file data. In compressed extents, individual blocks are not compressed separately; rather, the compression stream spans the entire extent.
Files have extent data items to track the extents which hold their contents. The item's key value is the starting byte offset of the extent. This makes for efficient seeks in large files with many extents, because the correct extent for any given file offset can be computed with just one tree lookup.
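This lookup can be sketched with a sorted list of extent start offsets standing in for the B-tree (illustrative only):

```python
import bisect

def find_extent(extent_starts, file_offset):
    """Return the start offset of the extent covering `file_offset`,
    given the sorted start offsets of a file's extent data items."""
    i = bisect.bisect_right(extent_starts, file_offset) - 1
    return extent_starts[i]
```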
Snapshots and cloned files share extents. When a small part of such a large extent is overwritten, the resulting copy-on-write may create three new extents: a small one containing the overwritten data, and two large ones with unmodified data on either side of the overwrite. To avoid rewriting unmodified data, the copy-on-write may instead create bookend extents: extents which are simply slices of existing extents. Extent data items allow for this by including an offset into the extent they track; items for bookends are those with non-zero offsets.
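A bookend overwrite can be sketched as follows (an illustrative model; each resulting item is simplified to a tuple of extent id, offset into that extent, and length):

```python
def overwrite_shared(extent_len, off, length, new_id, old_id):
    """Overwrite [off, off+length) of a shared extent of `extent_len` bytes.
    Unmodified data stays in the old extent, referenced by bookend items
    with non-zero offsets; only the new data occupies a new extent."""
    items = []
    if off > 0:
        items.append((old_id, 0, off))                 # left bookend
    items.append((new_id, 0, length))                  # newly written extent
    if off + length < extent_len:
        items.append((old_id, off + length, extent_len - off - length))
    return items
```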
=Extent allocation tree=
{{More citations needed section|date=January 2017}}
The extent allocation tree acts as an allocation map for the file system. Unlike other trees, items in this tree do not have object ids. They represent regions of space: their key values hold the starting offsets and lengths of the regions they represent.
The file system divides its allocated space into block groups, which are variable-sized allocation regions that alternate between preferring metadata extents (tree nodes) and data extents (file contents). The default ratio of data to metadata block groups is 1:2. They are intended to use concepts of the Orlov block allocator to allocate related files together and to resist fragmentation by leaving free space between groups. (Ext3 block groups, however, have fixed locations computed from the size of the file system, whereas those in Btrfs are dynamic and created as needed.) Each block group is associated with a block group item. Inode items in the file system tree include a reference to their current block group.
Extent items contain a back-reference to the tree node or file occupying that extent. There may be multiple back-references if the extent is shared between snapshots. If there are too many back-references to fit in the item, they spill out into individual extent data reference items. Tree nodes, in turn, have back-references to their containing trees. This makes it possible to find which extents or tree nodes are in any region of space by doing a B-tree range lookup on a pair of offsets bracketing that region, then following the back-references. For relocating data, this allows an efficient upwards traversal from the relocated blocks to quickly find and fix all downwards references to those blocks, without having to scan the entire file system. This, in turn, allows the file system to efficiently shrink, migrate, and defragment its storage online.
The extent allocation tree, as with all other trees in the file system, is copy-on-write. Writes to the file system may thus cause a cascade whereby changed tree nodes and file data result in new extents being allocated, causing the extent tree itself to change. To avoid creating a feedback loop, extent tree nodes which are still in memory but not yet committed to disk may be updated in place to reflect new copied-on-write extents.
In theory, the extent allocation tree makes a conventional free-space bitmap unnecessary, because the extent allocation tree acts as a B-tree version of a BSP tree. In practice, however, an in-memory red–black tree of page-sized bitmaps is used to speed up allocations. These bitmaps are persisted to disk (starting in Linux 2.6.37, via the space_cache mount option{{cite web |title=Benchmarks of the Btrfs Space Cache Option |access-date=16 November 2012 |date=24 December 2010 |first=Michael |last=Larabel |url=https://www.phoronix.com/scan.php?page=article&item=btrfs_space_cache&num=1 |publisher=Phoronix }}) as special extents that are exempt from checksumming and copy-on-write.
=Checksum tree and scrubbing=
CRC-32C checksums are computed for both data and metadata and stored as checksum items in a checksum tree. There is room for 256 bits of metadata checksums and up to a full node (roughly 4 KB or more) for data checksums. Btrfs has provisions for additional checksum algorithms to be added in future versions of the file system.{{cite web
| url = https://btrfs.wiki.kernel.org/index.php/FAQ#What_checksum_function_does_Btrfs_use.3F
| title = FAQ - btrfs Wiki: What checksum function does Btrfs use?
| access-date = 22 November 2020
| publisher = The btrfs Project
}}
There is one checksum item per contiguous run of allocated blocks, with per-block checksums packed end-to-end into the item data. If there are more checksums than can fit, they spill into another checksum item in a new leaf. If the file system detects a checksum mismatch while reading a block, it first tries to obtain (or create) a good copy of this block from another device{{spaced ndash}} if internal mirroring or RAID techniques are in use.{{cite web |last=Salter |first=Jim |title=Bitrot and Atomic COWs: Inside "Next-Gen" Filesystems |url=https://arstechnica.com/information-technology/2014/01/bitrot-and-atomic-cows-inside-next-gen-filesystems/ |website=Ars Technica |access-date=15 January 2014 |date=15 January 2014}}
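The recovery path can be sketched as follows (illustrative; the function and parameter names are hypothetical, not kernel API):

```python
def read_block(copies, stored_checksum, checksum):
    """Return the first copy of a block whose checksum matches the value
    stored in the checksum tree; `copies` are the mirrored versions."""
    for data in copies:
        if checksum(data) == stored_checksum:
            return data    # a good copy can also be used to rewrite bad ones
    raise IOError("no copy matches the stored checksum")
```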
Btrfs can initiate an online check of the entire file system by triggering a file system scrub job that is performed in the background. The scrub job scans the entire file system for integrity and automatically attempts to report and repair any bad blocks it finds along the way.{{cite web
| url = https://blogs.oracle.com/wim/entry/btrfs_scrub_go_fix_corruptions
| title = Btrfs Scrub – Go Fix Corruptions with Mirror Copies Please!
| date = 28 September 2011
| access-date = 20 September 2013
| first = Wim
| last = Coekaerts
|website=Oracle }}
{{See also|Silent data corruption}}
=Log tree=
An fsync request commits modified data immediately to stable storage. fsync-heavy workloads (like a database or a virtual machine whose running OS fsyncs frequently) could potentially generate a great deal of redundant write I/O by forcing the file system to repeatedly copy-on-write and flush frequently modified parts of trees to storage. To avoid this, a temporary per-subvolume log tree is created to journal fsync-triggered copies on write. Log trees are self-contained, tracking their own extents and keeping their own checksum items. Their items are replayed and deleted at the next full tree commit or (if there was a system crash) at the next remount.
=Chunk and device trees=
{{More citations needed section|date=December 2020}}
Block devices are divided into physical chunks of 1 GiB for data and 256 MiB for metadata.{{cite web |title=Glossary |url=https://btrfs.wiki.kernel.org/index.php/Glossary |website=Btrfs Wiki |access-date=31 July 2021 |archive-date=31 July 2021 |archive-url=https://web.archive.org/web/20210731190751/https://btrfs.wiki.kernel.org/index.php/Glossary |url-status=dead }} Physical chunks across multiple devices can be mirrored or striped together into a single logical chunk. These logical chunks are combined into a single logical address space that the rest of the filesystem uses.
The chunk tree tracks this by storing each device therein as a device item and logical chunks as chunk map items, which provide a forward mapping from logical to physical addresses by storing their offsets in the least significant 64 bits of their key. Chunk map items can be one of several different types:
; single : 1 logical to 1 physical chunk
; dup : 1 logical chunk to 2 physical chunks on 1 block device
; raid0 : N logical chunks to N≥2 physical chunks across N≥2 block devices
; raid1 : 1 logical chunk to 2 physical chunks across 2 out of N≥2 block devices,{{Cite web |url=https://btrfs.wiki.kernel.org/index.php/Manpage/mkfs.btrfs#PROFILES |title=Manpage/mkfs.btrfs |at=Profiles |website=Btrfs Wiki |access-date=31 July 2021 }} in contrast to conventional RAID 1 which has N physical chunks
; raid1c3 : 1 logical chunk to 3 physical chunks out of N≥3 block devices
; raid1c4 : 1 logical chunk to 4 physical chunks out of N≥4 block devices
; raid5 : N (for N≥2) logical chunks to N+1 physical chunks across N+1 block devices, with 1 physical chunk used as parity
; raid6 : N (for N≥2) logical chunks to N+2 physical chunks across N+2 block devices, with 2 physical chunks used as parity
N is the number of block devices still having free space when the chunk is allocated. If N is not large enough for the chosen mirroring/mapping, then the filesystem is effectively out of space.
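Chunk allocation can be sketched for the raid1 case (an illustrative heuristic that picks the two devices with the most free space; device names and the exact selection policy are assumptions, not Btrfs's precise allocator):

```python
def alloc_raid1_chunk(free_space, chunk_size):
    """free_space: device name -> bytes free. Return the two devices that
    receive the mirrored physical chunks, or None if fewer than two devices
    have room (the filesystem is then effectively out of space)."""
    fits = [d for d, f in free_space.items() if f >= chunk_size]
    if len(fits) < 2:
        return None
    chosen = sorted(fits, key=lambda d: free_space[d], reverse=True)[:2]
    for d in chosen:
        free_space[d] -= chunk_size
    return sorted(chosen)
```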
=Relocation trees=
Defragmentation, shrinking, and rebalancing operations require extents to be relocated. However, doing a simple copy-on-write of the relocating extent will break sharing between snapshots and consume disk space. To preserve sharing, an update-and-swap algorithm is used, with a special relocation tree serving as scratch space for affected metadata. The extent to be relocated is first copied to its destination. Then, by following backreferences upward through the affected subvolume's file system tree, metadata pointing to the old extent is progressively updated to point at the new one; any newly updated items are stored in the relocation tree. Once the update is complete, items in the relocation tree are swapped with their counterparts in the affected subvolume, and the relocation tree is discarded.{{cite web |title = BTRFS: The Linux B-tree Filesystem|date = 9 July 2012|first1 = Chris|last1 = Mason|first2 = Ohad|last2 = Rodeh|first3 = Josef|last3 = Bacik |publisher=IBM Research |url=http://domino.watson.ibm.com/library/CyberDig.nsf/papers/6E1C5B6A1B6EDD9885257A38006B6130/$File/rj10501.pdf |archive-url=https://web.archive.org/web/20140423000340/http://domino.watson.ibm.com/library/CyberDig.nsf/papers/6E1C5B6A1B6EDD9885257A38006B6130/$File/rj10501.pdf |archive-date=23 April 2014 }}
=Superblock=
All the file system's trees—including the chunk tree itself—are stored in chunks, creating a potential bootstrapping problem when mounting the file system. To bootstrap into a mount, a list of physical addresses of chunks belonging to the chunk and root trees are stored in the superblock.{{cite web | url = http://btrfs.wiki.kernel.org/index.php/Multiple_Device_Support |url-status= dead |first=Chris |last=Mason | website=Btrfs wiki | access-date = 5 November 2011 | date = 30 April 2008 |title = Multiple device support | archive-date= 20 July 2011 |archive-url= https://web.archive.org/web/20110720220543/https://btrfs.wiki.kernel.org/index.php/Multiple_Device_Support }}
Superblock mirrors are kept at fixed locations:{{cite mailing list|url = http://kerneltrap.org/mailarchive/linux-btrfs/2010/4/20/6884623|title = Re: Restoring BTRFS partition|mailing-list = linux-btrfs|date = 20 April 2010|last = Bartell|first = Sean}} 64 KiB into every block device, with additional copies at 64 MiB, 256 GiB and 1 PiB. When a superblock mirror is updated, its generation number is incremented. At mount time, the copy with the highest generation number is used. All superblock mirrors are updated in tandem, except in SSD mode which alternates updates among mirrors to provide some wear levelling.
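Mirror locations and generation-based selection can be sketched as follows (illustrative, using the offsets given above):

```python
# Fixed superblock mirror offsets: 64 KiB, 64 MiB, 256 GiB, 1 PiB.
SUPERBLOCK_OFFSETS = [64 * 2**10, 64 * 2**20, 256 * 2**30, 2**50]

def pick_superblock(copies):
    """copies: (offset, generation) pairs read from a device; at mount
    time the copy with the highest generation number is used."""
    return max(copies, key=lambda sb: sb[1])
```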
Commercial support
=Supported=
{{Related articles|List of default file systems}}
- Oracle Linux from version 7{{Cite web |url=https://www.phoronix.com/scan.php?page=news_item&px=Btrfs-RAID56-UEK |title=Oracle Now Supports Btrfs RAID5/6 on Their Unbreakable Enterprise Kernel - Phoronix |website=Phoronix.com }}
- SUSE Linux Enterprise Server from version 12{{Cite web |url=https://lwn.net/Articles/731848/ |title=SUSE Reaffirms Support for Btrfs |website=LWN.net }}{{Cite web |url=https://www.suse.com/releasenotes/x86_64/SUSE-SLES/12/ |title=SUSE Linux Enterprise Server 12 Release Notes |website=SUSE.com |access-date=28 February 2021 }}
- Synology DiskStation Manager (DSM) from version 6.0{{Cite web |url=https://global.download.synology.com/download/Document/Software/WhitePaper/Package/CloudStation/All/enu/Synology_Cloud_Station_White_Paper-Based_on_DSM_6.0.pdf |title=Cloud Station White Paper |website=Synology.com |publisher=Synology |quote=Starting from DSM 6.0, data volumes can be formatted as Btrfs |page=11 |archive-date=11 November 2020 |archive-url=https://web.archive.org/web/20201111190843/https://global.download.synology.com/download/Document/Software/WhitePaper/Package/CloudStation/All/enu/Synology_Cloud_Station_White_Paper-Based_on_DSM_6.0.pdf }}
=No longer supported=
- Btrfs was included as a "technology preview" in Red Hat Enterprise Linux 6 and 7; it was removed in RHEL 8 in 2018.{{Cite web |url=https://news.ycombinator.com/item?id=14907771 |title=Btrfs Has Been Deprecated in RHEL |website=News.YCombinator.com }}{{Cite web|url=https://www.phoronix.com/scan.php?page=news_item&px=Red-Hat-Deprecates-Btrfs-Again|title=Red Hat Appears to Be Abandoning Their Btrfs Hopes - Phoronix|website=Phoronix.com}}
See also
- APFS – a copy-on-write file system for macOS, iPadOS, iOS, tvOS and watchOS
- Bcachefs
- Comparison of file systems
- HAMMER – DragonFly BSD's file system that uses B-trees, paired with checksums as a countermeasure for data corruption
- List of file systems
- ReFS – a copy-on-write file system for Windows Server 2012
- ZFS
Notes
{{Notelist|refs=
{{Efn|name="kernel-limits"|This is Btrfs's own on-disk size limit. The limit is reduced to 8 EiB on 64-bit systems and 2 EiB on 32-bit systems due to the Linux kernel's internal limits, unless the kernel's CONFIG_LBD configuration option (available since the 2.6.x kernel series) is enabled to remove these limits.{{cite web
|url = http://users.suse.com/~aj/linux_lfs.html
|title = Large File Support in Linux
|date = 15 February 2005
|access-date = 12 August 2015
|first=Andreas |last=Jaeger
|website = users.suse.com
|archive-date = 23 July 2015
|archive-url = https://web.archive.org/web/20150723102830/http://users.suse.com/~aj/linux_lfs.html
|url-status = dead}}{{cite web
| url = http://kernel.xc.net/html/linux-2.6.29/x86/LBD
| title = Linux kernel configuration help for CONFIG_LBD in 2.6.29 on x86
| access-date = 12 August 2015
| website = kernel.xc.net
| archive-url = https://web.archive.org/web/20150906090823/http://kernel.xc.net/html/linux-2.6.29/x86/LBD
| archive-date = 6 September 2015
| url-status = dead}}}}
{{Efn|name="maximum-files"|Every item in Btrfs has a 64-bit identifier, which means the most files one can have on a Btrfs filesystem is 2<sup>64</sup>.}}
}}
References
{{Reflist|refs=
{{cite web
| url = https://www.suse.com/documentation/sles11/stor_admin/data/sec_filesystems_lfs.html
| title = Suse Documentation: Storage Administration Guide – Large File Support in Linux
| access-date = 12 August 2015
|publisher=SUSE }}
{{cite web |last=McPherson |first=Amanda |date=22 June 2009 |title=A Conversation with Chris Mason on BTRfs: the next generation file system for Linux |url=https://www.linuxfoundation.org/blog/blog/a-conversation-with-chris-mason-on-btrfs |publisher=Linux Foundation |archive-url=https://web.archive.org/web/20120627065427/http://www.linuxfoundation.org/news-media/blogs/browse/2009/06/conversation-chris-mason-btrfs-next-generation-file-system-linux |archive-date=27 June 2012 |url-status=live |access-date=7 January 2025 }}
{{cite web
| url = http://kernelnewbies.org/Linux_3.0#head-3e596e03408e1d32a7cc381d6f54e87feee22ee4
| title = Linux kernel 3.0, Section 1.1. Btrfs: Automatic defragmentation, scrubbing, performance improvements
| date = 21 July 2011
| access-date = 5 April 2016
| website = kernelnewbies.org}}
{{cite web|url = https://lwn.net/Articles/506244/|first = Jonathan|last = Corbet|title = Btrfs send/receive |website=LWN.net |date = 11 July 2012|access-date = 14 November 2012}}
{{cite web |url = http://sensille.com/qgroups.pdf|title = Btrfs Subvolume Quota Groups|first = Arne|last = Jansen|date=2011 |publisher=Strato AG |access-date = 14 November 2012}}
{{cite web | url = https://lwn.net/Articles/342892/ | title = A short history of btrfs |website=LWN.net | first= Valerie |last=Aurora | date = 22 July 2009 | access-date = 5 November 2011 }}
{{cite web
| url = https://btrfs.wiki.kernel.org/index.php/UseCases
| title = UseCases – btrfs documentation
| access-date = 4 November 2013
| website = kernel.org}}
{{cite web
|url = https://blogs.oracle.com/OTNGarage/entry/save_disk_space_on_linux
|title = Save disk space on Linux by cloning files on Btrfs and OCFS2
|date = 31 August 2011
|access-date = 17 October 2013
|first=Lenz |last=Grimmer
|website = oracle.com
|archive-date = 18 October 2013
|archive-url = https://web.archive.org/web/20131018001449/https://blogs.oracle.com/OTNGarage/entry/save_disk_space_on_linux
|url-status = dead
}}
{{cite web
| url = http://docs.oracle.com/cd/E37670_01/E37355/html/ol_use_case3_btrfs.html
| title = 5.6 Creating Subvolumes and Snapshots [needs update]
|date= 2013
| access-date = 31 October 2013
| website = oracle.com}}
{{cite web
| url = https://btrfs.wiki.kernel.org/index.php/SysadminGuide
| title = SysadminGuide – Btrfs documentation
| access-date = 31 October 2013
| website = kernel.org}}
{{cite web
| url = http://docs.oracle.com/cd/E37670_01/E37355/html/ol_sendrecv_btrfs.html
| title = 5.7 Using the Send/Receive Feature
| year = 2013
| access-date = 31 October 2013
| website = oracle.com}}
{{cite web
| url = https://btrfs.wiki.kernel.org/index.php/Conversion_from_Ext3
| title = Conversion from Ext3 (Btrfs documentation)
| first = Chris
| last = Mason
| date = 25 June 2015
| access-date = 22 April 2016
| website = kernel.org}}
{{cite web |url=http://linuxfoundation.ubicast.tv/videos/permalink/123/ |title = Btrfs Filesystem: Status and New Features |publisher=Linux Foundation |date=5 April 2012|access-date=16 November 2012|first=Chris|last=Mason }}{{Dead link|date=November 2018 |bot=InternetArchiveBot |fix-attempted=yes }}
{{Cite conference |last=Rodeh |first=Ohad |title=B-trees, shadowing, and clones |conference=USENIX Linux Storage & Filesystem Workshop |year=2007 |url=https://www.usenix.org/legacy/events/lsf07/tech/rodeh.pdf}} Also {{Cite journal |last=Rodeh |first=Ohad |year=2008 |title=B-trees, shadowing, and clones |journal=ACM Transactions on Storage|volume=3 |issue=4 |pages=1–27 |doi=10.1145/1326542.1326544 |s2cid=207166167 }}
{{cite web
| url = http://www.oracle.com/technetwork/articles/servers-storage-admin/advanced-btrfs-1734952.html
| title = How I Use the Advanced Capabilities of Btrfs
| date = August 2012
| access-date = 20 September 2013
| first1 = Margaret
| last1 = Bierman
| first2 = Lenz
| last2 = Grimmer}}
}}
External links
- {{Official website}}
- {{YouTube|id=hxWuaozpe2I|title=I Can't Believe This is Butter! A tour of btrfs}}{{snd}} a conference presentation by Avi Miller, an Oracle engineer
- [https://lwn.net/Articles/577961/ Btrfs: Working with multiple devices]{{snd}} LWN.net, December 2013, by Jonathan Corbet
- [http://marc.merlins.org/perso/btrfs/ Marc's Linux Btrfs posts]{{snd}} detailed insights into various Btrfs features
- [http://marc.merlins.org/linux/talks/Btrfs-LC2014-JP/Btrfs.pdf Btrfs overview], LinuxCon 2014, by Marc Merlin
- [https://github.com/maharmstone/btrfs WinBtrfs] Btrfs Driver For Windows
{{Filesystem}}
{{Linux kernel}}
{{Linux}}
{{Portal bar|Free and open-source software|Linux}}
Category:Compression file systems
Category:File systems supported by the Linux kernel