Logical Volume Manager (Linux)

{{Short description|Logical volume management system}}

{{Multiple issues|

{{Cleanup rewrite|date=April 2013}}

{{Lead too short|date=July 2015}}

}}

{{Infobox software

| name = LVM

| title = Logical Volume Manager

| logo =

| logo caption =

| logo_size =

| logo_alt =

| screenshot =

| caption =

| screenshot_size =

| screenshot_alt =

| collapsible =

| author = Heinz Mauelshagen{{cite web|url=https://ftp.gwdg.de/pub/linux/misc/lvm/1.0/README|title=LVM README |date=2003-11-17 |access-date=2014-06-25}}

| developer =

| released =

| discontinued =

| latest release version = {{wikidata|property|reference|edit|P348}}

| latest release date = {{Start date and age|{{wikidata|qualifier|P348|P577}}|df=yes}}

| latest preview version =

| latest preview date =

| repo = {{URL|1=https://sourceware.org/git/?p=lvm2.git}}

| status =

| programming language = C

| operating system = Linux, NetBSD

| platform =

| size =

| language =

| language count =

| language footnote =

| genre =

| license = GPLv2

| website = {{URL|https://sourceware.org/lvm2/}}

}}

In Linux, Logical Volume Manager (LVM) is a device mapper framework that provides logical volume management for the Linux kernel. Most modern Linux distributions are LVM-aware to the point of being able to have their root file systems on a logical volume.{{cite web |url=https://www.suse.com/documentation/sles10/book_sle_reference/data/sec_yast2_system_lvm.html |title=7.1.2 LVM Configuration with YaST |publisher=SUSE |date=12 July 2011 |access-date=2015-05-22 |archive-url=https://web.archive.org/web/20150725212932/https://www.suse.com/documentation/sles10/book_sle_reference/data/sec_yast2_system_lvm.html |archive-date=25 July 2015 |url-status=dead }}{{cite web |url=https://help.ubuntu.com/community/UbuntuDesktopLVM |title=HowTo: Set up Ubuntu Desktop with LVM Partitions |publisher=Ubuntu |date=1 June 2014 |access-date=2015-05-22 |archive-url=https://web.archive.org/web/20160304023849/https://help.ubuntu.com/community/UbuntuDesktopLVM |archive-date=4 March 2016 |url-status=dead }}{{cite web|url=https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Installation_Guide/Create_LVM-x86.html|title=9.15.4 Create LVM Logical Volume |publisher=Red Hat |date=8 October 2014 |access-date=2015-05-22}}

Heinz Mauelshagen wrote the original LVM code in 1998, when he was working at Sistina Software, basing its primary design on the HP-UX volume manager.

Uses

LVM is used for the following purposes:

  • Creating single logical volumes from multiple physical volumes or entire hard disks (somewhat similar to RAID 0, but more similar to JBOD), allowing for dynamic volume resizing.
  • Managing large hard disk farms by allowing disks to be added and replaced without downtime or service disruption, in combination with hot swapping.
  • On small systems (like a desktop), instead of having to estimate at installation time how big a partition might need to be, LVM allows filesystems to be easily resized as needed.
  • Performing consistent backups by taking snapshots of the logical volumes.
  • Encrypting multiple physical partitions with one password.

LVM can be considered a thin software layer on top of the hard disks and partitions, which creates an abstraction of continuity and ease-of-use for managing hard drive replacement, repartitioning and backup.
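The workflow described above can be sketched with the standard LVM command-line tools. This is an illustrative sequence only; the device names (/dev/sdb, /dev/sdc) and the volume group and LV names are assumptions, not taken from the article:

```shell
# Initialize two disks as physical volumes (device names are illustrative)
pvcreate /dev/sdb /dev/sdc

# Combine them into a single volume group named "vg0"
vgcreate vg0 /dev/sdb /dev/sdc

# Carve out a 100 GiB logical volume named "data"
lvcreate --size 100G --name data vg0

# Later, grow the LV by 20 GiB and resize the filesystem in one step,
# without unmounting it
lvextend --size +20G --resizefs /dev/vg0/data
```

These commands require root privileges and dedicated block devices, so they are shown as a sketch rather than something to run verbatim.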

{{Anchor|WRITE-MOSTLY}}Features

= Basic functionality =

  • Volume groups (VGs) can be resized online by absorbing new physical volumes (PVs) or ejecting existing ones.
  • Logical volumes (LVs) can be resized online by concatenating extents onto them or truncating extents from them.
  • LVs can be moved between PVs.
  • Creation of read-only snapshots of logical volumes (LVM1), leveraging a copy on write (CoW) feature,{{Cite web|url=https://blog.pythian.com/btrfs-performance-compared-lvmext4-regards-database-workloads/|title=BTRFS performance compared to LVM+EXT4 with regards to database workloads|date=29 May 2018}} or read/write snapshots (LVM2)
  • VGs can be split or merged in situ as long as no LVs span the split. This can be useful when migrating whole LVs to or from offline storage.
  • LVM objects can be tagged for administrative convenience.{{cite web |url = https://www.suse.com/documentation/sles11/stor_admin/data/lvmtagging.html| title = Tagging LVM2 Storage Objects| publisher=Micro Focus International |access-date=21 May 2015}}
  • VGs and LVs can be made active as the underlying devices become available through use of the lvmetad daemon.{{cite web |url = https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Logical_Volume_Manager_Administration/metadatadaemon.html| title = The Metadata Daemon| publisher=Red Hat Inc |access-date=22 May 2015}}
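The snapshot feature listed above is commonly used for consistent backups. A minimal sketch, assuming an existing LV vg0/data and illustrative mount and backup paths:

```shell
# Create a read/write snapshot (LVM2) with 10 GiB of copy-on-write space;
# the snapshot only consumes space as the origin diverges from it
lvcreate --snapshot --size 10G --name data_snap /dev/vg0/data

# Take a consistent backup from the frozen snapshot view, then discard it
mount -o ro /dev/vg0/data_snap /mnt/snap
tar -C /mnt/snap -czf /backup/data.tar.gz .
umount /mnt/snap
lvremove -y /dev/vg0/data_snap
```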

= Advanced functionality =

  • Hybrid volumes can be created using the dm-cache target, which allows one or more fast storage devices, such as flash-based SSDs, to act as a cache for one or more slower hard disk drives.{{cite web|url=https://rwmj.wordpress.com/2014/05/22/using-lvms-new-cache-feature/ |title=Using LVM's new cache feature |date=22 May 2014 |access-date=2014-07-11}}
  • Thinly provisioned LVs can be allocated from a pool.{{cite web|url=https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Logical_Volume_Manager_Administration/thinprovisioned_volumes.html |title=2.3.5. Thinly-Provisioned Logical Volumes (Thin Volumes) |publisher=Access.redhat.com |access-date=2014-06-20}}
  • On newer versions of device mapper, LVM is integrated with the rest of device mapper enough to ignore the individual paths that back a dm-multipath device if devices/multipath_component_detection=1 is set in lvm.conf. This prevents LVM from activating volumes on an individual path instead of the multipath device.{{cite web|url=https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/5.8_Technical_Notes/lvm2.html |title=4.101.3. RHBA-2012:0161 — lvm2 bug fix and enhancement update |access-date=2014-06-08}}
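The thin-provisioning and dm-cache features above can be sketched as follows; sizes, names, and the SSD device /dev/sdd are illustrative assumptions:

```shell
# Create a 100 GiB thin pool in vg0, then a 1 TiB thin LV that draws
# physical space from the pool only as data is actually written
lvcreate --type thin-pool --size 100G --name pool0 vg0
lvcreate --thin --virtualsize 1T --name thinvol vg0/pool0

# Create a fast LV on an SSD-backed PV and attach it as a dm-cache
# for the slower LV vg0/data (lvconvert will prompt to convert the
# fast LV into a cache pool)
lvcreate --size 50G --name fastcache vg0 /dev/sdd
lvconvert --type cache --cachepool vg0/fastcache vg0/data
```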

= RAID =

  • LVs can be created to include RAID functionality, including RAID 1, 5 and 6.{{cite web|url=https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Logical_Volume_Manager_Administration/LV.html#raid_volumes |title=5.4.16. RAID Logical Volumes |publisher=Access.redhat.com |access-date=2017-02-07}}
  • Entire LVs or their parts can be striped across multiple PVs, similarly to RAID 0.
  • A RAID 1 backend device (a PV) can be configured as "write-mostly", resulting in reads being avoided to such devices unless necessary.{{cite web |url = https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Logical_Volume_Manager_Administration/raid1-writebehind.html| title = Controlling I/O Operations on a RAID1 Logical Volume| publisher=redhat.com |access-date=16 June 2014}}
  • Recovery rate can be limited using lvchange --raidmaxrecoveryrate and lvchange --raidminrecoveryrate to maintain acceptable I/O performance while rebuilding an LV that includes RAID functionality.
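The RAID features above, including the "write-mostly" flag and recovery-rate throttling, can be sketched as follows; the device and volume names are illustrative:

```shell
# Create a two-way RAID 1 LV (one mirror in addition to the original leg)
lvcreate --type raid1 --mirrors 1 --size 50G --name mirrored vg0

# Mark the leg on /dev/sdb as write-mostly so reads prefer the other leg
lvchange --writemostly /dev/sdb vg0/mirrored

# Throttle resynchronization to preserve foreground I/O performance
lvchange --raidmaxrecoveryrate 50M vg0/mirrored
lvchange --raidminrecoveryrate 1M vg0/mirrored
```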

= High availability =

The LVM also works in a shared-storage cluster in which disks holding the PVs are shared between multiple host computers, but can require an additional daemon to mediate metadata access via a form of locking.

; CLVM

: A distributed lock manager is used to broker concurrent LVM metadata accesses. Whenever a cluster node needs to modify the LVM metadata, it must secure permission from its local clvmd, which is in constant contact with other clvmd daemons in the cluster and can communicate a desire to get a lock on a particular set of objects.

; HA-LVM

: Cluster-awareness is left to the application providing the high availability function. For the LVM's part, HA-LVM can use CLVM as a locking mechanism, or can continue to use the default file locking and reduce "collisions" by restricting access to only those LVM objects that have appropriate tags. Since this simpler solution avoids contention rather than mitigating it, no concurrent accesses are allowed, so HA-LVM is considered useful only in active-passive configurations.

; lvmlockd

: {{As of|2017}}, a stable LVM component that is designed to replace clvmd by making the locking of LVM objects transparent to the rest of LVM, without relying on a distributed lock manager.{{cite web|url=https://www.spinics.net/lists/lvm/msg21756.html|title=Re: LVM snapshot with Clustered VG [SOLVED] |date=15 Mar 2013 |access-date=2015-06-08}} It was under heavy development during 2016.{{cite web | url = https://sourceware.org/git/?p=lvm2.git;a=history;f=lib/locking/lvmlockd.c;h=master;hb=HEAD | title = "vmlockd.c git history" | archive-url = https://archive.today/20240104093232/https://sourceware.org/git/?p=lvm2.git;a=history;f=lib/locking/lvmlockd.c;h=master;hb=HEAD | archive-date = January 4, 2024 | url-status = live }}

The mechanisms described above only resolve concurrent access to the LVM metadata. The file system selected to sit on top of such LVs must either support clustering by itself (such as GFS2 or VxFS), or it must be mounted by only a single cluster node at any time (such as in an active-passive configuration).

= Volume group allocation policy =

Each LVM VG has a default allocation policy for new volumes created from it. This can later be changed for each LV using the lvconvert -A command, or on the VG itself via vgchange --alloc. To minimize fragmentation, LVM will attempt the strictest policy (contiguous) first and then progress toward the most liberal policy defined for the LVM object until allocation finally succeeds.

In RAID configurations, almost all policies are applied to each leg in isolation. For example, even if an LV has a policy of cling, expanding the file system will not cause LVM to use a PV that is already occupied by one of the other legs of the RAID setup. LVs with RAID functionality place each leg on a different PV, making each leg's PVs unavailable to the other legs; if that were the only space available, expansion of the LV would fail. In this sense, the logic behind cling applies only to expanding each individual leg of the array.

Available allocation policies are:

  • Contiguous – forces all LEs in a given LV to be adjacent and ordered. This eliminates fragmentation but severely reduces an LV's expandability.
  • Cling – forces new LEs to be allocated only on PVs already used by an LV. This can help mitigate fragmentation as well as reduce vulnerability of particular LVs should a device go down, by reducing the likelihood that other LVs also have extents on that PV.
  • Normal – implies near-indiscriminate selection of PEs, but it will attempt to keep parallel legs (such as those of a RAID setup) from sharing a physical device.
  • Anywhere – imposes no restrictions whatsoever. Highly risky in a RAID setup as it ignores isolation requirements, undercutting most of the benefits of RAID. For linear volumes, it can result in increased fragmentation.
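The policies above can be set at the VG or LV level; a brief sketch, with the VG and LV names as illustrative assumptions:

```shell
# Set the default allocation policy for the whole volume group
vgchange --alloc cling vg0

# Override the policy for a single LV at creation time
lvcreate --alloc contiguous --size 10G --name dbvol vg0
```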

Implementation


Typically, the first megabyte of each physical volume contains a mostly ASCII-encoded structure referred to as an "LVM header" or "LVM head". Originally, the LVM head was written in both the first and last megabyte of each PV for redundancy (in case of a partial hardware failure); however, this was later changed to only the first megabyte. Each PV's header is a complete copy of the entire volume group's layout, including the UUIDs of all other PVs and LVs, and the allocation map of PEs to LEs. This simplifies data recovery if a PV is lost.

In the 2.6 series of the Linux kernel, the LVM is implemented in terms of the device mapper, a simple block-level scheme for creating virtual block devices and mapping their contents onto other block devices. This minimizes the amount of relatively hard-to-debug kernel code needed to implement the LVM. It also allows its I/O redirection services to be shared with other volume managers (such as EVMS). Any LVM-specific code is pushed out into its user-space tools, which merely manipulate these mappings and reconstruct their state from on-disk metadata upon each invocation.

To bring a volume group online, the "vgchange" tool:

  1. Searches for PVs in all available block devices.
  2. Parses the metadata header in each PV found.
  3. Computes the layouts of all visible volume groups.
  4. Loops over each logical volume in the volume group to be brought online and:
     1. Checks if the logical volume to be brought online has all its PVs visible.
     2. Creates a new, empty device mapping.
     3. Maps it (with the "linear" target) onto the data areas of the PVs the logical volume belongs to.
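The activation sequence above is what runs behind a plain vgchange invocation; the device-mapper tables it creates can be inspected directly. The VG name is an illustrative assumption:

```shell
# Activate every LV in vg0, triggering the scan/parse/map sequence above
vgchange --activate y vg0

# Inspect the resulting device-mapper tables; for simple LVs these are
# "linear" targets mapping LV extents onto PV data areas
dmsetup table
```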

To move an online logical volume between PVs in the same volume group, the "pvmove" tool:

  1. Creates a new, empty device mapping for the destination.
  2. Applies the "mirror" target to the original and destination maps. The kernel will start the mirror in "degraded" mode and begin copying data from the original to the destination to bring it into sync.
  3. Replaces the original mapping with the destination when the mirror comes into sync, then destroys the original.

These device mapper operations take place transparently, without applications or file systems being aware that their underlying storage is moving.
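Typical pvmove invocations for the mirror-based migration described above; device and volume names are illustrative:

```shell
# Migrate all extents off /dev/sdb onto free space elsewhere in the VG
# (e.g. before removing the disk); LVs stay online throughout
pvmove /dev/sdb

# Or move only the extents belonging to one LV to a specific destination PV
pvmove --name vg0/data /dev/sdb /dev/sdc
```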

= Caveats =

  • Until Linux kernel 2.6.31,{{cite web|url = https://bugzilla.kernel.org/show_bug.cgi?id=9554|title = Bug 9554 – write barriers over device mapper are not supported|date = 2009-07-01|access-date = 2010-01-24}} write barriers were not supported (fully supported in 2.6.33). This means that the guarantee against filesystem corruption offered by journaled file systems like ext3 and XFS was negated under some circumstances.{{cite web|url = https://lwn.net/Articles/283161|title = Barriers and journaling filesystems|date = 2008-05-22|access-date = 2008-05-28|publisher = LWN}}
  • {{As of|2015}}, no online or offline defragmentation program exists for LVM. This is somewhat mitigated by fragmentation only happening if a volume is expanded and by applying the above-mentioned allocation policies. Fragmentation still occurs, however, and if it is to be reduced, non-contiguous extents must be identified and manually rearranged using the pvmove command.{{cite web|url = https://www.redhat.com/archives/linux-lvm/2010-April/msg00103.html|title = will pvmove'ing (an LV at a time) defragment?|date = 2010-04-29|access-date = 2015-05-22}}
  • On most LVM setups, only one copy of the LVM head is saved to each PV, which can make the volumes more susceptible to failed disk sectors. This behavior can be overridden using vgconvert --pvmetadatacopies. If LVM cannot read a proper header using the first copy, it will check the end of the volume for a backup header. Most Linux distributions keep a running backup in /etc/lvm/backup, which enables manual rewriting of a corrupted LVM head using the vgcfgrestore command.
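The metadata-redundancy and recovery measures mentioned in the last caveat can be sketched as follows; the device and VG names are illustrative:

```shell
# Keep two copies of the metadata area when initializing a new PV
pvcreate --pvmetadatacopies 2 /dev/sdd

# Restore a damaged LVM head from the automatic backup kept by most
# distributions under /etc/lvm/backup
vgcfgrestore --file /etc/lvm/backup/vg0 vg0
```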

See also

{{Portal|Linux}}

{{Div col|colwidth=25em}}

  • Btrfs (has its own "snapshots" that are different, but using LVM snapshots of btrfs leads to loss of both copies){{cite web |url=https://btrfs.wiki.kernel.org/index.php/Gotchas#Block-level_copies_of_devices |title=Gotchas |publisher=btrfs Wiki |access-date=2017-04-24 | archive-url = https://archive.today/20240104093717/https://archive.kernel.org/oldwiki/btrfs.wiki.kernel.org/index.php/Gotchas.html%23Block-level_copies_of_devices | archive-date = January 4, 2024 | url-status = live}}
  • Device mapper
  • Logical Disk Manager (LDM)
  • Logical volume management
  • Snapshot (computer storage)
  • Storage virtualization
  • ZFS

{{div col end}}

References

{{Reflist|30em}}

Further reading

{{Refbegin|35em}}

  • {{cite web|title = LVM HOWTO|first = AJ|last = Lewis|publisher = Linux Documentation Project|date = 2006-11-27|access-date = 2008-03-04|url = https://tldp.org/HOWTO/LVM-HOWTO}}.
  • {{US patent reference|number = 5129088|y = 1992|m = 7|d = 7|inventor = Auslander, et al.|title = Data Processing Method to Create Virtual Disks from Non-Contiguous Groups of Logically Contiguous Addressable Blocks of Direct Access Storage Device}} (fundamental patent).
  • {{cite web |url=http://www.techmagazinez.com/2013/08/redhat-linux-logical-volume-manager.html |title=RedHat Linux: What is Logical Volume Manager or LVM? |work=techmagazinez.com |date=6 August 2013 |access-date=4 September 2013 |archive-url=https://web.archive.org/web/20130810011929/http://www.techmagazinez.com/2013/08/redhat-linux-logical-volume-manager.html |archive-date=10 August 2013 |url-status=dead }}
  • {{cite web |url=https://sourceware.org/lvm2/ |title=LVM2 Resource Page |work=sourceware.org |date=8 June 2012 |access-date=4 September 2013}}
  • {{cite web |url=https://www.debuntu.org/how-to-install-ubuntu-on-lvm-partitions/ |title=How-To: Install Ubuntu on LVM partitions |work=Debuntu.org |date=28 July 2007 |access-date=4 September 2013}}
  • {{cite web |url=https://www.markus-gattol.name/ws/lvm.html |title=Logical Volume Manager |work=markus-gattol.name |date=13 July 2013}}

{{Refend}}

{{Linux}}

{{Linux kernel}}

Category:Volume manager

Category:Linux file system-related software

Category:Linux kernel features

Category:Red Hat software
