NVM Express

{{short description|Interface used for connecting storage devices}}

{{Infobox technology standard

| title = NVM Express

| long_name = Non-Volatile Memory Host Controller Interface Specification

| abbreviation = NVMe

| image = NVM Express logo.svg{{!}}class=skin-invert

| status = Published

| year_started = {{start date and age|2011}}

| website = {{URL|https://nvmexpress.org}}

| first_published =

| organization = NVM Express, Inc. (since 2014)
NVM Express Work Group (before 2014)

| version = 2.1

| version_date = {{start date and age|2024|8|5}}

}}

NVM Express (NVMe) or Non-Volatile Memory Host Controller Interface Specification (NVMHCIS) is an open, logical-device interface specification for accessing a computer's non-volatile storage media, usually attached via the PCI Express bus. NVM stands for non-volatile memory, which is often NAND flash memory; NVMe devices come in several physical form factors, including solid-state drives (SSDs), PCIe add-in cards, and M.2 cards, the successor to mSATA cards. As a logical-device interface, NVM Express has been designed to capitalize on the low latency and internal parallelism of solid-state storage devices.{{cite web |url=https://nvmexpress.org/ |title=NVM Express |publisher=NVM Express, Inc. |access-date=2017-01-24 |quote=NVMe is designed from the ground up to deliver high bandwidth and low latency storage access for current and future NVM technologies. |archive-date=2019-12-05 |archive-url=https://web.archive.org/web/20191205093351/https://nvmexpress.org/ |url-status=live }}

Architecturally, the logic for NVMe is physically stored within and executed by the NVMe controller chip that is physically co-located with the storage media, usually an SSD. Version changes for NVMe, e.g., 1.3 to 1.4, are incorporated within the storage media, and do not affect PCIe-compatible components such as motherboards and CPUs.{{cite web |url=https://www.anandtech.com/show/14543/nvme-14-specification-published |title=NVMe 1.4 Specification Published: Further Optimizing Performance and Reliability |website=AnandTech |first1=Billy |last1=Tallis |date=June 14, 2019 |archive-url=https://web.archive.org/web/20210127014339/https://www.anandtech.com/show/14543/nvme-14-specification-published |archive-date=2021-01-27}}

By its design, NVM Express allows host hardware and software to fully exploit the levels of parallelism possible in modern SSDs. As a result, NVM Express reduces I/O overhead and brings various performance improvements relative to previous logical-device interfaces, including support for multiple deep command queues and reduced latency. Previous interface protocols such as AHCI were developed for far slower hard disk drives (HDDs), where a very lengthy delay (relative to CPU operations) exists between a request and data transfer, where data speeds are much slower than RAM speeds, and where disk rotation and seek time give rise to further optimization requirements.
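The queueing difference can be illustrated with a toy model: NVMe allows up to 65,536 I/O queues of up to 65,536 commands each (versus AHCI's single queue of 32 commands), so each CPU core can be given its own submission/completion queue pair. The Python sketch below models only this queueing behaviour; the class and field names are illustrative, not data structures from the specification:

```python
from collections import deque

class QueuePair:
    """Toy model of one NVMe submission/completion queue pair.

    Real NVMe queues are ring buffers in host memory rung via doorbell
    registers; this sketch models only the queueing behaviour.
    """
    def __init__(self, qid, depth=65536):
        self.qid = qid
        self.depth = depth      # NVMe allows up to 65,536 entries per queue
        self.sq = deque()       # submission queue (host -> controller)
        self.cq = deque()       # completion queue (controller -> host)

    def submit(self, command):
        if len(self.sq) >= self.depth:
            raise RuntimeError("submission queue full")
        self.sq.append(command)

    def process_one(self):
        """Controller side: consume one command, post one completion."""
        cmd = self.sq.popleft()
        completion = (self.qid, cmd, "SUCCESS")
        self.cq.append(completion)
        return completion

# One queue pair per CPU core, so cores can submit I/O without contending
# for a single shared queue (the AHCI model offers one queue of 32 commands).
pairs = [QueuePair(qid) for qid in range(4)]
for core, pair in enumerate(pairs):
    pair.submit({"opcode": "READ", "lba": core * 8, "nlb": 8})

completions = [pair.process_one() for pair in pairs]
print(len(completions))  # 4: one completion per per-core queue
```

Because each core owns its own queue pair, no cross-core locking is needed on the submission path, which is one of the main sources of NVMe's reduced I/O overhead.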

NVM Express devices are chiefly available in the form of standard-sized PCI Express expansion cards{{cite web |url=http://www.tomshardware.com/reviews/intel-ssd-dc-p3700-nvme,3858-3.html |title=Intel SSD DC P3700 800GB and 1.6TB Review: The Future of Storage |date=2014-08-13 |access-date=2014-11-21 |author=Drew Riley |website=Tom's Hardware }} and as 2.5-inch form-factor devices that provide a four-lane PCI Express interface through the U.2 connector (formerly known as SFF-8639).{{cite web |url=http://www.intel.com/content/dam/www/public/us/en/documents/product-specifications/ssd-dc-p3600-spec.pdf |title=Intel Solid-State Drive DC P3600 Series |date=2015 |access-date=2015-04-11 |publisher=Intel |pages=18, 20–22 |url-status=dead |archive-url=https://web.archive.org/web/20151028163623/http://www.intel.com/content/dam/www/public/us/en/documents/product-specifications/ssd-dc-p3600-spec.pdf |archive-date= Oct 28, 2015 }}{{cite web |url=http://www.tomshardware.com/news/sff-8639-u.2-pcie-ssd-nvme,29321.html |title=SFFWG Renames PCIe SSD SFF-8639 Connector To U.2 |date=2015-06-05 |access-date=2015-06-09 |author=Paul Alcorn |publisher=Tom's Hardware}} Storage devices using U.2 and the M.2 specification which support NVM Express as the logical-device interface are a popular use-case for NVMe and have become a dominant form of solid-state storage for servers, desktops, and laptops alike.

Specifications

Specifications for NVMe released to date include:[https://nvmexpress.org/specifications/ NVMe Specifications]

  • 1.0e (January 2013)
  • 1.1b (July 2014), which adds standardized Command Sets to improve compatibility across different NVMe devices, a Management Interface that provides standardized tools for administering NVMe devices, and Transport Specifications that define how NVMe commands are carried over various physical interfaces, improving interoperability.{{cite web | url=https://nvmexpress.org/specifications/ | title=Specifications - NVM Express | date=10 January 2020 | access-date=10 July 2024 | archive-date=25 July 2024 | archive-url=https://web.archive.org/web/20240725130426/https://nvmexpress.org/specifications/ | url-status=live }}
  • 1.2 (November 2014)
  • 1.2a (October 2015)
  • 1.2b (June 2016)
  • 1.2.1 (June 2016), which introduces the following new features over version 1.1b: Multi-Queue support, allowing multiple I/O queues to enhance data throughput and performance; Namespace Management, allowing dynamic creation, deletion, and resizing of namespaces for greater flexibility; and Endurance Management, which monitors and manages SSD wear levels, optimizing performance and extending drive life.{{cite web | url=https://nvmexpress.org/nvm-express-releases-1-2-specification/ | title=NVM Express Releases 1.2 Specification - NVM Express | date=12 November 2014 | access-date=23 November 2024 | archive-date=25 July 2024 | archive-url=https://web.archive.org/web/20240725122944/https://nvmexpress.org/nvm-express-releases-1-2-specification/ | url-status=live }}
  • 1.3 (May 2017)
  • 1.3a (October 2017)
  • 1.3b (May 2018)
  • 1.3c (May 2018)
  • 1.3d (March 2019), which since version 1.2.1 adds Namespace Sharing, allowing multiple hosts to access a single namespace and facilitating shared storage environments; Namespace Reservation, providing mechanisms for hosts to reserve namespaces, preventing conflicts and ensuring data integrity; and Namespace Priority, which sets priority levels for different namespaces, optimizing performance for critical workloads.{{cite web | url=https://nvmexpress.org/webcast-nvme-1-3-learn-whats-new/ | title=WEBCAST: NVME 1.3 – LEARN WHat's NEW - NVM Express | date=30 June 2017 | access-date=23 November 2024 | archive-date=13 April 2024 | archive-url=https://web.archive.org/web/20240413160238/https://nvmexpress.org/webcast-nvme-1-3-learn-whats-new/ | url-status=live }}{{cite web | url=https://nvmexpress.org/changes-in-nvme-revision-1-3/ | title=Changes in NVMe Revision 1.3 - NVM Express | date=May 2017 }}
  • 1.4 (June 2019)
  • 1.4a (March 2020)
  • 1.4b (September 2020)
  • 1.4c (June 2021), which has the following new features compared to 1.3d: IO Determinism, ensuring consistent latency and performance by isolating workloads; Namespace Write Protect, preventing data corruption and unauthorized modification; Persistent Event Log, which stores event logs in non-volatile memory to aid diagnostics and troubleshooting; and the Verify command, which checks the integrity of stored data.{{cite web | url=https://nvmexpress.org/answering-your-questions-nvme-1-4-features-and-compliance-everything-you-need-to-know/ | title=Answering Your Questions: NVMe™ 1.4 Features and Compliance: Everything You Need to Know - NVM Express | date=16 October 2019 | access-date=23 November 2024 | archive-date=14 July 2024 | archive-url=https://web.archive.org/web/20240714111035/https://nvmexpress.org/answering-your-questions-nvme-1-4-features-and-compliance-everything-you-need-to-know/ | url-status=live }}{{cite web | url=https://nvmexpress.org/resource/nvme-1-4-features-and-compliance-everything-you-need-to-know/ | title=NVMe 1.4 Features and Compliance: Everything You Need to Know - NVM Express | date=2 October 2019 }}
  • 2.0 (May 2021){{cite press release |title=NVM Express Announces the Rearchitected NVMe 2.0 Library of Specifications |url=https://nvmexpress.org/nvm-express-announces-the-rearchitected-nvme-2-0-library-of-specifications/ |location=Beaverton, Oregon, USA |publisher=NVM Express, Inc. |date=June 3, 2021 |access-date=2024-03-31 |archive-date=2023-01-18 |archive-url=https://web.archive.org/web/20230118185928/https://nvmexpress.org/nvm-express-announces-the-rearchitected-nvme-2-0-library-of-specifications/ |url-status=live }}
  • 2.0a (July 2021)
  • 2.0b (January 2022)
  • 2.0c (October 2022)
  • 2.0d (January 2024),{{cite web |url=https://nvmexpress.org/wp-content/uploads/NVM-Express-Base-Specification-2.0d-2024.01.11-Ratified.pdf |title=NVM Express Base Specification 2.0d |date=January 11, 2024 |website=nvmexpress.org |publisher=NVM Express, Inc. |access-date=2024-03-26 |archive-date=2024-03-26 |archive-url=https://web.archive.org/web/20240326183355/https://nvmexpress.org/wp-content/uploads/NVM-Express-Base-Specification-2.0d-2024.01.11-Ratified.pdf |url-status=live }} which, compared to 1.4c, introduces Zoned Namespaces (ZNS), which organize data into zones for efficient write operations, reducing write amplification and improving SSD longevity; Key Value (KV), for efficient storage and retrieval of key-value pairs directly on the NVMe device, bypassing traditional file systems; and Endurance Group Management, which manages groups of SSDs based on their endurance, optimizing usage and extending lifespan.{{cite web | url=https://nvmexpress.org/everything-you-need-to-know-about-the-nvme-2-0-specifications-and-new-technical-proposals/ | title=Everything You Need to Know About the NVMe 2.0 Specifications and New Technical Proposals - NVM Express | date=3 June 2021 }}{{cite web | url=https://archive.nvmexpress.org/everything-you-need-to-know-about-the-nvme-2-0-specifications-and-new-technical-proposals/ | title=Everything You Need to Know About the NVMe® 2.0 Specifications and New Technical Proposals }}
  • 2.1 (August 2024),{{cite web |url=https://nvmexpress.org/wp-content/uploads/NVM-Express-Base-Specification-Revision-2.1-2024.08.05-Ratified.pdf |title=NVM Express® Base Specification, Revision 2.1 |date=August 5, 2024 |website=nvmexpress.org |publisher=NVM Express, Inc. |access-date=2024-08-10}} which introduces Live Migration, for maintaining service availability during migration; Key Per I/O, for applying encryption keys at a per-operation level; NVMe-MI High Availability Out of Band Management, for managing NVMe devices outside of regular data paths; and NVMe Network Boot / UEFI, for booting from NVMe devices over a network.{{cite web|url=https://nvmexpress.org/everything-you-need-to-know-an-essential-overview-of-nvm-express-2-1-base-specification-and-new-key-features/|title=Everything You Need to Know: An Essential Overview of NVM Express® 2.1 Base Specification and New Key Features|date=6 August 2024|access-date=23 November 2024|archive-date=13 September 2024|archive-url=https://web.archive.org/web/20240913081658/https://nvmexpress.org/everything-you-need-to-know-an-essential-overview-of-nvm-express-2-1-base-specification-and-new-key-features/|url-status=live}}
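The Zoned Namespaces feature added in the NVMe 2.0 family can be pictured as follows: a zone accepts only sequential writes at its write pointer and is reclaimed by resetting the whole zone, which is what lets the SSD avoid in-place updates and the write amplification they cause. This Python sketch is an illustrative model of that write rule, not the specification's zone state machine:

```python
class Zone:
    """Toy model of one ZNS zone (illustrative names, not spec structures)."""
    def __init__(self, start_lba, size):
        self.start = start_lba
        self.size = size
        self.write_pointer = start_lba   # next LBA the zone will accept

    def write(self, lba, num_blocks):
        # ZNS only accepts writes that land exactly at the write pointer.
        if lba != self.write_pointer:
            raise ValueError("zone must be written sequentially at the write pointer")
        if self.write_pointer + num_blocks > self.start + self.size:
            raise ValueError("write exceeds zone capacity")
        self.write_pointer += num_blocks

    def reset(self):
        """Resetting the whole zone is how space becomes reusable."""
        self.write_pointer = self.start

zone = Zone(start_lba=0, size=1024)
zone.write(0, 128)
zone.write(128, 128)      # must continue exactly at the write pointer
try:
    zone.write(0, 8)      # rewriting the start of a partly-filled zone fails
except ValueError as err:
    print("rejected:", err)
```

Because the host must write sequentially, the drive never has to relocate live data just to overwrite a block in place; it only garbage-collects at whole-zone granularity.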

Background

{{Multiple image

| direction = vertical

| width = 300

| image1 = Intel SSD 750 series, 400 GB add-in card model, top view.jpg

| image2 = Intel SSD 750 series, 400 GB add-in card model, bottom view.jpg

| footer = Intel SSD 750 series, an SSD that uses NVM Express, in form of a PCI Express 3.0 ×4 expansion card (front and rear views)

}}

Historically, most SSDs used buses such as SATA,[https://www.anandtech.com/show/8104/intel-ssd-dc-p3700-review-the-pcie-ssd-transition-begins-with-nvme] SAS,[https://www.tweaktown.com/news/25459/fms_2012_hgst_unveils_worlds_first_12gb_s_sas_enterprise_ssd/index.html][https://www.storagereview.com/review/stec-s840-enterprise-ssd-review] or Fibre Channel for interfacing with the rest of a computer system. Since SSDs became available in mass markets, SATA has become the most typical way of connecting SSDs in personal computers; however, SATA was designed primarily for interfacing with mechanical hard disk drives (HDDs), and it became increasingly inadequate for SSDs, which improved in speed over time.{{cite web |last=Walker |first=Don H. |title=A Comparison of NVMe and AHCI |url=https://www.sata-io.org/sites/default/files/documents/NVMe%20and%20AHCI_%20_long_.pdf |date=31 July 2012 |publisher=SATA-IO |access-date=3 July 2013 |archive-date=12 February 2019 |archive-url=https://web.archive.org/web/20190212011912/https://sata-io.org/sites/default/files/documents/NVMe%20and%20AHCI_%20_long_.pdf |url-status=dead}} Within about five years of mainstream mass-market adoption (2005–2010), many SSDs were already held back by the comparatively slow data rates designed for hard drives: unlike hard disk drives, some SSDs are limited by the maximum throughput of SATA.

High-end SSDs had been made using the PCI Express bus before NVMe, but with non-standard interfaces, either through a SAS-to-PCIe bridge[https://www.anandtech.com/show/8104/intel-ssd-dc-p3700-review-the-pcie-ssd-transition-begins-with-nvme] or by emulating a hardware RAID controller.{{cite web | url=https://www.tweaktown.com/reviews/5921/asus-rog-raidr-express-240gb-pcie-ssd-review/index.html | title=ASUS ROG RAIDR Express 240GB PCIe SSD Review | date=6 December 2013 }} By standardizing the interface of SSDs, operating systems need only one common device driver to work with all SSDs adhering to the specification, and SSD manufacturers do not have to design their own interface drivers. This is similar to how USB mass-storage devices are built to follow the USB mass-storage device class specification and work with all computers, with no per-device drivers needed.{{cite web |url=https://nvmexpress.org/wp-content/uploads/2013/04/NVM_whitepaper.pdf |title=NVM Express Explained |date=9 April 2014 |access-date=21 March 2015 |website=nvmexpress.org |archive-date=24 August 2015 |archive-url=https://web.archive.org/web/20150824040145/http://www.nvmexpress.org/wp-content/uploads/2013/04/NVM_whitepaper.pdf |url-status=live }}

NVM Express devices are also used as the building block of burst buffer storage in many leading supercomputers, such as the Fugaku, Summit, and Sierra systems.{{cite web |title=Using LC's Sierra Systems |url=https://hpc.llnl.gov/training/tutorials/using-lcs-sierra-system |access-date=2020-06-25 |website=hpc.llnl.gov}}{{cite web |title=SummitDev User Guide |url=https://docs.olcf.ornl.gov/systems/summitdev_user_guide.html |access-date=2020-06-25 |website=olcf.ornl.gov |archive-date=2020-08-06 |archive-url=https://web.archive.org/web/20200806000635/https://docs.olcf.ornl.gov/systems/summitdev_user_guide.html |url-status=dead}}

History

The first details of a new standard for accessing non-volatile memory emerged at the Intel Developer Forum 2007, when NVMHCI was shown as the host-side protocol of a proposed architectural design that had the Open NAND Flash Interface Working Group (ONFI) on the memory (flash) chips side.{{cite web |url=http://www.theinquirer.net/inquirer/news/1018710/speeding-flash-flash |archive-url=https://web.archive.org/web/20090918093831/http://www.theinquirer.net/inquirer/news/1018710/speeding-flash-flash |url-status=dead |archive-date=September 18, 2009 |title=Speeding up Flash... in a flash |publisher=The Inquirer |date=2007-10-13 |access-date=2014-01-11}} An NVMHCI working group led by Intel was formed that year. The NVMHCI 1.0 specification was completed in April 2008 and released on Intel's web site.{{cite web|url=http://www.bswd.com/FMS09/FMS09-T2A-Huffman.pdf|title=Extending the NVMHCI Standard to Enterprise|date=August 2009|archive-url=https://web.archive.org/web/20170617032922/http://www.bswd.com/FMS09/FMS09-T2A-Huffman.pdf|archive-date=2017-06-17|publisher=Flash Memory Summit|location=Santa Clara, CA USA}}{{cite web |url=http://www.theinquirer.net/inquirer/news/1018442/nvram-standard-tips |archive-url=https://web.archive.org/web/20140111131722/http://www.theinquirer.net/inquirer/news/1018442/nvram-standard-tips |url-status=dead |archive-date=January 11, 2014 |title=Flash new standard tips up |publisher=The Inquirer |date=2008-04-16 |access-date=2014-01-11}}{{cite web|url=http://www.flashmemorysummit.com/English/Collaterals/Proceedings/2008/20080813_T2A_Huffman.pdf|title=NVMHCI: The Optimized Interface for Caches and SSDs|date=August 2008|author=Amber Huffman|publisher=Flash Memory Summit|location=Santa Clara, CA USA|access-date=2014-01-11|archive-date=2016-03-04|archive-url=https://web.archive.org/web/20160304025032/http://www.flashmemorysummit.com/English/Collaterals/Proceedings/2008/20080813_T2A_Huffman.pdf|url-status=live}}

Technical work on NVMe began in the second half of 2009.{{cite web|url=https://files.futurememorystorage.com/proceedings/2013/20130813_A12_Onufryk.pdf|title=What's New in NVMe 1.1 and Future Directions|author=Peter Onufryk|date=2013|publisher=Flash Memory Summit|location=Santa Clara, CA USA }} The NVMe specifications were developed by the NVM Express Workgroup, which consists of more than 90 companies; Amber Huffman of Intel was the working group's chair. Version 1.0 of the specification was released on 1 March 2011,{{Cite news |title=New Promoter Group Formed to Advance NVM Express |work=Press release |date=June 1, 2011 |url=https://nvmexpress.org/wp-content/uploads/2013/04/NVMe_Press_Release_New-Promoter-Group_20110601.pdf |access-date=September 18, 2013 |archive-date=December 30, 2013 |archive-url=https://web.archive.org/web/20131230233427/http://www.nvmexpress.org/wp-content/uploads/2013/04/NVMe_Press_Release_New-Promoter-Group_20110601.pdf |url-status=live }} and in June 2011 a Promoter Group led by seven companies was formed. Version 1.1 of the specification was released on 11 October 2012.{{cite web |title=NVM Express Revision 1.1 |date=October 11, 2012 |editor=Amber Huffman |work=Specification |url=https://nvmexpress.org/wp-content/uploads/2013/05/NVM_Express_1_1.pdf |access-date=September 18, 2013}} Major features added in version 1.1 are multi-path I/O (with namespace sharing) and arbitrary-length scatter-gather I/O; at the time, future revisions were expected to significantly enhance namespace management. Because of its feature focus, NVMe 1.1 was initially called "Enterprise NVMHCI".{{Cite web |url=http://snia.org/sites/default/files2/SPDEcon2013/presentations/Storage%20Plumbing/DavidDeming_PCle-based_Storage_r1.pdf |title=PCIe-based Storage |date=2013-06-08 |access-date=2014-01-12 |author=David A. Deming |website=snia.org |url-status=dead |archive-url=https://web.archive.org/web/20130920064556/http://snia.org/sites/default/files2/SPDEcon2013/presentations/Storage%20Plumbing/DavidDeming_PCle-based_Storage_r1.pdf |archive-date=2013-09-20}} An update for the base NVMe specification, called version 1.0e, was released in January 2013.{{cite web |title=NVM Express Revision 1.0e |date=January 23, 2013 |editor=Amber Huffman |work=Specification |url=https://nvmexpress.org/wp-content/uploads/2013/04/NVM_10e_specification.pdf |access-date=September 18, 2013}}

The first commercially available NVMe chipsets were released by Integrated Device Technology (89HF16P04AG3 and 89HF32P08AG3) in August 2012.{{cite web |url=http://www.theinquirer.net/inquirer/news/2200157/idt-releases-two-nvme-pciexpress-ssd-controllers |archive-url=https://web.archive.org/web/20120824032335/http://www.theinquirer.net/inquirer/news/2200157/idt-releases-two-nvme-pciexpress-ssd-controllers |url-status=dead |archive-date=August 24, 2012 |title=IDT releases two NVMe PCI-Express SSD controllers |publisher=The Inquirer |date=2012-08-21 |access-date=2014-01-11}}{{cite web |url=http://www.thessdreview.com/daily-news/latest-buzz/idt-shows-off-the-first-nvme-pcie-ssd-processor-and-reference-design-fms-2012-update/ |title=IDT Shows Off The First NVMe PCIe SSD Processor and Reference Design - FMS 2012 Update |publisher=The SSD Review |date=2012-08-24 |access-date=2014-01-11 |archive-date=2016-01-01 |archive-url=https://web.archive.org/web/20160101214653/http://www.thessdreview.com/daily-news/latest-buzz/idt-shows-off-the-first-nvme-pcie-ssd-processor-and-reference-design-fms-2012-update/ |url-status=live }} The first NVMe drive, Samsung's XS1715 enterprise drive, was announced in July 2013; according to Samsung, this drive supported 3 GB/s read speeds, six times faster than their previous enterprise offerings.{{cite web |url=http://www.storagereview.com/samsung_announces_industry_s_first_25inch_nvme_ssd |title=Samsung Announces Industry's First 2.5-inch NVMe SSD | StorageReview.com - Storage Reviews |website=StorageReview.com |date=2013-07-18 |access-date=2014-01-11 |archive-url=https://web.archive.org/web/20140110200756/http://www.storagereview.com/samsung_announces_industry_s_first_25inch_nvme_ssd |archive-date=2014-01-10 |url-status=dead}} The LSI SandForce SF3700 controller family, released in November 2013, also supports NVMe.{{cite web |url=http://www.storagereview.com/lsi_sf3700_sandforce_flash_controller_line_unveiled |title=LSI SF3700 SandForce Flash 
Controller Line Unveiled | StorageReview.com - Storage Reviews |website=StorageReview.com |date=2013-11-18 |access-date=2014-01-11 |archive-url=https://web.archive.org/web/20140111022326/http://www.storagereview.com/lsi_sf3700_sandforce_flash_controller_line_unveiled |archive-date=2014-01-11 |url-status=dead}}{{cite web |url=http://hothardware.com/News/LSI-Introduces-Blazing-Fast-SF3700-Series-SSD-Controller-Supports-Both-PCIe-and-SATA-6Gbps/ |title=LSI Introduces Blazing Fast SF3700 Series SSD Controller, Supports Both PCIe and SATA 6 Gbps |work=hothardware.com |access-date=21 March 2015 |archive-date=5 March 2016 |archive-url=https://web.archive.org/web/20160305231734/http://hothardware.com/news/lsi-introduces-blazing-fast-sf3700-series-ssd-controller-supports-both-pcie-and-sata-6gbps |url-status=dead}} A Kingston HyperX "prosumer" product using this controller was showcased at the Consumer Electronics Show 2014 and promised similar performance.{{cite web |url=http://www.tomshardware.com/news/kingston-pcie-ssd,25600.html |title=Kingston Unveils First PCIe SSD: 1800 MB/s Read Speeds |author=Jane McEntegart |work=Tom's Hardware |date=7 January 2014 |access-date=21 March 2015}}{{cite web |url=http://hothardware.com/News/Kingston-HyperX-Predator-PCI-Express-SSD-Unveiled--With-LSI-Sandforce-SF3700-Flash-Controller/ |title=Kingston HyperX Predator PCI Express SSD Unveiled With LSI SandForce SF3700 PCIe Flash Controller |work=hothardware.com |access-date=21 March 2015 |archive-date=28 May 2016 |archive-url=https://web.archive.org/web/20160528221546/http://hothardware.com/news/kingston-hyperx-predator-pci-express-ssd-unveiled--with-lsi-sandforce-sf3700-flash-controller |url-status=dead}} In June 2014, Intel announced their first NVM Express products, the Intel SSD data center family that interfaces with the host through PCI Express bus, which includes the DC P3700 series, the DC P3600 series, and the DC P3500 series.{{cite web 
|url=http://www.intel.com/content/www/us/en/solid-state-drives/intel-ssd-dc-family-for-pcie.html |title=Intel® Solid-State Drive Data Center Family for PCIe* |work=Intel |access-date=21 March 2015}} {{As of|2014|11}}, NVMe drives are commercially available.

In March 2014, the group incorporated to become NVM Express, Inc., which {{As of|2014|11|lc=yes}} consists of more than 65 companies from across the industry. NVM Express specifications are owned and maintained by NVM Express, Inc., which also promotes industry awareness of NVM Express as an industry-wide standard. NVM Express, Inc. is directed by a thirteen-member board of directors selected from the Promoter Group, which includes Cisco, Dell, EMC, HGST, Intel, Micron, Microsoft, NetApp, Oracle, PMC, Samsung, SanDisk and Seagate.{{cite web |title= NVM Express Organization History |url=https://nvmexpress.org/about/company-history/ |website=NVM Express |access-date=23 December 2015 |archive-url=https://web.archive.org/web/20151123014820/http://www.nvmexpress.org/about/company-history/ |archive-date=23 November 2015 |url-status=dead}}

In September 2016, the CompactFlash Association announced that it would be releasing a new memory card specification, CFexpress, which uses NVMe.{{citation needed|date=April 2018}}

The NVMe Host Memory Buffer (HMB) feature was added in version 1.2 of the NVMe specification.{{cite news |url=https://www.anandtech.com/show/12819/the-toshiba-rc100-ssd-review |first=Billy |last=Tallis |date=June 14, 2018 |title=The Toshiba RC100 SSD Review: Tiny Drive In A Big Market |work=AnandTech |access-date=2024-03-30}} HMB allows SSDs to utilize the host's DRAM, which can improve I/O performance for DRAM-less SSDs.{{cite journal | doi=10.1371/journal.pone.0229645 | doi-access=free | title=HMB in DRAM-less NVMe SSDS: Their usage and effects on performance | year=2020 | last1=Kim | first1=Kyusik | last2=Kim | first2=Taeseok | journal=PLOS ONE | volume=15 | issue=3 | pages=e0229645 | pmid=32119705 | bibcode=2020PLoSO..1529645K | pmc=7051071 }} For example, the SSD controller can use the HMB to cache the FTL mapping table, improving I/O performance.{{Cite journal |last1=Kim |first1=Kyusik |last2=Kim |first2=Seongmin |last3=Kim |first3=Taeseok |date=2020-06-24 |title=HMB-I/O: Fast Track for Handling Urgent I/Os in Nonvolatile Memory Express Solid-State Drives |journal=Applied Sciences |language=en |volume=10 |issue=12 |pages=4341 |doi=10.3390/app10124341 |doi-access=free |issn=2076-3417}} NVMe 2.0 added the optional Zoned Namespaces (ZNS) and Key-Value (KV) features, as well as support for rotating media such as hard disk drives. ZNS and KV allow data to be mapped directly to its physical location in flash memory, giving direct access to data on an SSD.{{cite web | url=https://www.eetimes.com/nvme-gets-refactored/ | title=NVMe Gets Refactored | date=30 June 2021 | access-date=27 February 2024 | archive-date=27 February 2024 | archive-url=https://web.archive.org/web/20240227234011/https://www.eetimes.com/nvme-gets-refactored/ | url-status=live }} ZNS and KV can also decrease write amplification of flash media.
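A rough sketch of why HMB helps a DRAM-less SSD: the controller's flash translation layer (FTL) must translate each logical block address (LBA) to a physical flash location, and without on-drive DRAM a map lookup may cost an extra flash read unless the entry is cached in host memory. The Python model below is illustrative only; the class, sizes, and eviction policy are invented for the example:

```python
class DramlessFTL:
    """Toy model of a DRAM-less SSD caching FTL map entries in host DRAM (HMB)."""
    def __init__(self, hmb_entries):
        self.hmb_cache = {}            # map entries cached in the host memory buffer
        self.hmb_entries = hmb_entries # how many entries fit in the HMB
        self.flash_map_reads = 0       # extra flash reads spent on map lookups

    def _read_map_from_flash(self, lba):
        self.flash_map_reads += 1      # a miss costs an extra flash access
        return lba * 4                 # pretend physical page address

    def translate(self, lba):
        if lba in self.hmb_cache:
            return self.hmb_cache[lba]          # hit: served from host DRAM
        ppa = self._read_map_from_flash(lba)    # miss: pay a flash read
        if len(self.hmb_cache) >= self.hmb_entries:
            self.hmb_cache.pop(next(iter(self.hmb_cache)))  # crude FIFO eviction
        self.hmb_cache[lba] = ppa
        return ppa

ftl = DramlessFTL(hmb_entries=64)
for _ in range(100):          # repeatedly access a hot working set of 8 LBAs
    for lba in range(8):
        ftl.translate(lba)
print(ftl.flash_map_reads)    # 8: only the first pass misses; the rest hit the HMB
```

With the hot map entries resident in host DRAM, repeated I/O to the same region avoids the extra flash reads entirely, which is the effect the cited measurements of HMB describe.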

Form factors

NVMe solid-state drives come in several form factors, such as AIC (add-in card), U.2, U.3, and M.2.

= AIC (add-in card) =

Almost all early NVMe solid-state drives were HHHL (half height, half length) or FHHL (full height, half length) add-in cards with a PCIe 2.0 or 3.0 interface. An HHHL NVMe solid-state drive is easy to insert into a PCIe slot of a server.

= SATA Express, U.2 and U.3 (SFF-8639) =

{{Main|SATA Express}}

SATA Express allows the use of two PCI Express 2.0 or 3.0 lanes and two SATA 3.0 (6 Gbit/s) ports through the same host-side SATA Express connector (but not both at the same time). SATA Express supports NVMe as the logical device interface for attached PCI Express storage devices. It is electrically compatible with MultiLink SAS, so a backplane can support both at the same time.

{{Main|U.2}}

U.2, formerly known as SFF-8639, uses the same physical port as SATA Express but allows up to four PCI Express lanes. Available servers can combine up to 48 U.2 NVMe solid-state drives.{{Cite web |url=https://www.supermicro.com/en/products/nvme |title=All-Flash NVME Servers for Advanced Computing Supermicro |publisher=Supermicro |language=en-US |access-date=2022-07-22}}

U.3 (SFF-TA-1001) builds on the U.2 specification and uses the same SFF-8639 connector. With U.3, a single "tri-mode" (PCIe/SATA/SAS) backplane receptacle can handle all three types of connection, with the controller automatically detecting which is in use, whereas U.2 requires separate controllers for SATA/SAS and NVMe. U.3 devices are required to be backwards-compatible with U.2 hosts, but U.2 drives are not compatible with U.3 hosts.{{cite web |last1=Siebenmann |first1=Chris |title=U.2, U.3, and other server NVMe drive connector types (in mid 2022) |url=https://utcc.utoronto.ca/~cks/space/blog/tech/ServerNVMeU2U3AndOthers2022 |access-date=2025-01-22}}{{cite web |last1=McRobert |first1=Kyle |title=What you need to know about U.3 |url=https://quarch.com/news/what-you-need-know-about-u3/ |website=Quarch Technology |access-date=2025-01-22}}

= M.2 =

{{Main|M.2}}

M.2, formerly known as the Next Generation Form Factor (NGFF), is a small card form factor that connects through the M.2 connector. Interfaces provided through the M.2 connector for NVMe solid-state drives are PCI Express 3.0 or higher (up to four lanes).

= EDSFF =

{{main|Enterprise and Data Center Standard Form Factor}}

NVMe-oF

NVM Express over Fabrics (NVMe-oF) is the concept of using a transport protocol over a network to connect remote NVMe devices, in contrast to regular NVMe, where physical NVMe devices are connected to a PCIe bus either directly or through a PCIe switch. In August 2017, a standard for using NVMe over Fibre Channel (FC) was submitted by the standards organization International Committee for Information Technology Standards (INCITS); this combination is often referred to as FC-NVMe or sometimes NVMe/FC.{{cite web |url=https://searchstorage.techtarget.com/definition/NVMe-over-FC-Nonvolatile-Memory-Express-over-Fibre-Channel |title=NVMe over Fibre Channel (NVMe over FC) or FC-NVMe standard |website=Tech Target |date=January 1, 2018 |access-date=May 26, 2021}}
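Conceptually, an NVMe-oF host serialises each NVMe command and sends it over the fabric instead of placing it in a queue in host memory. The following Python sketch illustrates that idea with an invented wire layout; it is not the actual capsule or NVMe/TCP PDU format defined by the specifications:

```python
import struct

# Toy illustration of the NVMe-oF idea: the host serialises a command and
# ships it over a network transport; the remote target decodes and executes
# it. This layout is invented for illustration and is NOT the real NVMe/TCP
# PDU or capsule format.

def encode_command(opcode, namespace_id, lba, num_blocks):
    # '<BxHIQH': opcode, pad byte, command-id placeholder, namespace id,
    # starting LBA, number of blocks (little-endian, no alignment padding)
    return struct.pack("<BxHIQH", opcode, 0, namespace_id, lba, num_blocks)

def decode_command(buf):
    opcode, _cid, nsid, lba, nlb = struct.unpack("<BxHIQH", buf)
    return {"opcode": opcode, "nsid": nsid, "lba": lba, "nlb": nlb}

READ = 0x02  # the NVMe I/O read opcode
wire_bytes = encode_command(READ, namespace_id=1, lba=4096, num_blocks=8)
assert decode_command(wire_bytes)["lba"] == 4096
```

A real transport additionally carries data buffers, queue-level flow control, authentication, and discovery, all of which this toy encoding omits; the point is only that the command, not the storage bus, crosses the network.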

As of May 2021, supported NVMe transport protocols are:

  • FC, FC-NVMe{{cite web |url=https://standards.incits.org/apps/group_public/download.php/87364/T11-2017-00020-v003.pdf |title=FC-NVMe rev 1.14 (T11/16-020vB) |website=INCITS |date=April 19, 2017 |access-date=May 26, 2021 |archive-date=April 10, 2022 |archive-url=https://web.archive.org/web/20220410213054/https://standards.incits.org/apps/group_public/download.php/87364/T11-2017-00020-v003.pdf |url-status=dead }}
  • TCP, NVMe/TCP{{cite web |url=https://nvmexpress.org/developers/nvme-of-specification/ |title=NVMe-oF Specification |website=NVMexpress |date= 15 April 2020|access-date=May 26, 2021}}
  • Ethernet, RoCE v1/v2 (RDMA over Converged Ethernet){{cite web |url=https://cw.infinibandta.org/document/dl/7148 |title=Supplement to InfiniBand Architecture Specification Volume 1 Release 1.2.1 |website=Infiniband |date=September 2, 2014 |access-date=May 26, 2021 |archive-date=March 9, 2016 |archive-url=https://web.archive.org/web/20160309123709/https://cw.infinibandta.org/document/dl/7148 |url-status=dead }}
  • InfiniBand, NVMe over InfiniBand or NVMe/IB{{cite web |url=https://www.storagereview.com/review/nvme-nvme-of-background-overview |title=What is NVMe-oF? |website=Storage Review |date=June 27, 2020 |access-date=May 26, 2021}}

The standard for NVMe over Fabrics was published by NVM Express, Inc. in 2016.{{cite web |title=NVM Express over Fabrics Revision 1.0 |url=https://nvmexpress.org/wp-content/uploads/NVMe_over_Fabrics_1_0_Gold_20160605.pdf |publisher=NVM Express, Inc. |date=5 June 2016 |access-date=24 April 2018 |archive-date=30 January 2019 |archive-url=https://web.archive.org/web/20190130172838/https://www.nvmexpress.org/wp-content/uploads/NVMe_over_Fabrics_1_0_Gold_20160605.pdf |url-status=live }}{{cite web |first=David |last=Woolf |title=What NVMe over Fabrics Means for Data Storage |url=https://www.networkcomputing.com/storage/what-nvme-over-fabrics-means-data-storage/1066956182 |date=February 9, 2018 |access-date=April 24, 2018 |archive-date=April 14, 2018 |archive-url=https://web.archive.org/web/20180414012049/https://www.networkcomputing.com/storage/what-nvme-over-fabrics-means-data-storage/1066956182 |url-status=live }}

The following software implements the NVMe-oF protocol:

  • Linux NVMe-oF initiator and target.{{cite web |first=Christoph |last=Hellwig |title=NVMe Over Fabrics Support in Linux |url=https://events.static.linuxfound.org/sites/events/files/slides/nvme-over-fabrics.pdf |date=July 17, 2016 |access-date=April 24, 2018 |archive-date=April 14, 2018 |archive-url=https://web.archive.org/web/20180414011415/https://events.static.linuxfound.org/sites/events/files/slides/nvme-over-fabrics.pdf |url-status=live }} RoCE transport was supported initially, and with Linux kernel 5.x, native support for TCP was added.{{cite web |url=https://www.linuxjournal.com/content/data-flash-part-iii-nvme-over-fabrics-using-tcp |author=Petros Koutoupis |title=Data in a Flash, Part III: NVMe over Fabrics Using TCP |website=Linux Journal |date=June 10, 2019 |access-date=May 26, 2021 |archive-date=April 27, 2021 |archive-url=https://web.archive.org/web/20210427112002/https://www.linuxjournal.com/content/data-flash-part-iii-nvme-over-fabrics-using-tcp |url-status=live }}
  • Storage Performance Development Kit (SPDK) NVMe-oF initiator and target drivers.{{cite web |first=Jonathan |last=Stern |title=Announcing the SPDK NVMf Target |url=http://www.spdk.io/feature/2016/06/07/announce-nvmf/ |date=7 June 2016}} Both RoCE and TCP transports are supported.{{cite web |url=https://ci.spdk.io/download/performance-reports/SPDK_rdma_perf_report_2101.pdf |title=SPDKNVMe-oFRDMA (Target & Initiator) Performance Report |website=SPDK |date=February 1, 2021 |access-date=May 26, 2021}}{{cite web |url=https://ci.spdk.io/download/performance-reports/SPDK_tcp_perf_report_2101.pdf |title=SPDKNVMe-oFTCP (Target & Initiator) Performance Report |website=SPDK |date=February 1, 2020 |access-date=May 26, 2021 |archive-date=May 25, 2021 |archive-url=https://web.archive.org/web/20210525130041/https://ci.spdk.io/download/performance-reports/SPDK_tcp_perf_report_2101.pdf |url-status=live }}
  • StarWind NVMe-oF initiator{{cite web |url=https://www.storagereview.com/review/hands-on-with-starwind-nvme-of-initiator-for-windows |title=Hands On with StarWind NVMe-oF Initiator for Windows |website=StorageReview |date=October 6, 2021 |access-date=October 6, 2021 |archive-date=October 7, 2021 |archive-url=https://web.archive.org/web/20211007072629/https://www.storagereview.com/review/hands-on-with-starwind-nvme-of-initiator-for-windows |url-status=live }} and target for Linux and Microsoft Windows, supporting the RoCE, TCP, and Fibre Channel transports.{{cite web |url=https://www.storagereview.com/review/starwind-san-nas-over-fibre-channel |title=StarWind SAN & NAS over Fibre Channel |website=StorageReview |date=July 20, 2022 |access-date=July 20, 2022 |archive-date=July 20, 2022 |archive-url=https://web.archive.org/web/20220720131420/https://www.storagereview.com/review/starwind-san-nas-over-fibre-channel |url-status=live }}
  • [https://www.lightbitslabs.com/nvme-over-tcp/ Lightbits Labs] NVMe over TCP target{{cite web |url=https://blocksandfiles.com/2022/06/09/intel-planning-big-lightbits-nvme-tcp-storage-push/ |title=Intel planning big Lightbits NVMe/TCP storage push |website=Blocks & Files |date=June 9, 2022 |access-date=June 9, 2022 |archive-date=July 6, 2022 |archive-url=https://web.archive.org/web/20220706041449/https://blocksandfiles.com/2022/06/09/intel-planning-big-lightbits-nvme-tcp-storage-push/ |url-status=live }} for various Linux distributions{{cite web |url=https://www.computerweekly.com/news/252462412/LightBits-Super-SSD-brings-NVMe-on-vanilla-Ethernet |title=LightBits Super SSD brings NVMe on vanilla Ethernet |website=ComputerWeekly |date=April 29, 2021 |access-date=April 29, 2021}} and public clouds.
  • Bloombase StoreSafe Intelligent Storage Firewall supports NVMe over RoCE, TCP, and Fibre Channel for transparent storage security protection.
  • [https://www.netapp.com/data-management/ontap-data-management-software/ NetApp ONTAP] supports iSCSI and NVMe over TCP{{Cite web |title=Announcing NVMe/TCP for ONTAP |url=https://www.netapp.com/blog/announcing-nvme-tcp-for-ontap/ |archive-url=http://web.archive.org/web/20240717131424/https://www.netapp.com/blog/announcing-nvme-tcp-for-ontap/ |archive-date=2024-07-17 |access-date=2025-01-23 |website=www.netapp.com |language=en-US}} targets.
  • [https://www.simplyblock.io/our-technology/ Simplyblock] storage platform with NVMe over Fabrics support.{{Cite web |last=Schmidt |first=Michael |date=2024-05-22 |title=How We Built Our Distributed Data Placement Algorithm |url=https://www.simplyblock.io/blog/how-we-build-our-distributed-data-placement-storage-algorithm/ |access-date=2025-01-23 |website=simplyblock |language=en-US}}
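On Linux, the in-kernel initiator listed above is typically driven with the nvme-cli userspace tool. The following is a minimal sketch of discovering and connecting to an NVMe/TCP target; the IP address, port, and NVMe Qualified Name (NQN) are placeholder example values:

```shell
# Load the NVMe/TCP transport module (available since kernel 5.x)
modprobe nvme-tcp

# Ask a target which subsystems it exports (address/port are examples;
# 4420 is the conventional NVMe-oF port)
nvme discover -t tcp -a 192.168.0.10 -s 4420

# Connect to one discovered subsystem by its NQN
nvme connect -t tcp -a 192.168.0.10 -s 4420 \
    -n nqn.2016-06.io.example:target1

# The remote namespace now appears as a local block device (e.g. /dev/nvme1n1)
nvme list
```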

{{Anchor|VS-AHCI}}Comparison with AHCI

The Advanced Host Controller Interface (AHCI) has the benefit of wide software compatibility, but has the downside of not delivering optimal performance when used with SSDs connected via the PCI Express bus. As a logical-device interface, AHCI was developed when the purpose of a host bus adapter (HBA) in a system was to connect the CPU/memory subsystem with a much slower storage subsystem based on rotating magnetic media. As a result, AHCI introduces certain inefficiencies when used with SSD devices, which behave much more like RAM than like spinning media.{{cite web |url=https://www.sata-io.org/sites/default/files/documents/NVMe%20and%20AHCI%20as%20SATA%20Express%20Interface%20Options%20-%20Whitepaper_.pdf |title=AHCI and NVMe as Interfaces for SATA Express Devices – Overview |date=2013-08-09 |access-date=2013-10-02 |author=Dave Landsman |publisher=SATA-IO |archive-date=2013-10-05 |archive-url=https://web.archive.org/web/20131005000700/https://www.sata-io.org/sites/default/files/documents/NVMe%20and%20AHCI%20as%20SATA%20Express%20Interface%20Options%20-%20Whitepaper_.pdf |url-status=live }}

The NVMe device interface has been designed from the ground up, capitalizing on the lower latency and parallelism of PCI Express SSDs, and complementing the parallelism of contemporary CPUs, platforms and applications. At a high level, the basic advantages of NVMe over AHCI relate to its ability to exploit parallelism in host hardware and software, manifested by the differences in command queue depths, efficiency of interrupt processing, the number of uncacheable register accesses, etc., resulting in various performance improvements.{{Cite web |url=http://snia.org/sites/default/files2/SDC2013/presentations/FileSystems/AndyHeron_Enhancements_To_Win81_Storage.pdf |archive-url=https://web.archive.org/web/20140110193117/http://snia.org/sites/default/files2/SDC2013/presentations/FileSystems/AndyHeron_Enhancements_To_Win81_Storage.pdf |title=Advancements in Storage and File Systems in Windows 8.1 |year=2013 |access-date=2014-01-11 |archive-date=2014-01-10 |author=Andy Herron |website=snia.org}}{{rp|17–18}}

The table below summarizes high-level differences between the NVMe and AHCI logical-device interfaces.

{| class="wikitable" style="text-align: center; margin-left: auto; margin-right: auto;"
|+ High-level comparison of AHCI and NVMe
|-
!
! AHCI
! NVMe
|-
! Maximum queue depth
| One command queue;<br />up to 32 commands per queue || Up to 65535 queues;{{cite web |title=NVM Express Base Specification Revision 1.4a |date=March 9, 2020 |author=Amber Huffman |work=Specification |at=section 1.4 Theory of Operation, p. 7 |url=https://nvmexpress.org/wp-content/uploads/NVM-Express-1_4a-2020.03.09-Ratified.pdf |access-date=May 16, 2020 |archive-date=December 13, 2023 |archive-url=https://web.archive.org/web/20231213113152/https://nvmexpress.org/wp-content/uploads/NVM-Express-1_4a-2020.03.09-Ratified.pdf |url-status=live }}<br />up to 65536 commands per queue
|-
! Uncacheable register accesses<br />(2000 cycles each)
| Up to six per non-queued command;<br />up to nine per queued command || Up to two per command
|-
! Interrupts
| A single interrupt || Up to 2048 MSI-X interrupts
|-
! Parallelism<br />and multiple threads
| Requires synchronization lock<br />to issue a command || No locking
|-
! Efficiency<br />for 4 KB commands
| Command parameters require<br />two serialized host DRAM fetches || Gets command parameters<br />in one 64-byte fetch
|-
! Data transmission
| Usually half-duplex || Full-duplex
|-
! Host Memory Buffer (HMB)
| No || Yes
|}

Operating system support

[[File:The Linux Storage Stack Diagram.svg|thumb|NVM Express within the Linux kernel's storage stack{{Cite web |url=https://www.thomas-krenn.com/en/wiki/Linux_Storage_Stack_Diagram |title=Linux Storage Stack Diagram |date=2015-06-01 |access-date=2015-06-08 |author1=Werner Fischer |author2=Georg Schönberger |publisher=Thomas-Krenn.AG |archive-date=2019-06-29 |archive-url=https://web.archive.org/web/20190629213450/https://www.thomas-krenn.com/en/wiki/Linux_Storage_Stack_Diagram |url-status=live }}]]

; ChromeOS

: On February 24, 2015, support for booting from NVM Express devices was added to ChromeOS.{{cite web |url=https://nvmexpress.org/blog/chromeos-adds-boot-support-for-nvm-express/ |title= ChromeOS adds boot support for NVM Express |work=NVM Express |date=24 February 2015 |access-date=21 March 2015}}{{cite web |url=https://chromium.googlesource.com/chromiumos/platform/depthcharge/+/4f503189f7339c667b045ab80a949964ecbaf93e |title=4f503189f7339c667b045ab80a949964ecbaf93e - chromiumos/platform/depthcharge |work=Git at Google |first1=Jason B. |last1=Akers |date=Jan 22, 2015 |access-date=21 March 2015 |archive-date=23 August 2017 |archive-url=https://web.archive.org/web/20170823021433/https://chromium.googlesource.com/chromiumos/platform/depthcharge/+/4f503189f7339c667b045ab80a949964ecbaf93e |url-status=live }}

; DragonFly BSD

: The first release of DragonFly BSD with NVMe support is version 4.6.{{cite web |url=https://www.dragonflybsd.org/release46/ |title=release46 |website=DragonFly BSD |access-date=2016-09-08 |archive-date=2016-09-04 |archive-url=https://web.archive.org/web/20160904012548/http://www.dragonflybsd.org/release46/ |url-status=live }}

; FreeBSD

: Intel sponsored a NVM Express driver for FreeBSD's head and stable/9 branches.{{cite web |title=Log of /head/sys/dev/nvme |url=http://svnweb.freebsd.org/base/head/sys/dev/nvme/?view=log |work=FreeBSD source tree |publisher=The FreeBSD Project |access-date=16 October 2012 |archive-date=29 May 2013 |archive-url=https://web.archive.org/web/20130529021706/http://svnweb.freebsd.org/base/head/sys/dev/nvme/?view=log |url-status=live }}{{cite web |title=Log of /stable/9/sys/dev/nvme |url=http://svnweb.freebsd.org/base/stable/9/sys/dev/nvme/?view=log |work=FreeBSD source tree |publisher=The FreeBSD Project |access-date=3 July 2013 |archive-date=16 February 2018 |archive-url=https://web.archive.org/web/20180216215017/https://svnweb.freebsd.org/base/stable/9/sys/dev/nvme/?view=log |url-status=live }} The nvd(4) and nvme(4) drivers are included in the GENERIC kernel configuration by default since FreeBSD version 10.2 in 2015.{{cite web |title=FreeBSD 10.2-RELEASE Release Notes |url=https://www.freebsd.org/releases/10.2R/relnotes.html#kernel-config |publisher=The FreeBSD Project |access-date=5 August 2015 |archive-date=18 June 2017 |archive-url=https://web.archive.org/web/20170618081513/https://www.freebsd.org/releases/10.2R/relnotes.html#kernel-config |url-status=live }}

; Genode

: Support for consumer-grade NVMe was added to the Genode framework as part of the 18.05{{cite web |title=Release notes for the Genode OS Framework 18.05 |url=https://genode.org/documentation/release-notes/18.05#NVMe_storage_devices |website=genode.org}} release.

; Haiku

: Haiku gained support for NVMe on April 18, 2019.{{cite web |url=https://dev.haiku-os.org/ticket/9910 |title=#9910 NVMe devices support |website=dev.haiku-os.org |access-date=2019-04-18 |archive-date=2016-08-06 |archive-url=https://web.archive.org/web/20160806110839/https://dev.haiku-os.org/ticket/9910 |url-status=live }}{{cite web |url=https://www.haiku-os.org/blog/kallisti5/2019-04-16_nvme_driver_now_available/ |title=NVMe Driver Now Available - Haiku Project |website=www.haiku-os.org |access-date=2016-07-28}}

; illumos

: illumos received support for NVMe on October 15, 2014.{{cite web |url=https://github.com/illumos/illumos-gate/commit/3c9168fa8e9c30d55b3aa2fde74bd7da46df53f5 |title=4053 Add NVME Driver Support to Illumos |website=github.com |access-date=2016-05-23 |archive-date=2017-05-10 |archive-url=https://web.archive.org/web/20170510112656/https://github.com/illumos/illumos-gate/commit/3c9168fa8e9c30d55b3aa2fde74bd7da46df53f5 |url-status=live }}

; iOS

: With the release of the iPhone 6S and 6S Plus, Apple introduced the first mobile deployment of NVMe over PCIe in smartphones.{{Cite web |url=http://www.anandtech.com/show/9662/iphone-6s-and-iphone-6s-plus-preliminary-results |title=iPhone 6s and iPhone 6s Plus Preliminary Results |last=Ho |first=Joshua |website=AnandTech |date=September 28, 2015 |access-date=2016-06-01 |archive-date=2016-05-26 |archive-url=https://web.archive.org/web/20160526001956/http://www.anandtech.com/show/9662/iphone-6s-and-iphone-6s-plus-preliminary-results |url-status=live }} Apple followed these releases with the release of the first-generation iPad Pro and first-generation iPhone SE that also use NVMe over PCIe.{{cite web |url=https://www.anandtech.com/show/10285/the-iphone-se-review/2 |title=The iPhone SE Review |first=Brandon |last=Chester |date=May 16, 2016 |website=AnandTech}}

; {{Anchor|BLKMQ}}Linux

: Intel published an NVM Express driver for Linux on 3 March 2011,{{cite web |url=http://sb.lwn.net/Articles/431103/ |title=NVM Express driver |author=Matthew Wilcox |publisher=LWN.net |date=2011-03-03 |access-date=2013-11-05 |url-status=dead |archive-url=https://archive.today/20120717195616/http://sb.lwn.net/Articles/431103/ |archive-date=2012-07-17}}{{cite web |url=http://www.flashmemorysummit.com/English/Collaterals/Proceedings/2013/20130812_PreConfD_Busch.pdf |title=Linux NVMe Driver |date=2013-08-12 |access-date=2013-11-05 |author=Keith Busch |website=flashmemorysummit.com |archive-date=2013-11-05 |archive-url=https://web.archive.org/web/20131105224356/http://www.flashmemorysummit.com/English/Collaterals/Proceedings/2013/20130812_PreConfD_Busch.pdf |url-status=live }}{{Cite web |url=https://intel.activeevents.com/sf13/connect/fileDownload/session/FF44850B359CA1CD47D3E6A3437446FD/SF13_SSDL001_100.pdf |title=IDF13 Hands-on Lab: Compiling the NVM Express Linux Open Source Driver and SSD Linux Benchmarks and Optimizations |year=2013 |access-date=2014-01-11 |website=activeevents.com |url-status=dead |archive-url=https://web.archive.org/web/20140111004350/https://intel.activeevents.com/sf13/connect/fileDownload/session/FF44850B359CA1CD47D3E6A3437446FD/SF13_SSDL001_100.pdf |archive-date=2014-01-11}} which was merged into the Linux kernel mainline on 18 January 2012 and released as part of version 3.3 of the Linux kernel on 19 March 2012.{{cite web |url=https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=92b5abbb44e05cdbc4483219f30a435dd871a8ea |title=Merge git://git.infradead.org/users/willy/linux-nvme |date=2012-01-18 |access-date=2013-11-05 |website=kernel.org}} The Linux kernel has supported the NVMe Host Memory Buffer (HMB) feature{{cite journal |title=HMB in DRAM-less NVMe SSDs: Their usage and effects on performance |year=2020 |pmc=7051071 |last1=Kim |first1=K. |last2=Kim |first2=T. |journal=PLOS ONE |volume=15 |issue=3 |pages=e0229645 |doi=10.1371/journal.pone.0229645 |pmid=32119705 |bibcode=2020PLoSO..1529645K |doi-access=free}} since version 4.13.1,{{cite web |url=https://kernelnewbies.org/Linux_4.13#Storage |title=Linux 4.13 has been released on Sun, 3 Sep 2017. |access-date=16 October 2021 |archive-date=29 October 2017 |archive-url=https://web.archive.org/web/20171029120937/https://kernelnewbies.org/Linux_4.13#Storage |url-status=live }} with a default maximum HMB size of 128 MB.{{cite web |url=https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/drivers/nvme/host/pci.c?h=v4.13.1 |title=Pci.c « host « nvme « drivers - kernel/Git/Stable/Linux.git - Linux kernel stable tree |access-date=2021-10-16 |archive-date=2021-10-16 |archive-url=https://web.archive.org/web/20211016153338/https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/drivers/nvme/host/pci.c?h=v4.13.1 |url-status=live }} Support for NVMe Zoned Namespaces (ZNS) was added in version 5.9.
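: A drive's HMB request can be inspected from userspace with the nvme-cli tool; a sketch, assuming a device at /dev/nvme0 (the `max_host_mem_size_mb` module parameter name reflects the driver's 128 MB default cap and should be verified against the running kernel):

```shell
# HMPRE/HMMIN in the controller identify data report the drive's
# preferred and minimum Host Memory Buffer sizes
nvme id-ctrl /dev/nvme0 | grep -Ei 'hmpre|hmmin'

# Kernel-side cap on HMB allocation (default 128, in MB)
cat /sys/module/nvme/parameters/max_host_mem_size_mb
```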

; macOS

: Apple introduced software support for NVM Express in Yosemite 10.10.3. The NVMe hardware interface was introduced in the 2016 MacBook and MacBook Pro.{{cite web |url=http://www.macrumors.com/2015/04/11/nvme-mac-os-x/ |title=Faster 'NVM Express' SSD Interface Arrives on Retina MacBook and OS X 10.10.3 |work=macrumors.com |date=11 April 2015 |access-date=11 April 2015 |archive-date=23 August 2017 |archive-url=https://web.archive.org/web/20170823073749/https://www.macrumors.com/2015/04/11/nvme-mac-os-x/ |url-status=live }}

; NetBSD

: NetBSD added support for NVMe in NetBSD 8.0.{{cite web |url=http://man.netbsd.org/nvme.4 |website=NetBSD manual pages |title=nvme -- Non-Volatile Memory Host Controller Interface |date=2021-05-16 |access-date=2021-05-16}} The implementation is derived from OpenBSD 6.0.

; OpenBSD

: Development work to support NVMe in OpenBSD was started in April 2014 by a senior developer formerly responsible for USB 2.0 and AHCI support.{{cite web |url=http://bxr.su/OpenBSD/sys/dev/ic/nvme.c |website=BSD Cross Reference |author=David Gwynne |title=non volatile memory express controller (/sys/dev/ic/nvme.c) |date=2014-04-16 |access-date=2014-04-27 |archive-date=2014-04-28 |archive-url=https://web.archive.org/web/20140428064533/http://bxr.su/OpenBSD/sys/dev/ic/nvme.c |url-status=live }} Support for NVMe was enabled in the OpenBSD 6.0 release.{{cite web |url=http://man.openbsd.org/OpenBSD-current/man4/nvme.4 |website=OpenBSD man page |author=David Gwynne |title=man 4 nvme |date=2016-04-14 |access-date=2016-08-07 |archive-date=2016-08-21 |archive-url=https://web.archive.org/web/20160821152138/http://man.openbsd.org/OpenBSD-current/man4/nvme.4 |url-status=live }}

; OS/2

: Arca Noae provides an NVMe driver for ArcaOS, as of April 2021. The driver requires advanced interrupts as provided by the ACPI PSD running in advanced interrupt mode (mode 2), and thus also requires the SMP kernel.{{cite web |url=https://www.arcanoae.com/wiki/nvme/ |website=Arca Noae wiki |title=NVME |date=2021-04-03 |publisher=Arca Noae, LLC |access-date=2021-06-08}}

;Solaris

: Solaris received support for NVMe in Oracle Solaris 11.2.{{cite web |url=https://docs.oracle.com/cd/E36784_01/html/E36884/esc-nxge-7d.html |title=nvme(7D) |publisher=Oracle |access-date=2014-12-02 |archive-date=2015-12-09 |archive-url=https://web.archive.org/web/20151209002431/http://docs.oracle.com/cd/E36784_01/html/E36884/esc-nxge-7d.html |url-status=live }}

; VMware

: Intel has provided an NVMe driver for VMware,{{cite web |url=https://downloadcenter.intel.com/download/23929/Intel-Solid-State-Drive-Data-Center-Family-for-NVMe-Drivers |title=Intel Solid-State for NVMe Drivers |date=2015-09-25 |access-date=2016-03-17 |website=intel.com |archive-date=2016-03-25 |archive-url=https://web.archive.org/web/20160325235340/https://downloadcenter.intel.com/download/23929/Intel-Solid-State-Drive-Data-Center-Family-for-NVMe-Drivers |url-status=live }} which is included in vSphere 6.0 and later builds, supporting various NVMe devices.{{cite web |url=http://www.vmware.com/resources/compatibility/vcl/result.php?search=NVMe&searchCategory=all |title=VMware Compatibility Guide for NVMe devices |access-date=2016-03-17 |website=vmware.com |archive-date=2016-03-25 |archive-url=https://web.archive.org/web/20160325054537/http://www.vmware.com/resources/compatibility/vcl/result.php?search=NVMe&searchCategory=all |url-status=live }} As of vSphere 6 update 1, VMware's VSAN software-defined storage subsystem also supports NVMe devices.{{cite web |url=https://blogs.vmware.com/virtualblocks/2015/11/11/vsan-now-supporting-nvme-devices/ |title=VSAN Now Supporting NVMe Devices |date=2015-11-11 |access-date=2016-03-17 |website=vmware.com |archive-date=2016-03-25 |archive-url=https://web.archive.org/web/20160325132909/https://blogs.vmware.com/virtualblocks/2015/11/11/vsan-now-supporting-nvme-devices/ |url-status=live }}

; Windows

: Microsoft added native support for NVMe to Windows 8.1 and Windows Server 2012 R2.{{cite web |url=http://www.myce.com/news/windows-8-1-to-support-hybrid-disks-and-native-nvme-driver-68663/ |title=Windows 8.1 to support hybrid disks and adds native NVMe driver |website=Myce.com |date=2013-09-06 |access-date=2014-01-11 |archive-date=2014-01-10 |archive-url=https://web.archive.org/web/20140110200352/http://www.myce.com/news/windows-8-1-to-support-hybrid-disks-and-native-nvme-driver-68663/ |url-status=live }} Native drivers for Windows 7 and Windows Server 2008 R2 have been added in updates.{{cite web |url=http://support.microsoft.com/kb/2990941/en-us |title=Update to support NVM Express by using native drivers in Windows 7 or Windows Server 2008 R2 |publisher=Microsoft |date=2014-11-13 |access-date=2014-11-17 |archive-date=2014-11-29 |archive-url=https://web.archive.org/web/20141129141731/http://support.microsoft.com/kb/2990941/en-us |url-status=live }} Many vendors have released their own Windows drivers for their devices as well. There are also manually customized installer files available to install a specific vendor's driver to any NVMe card, such as using a Samsung NVMe driver with a non-Samsung NVMe device, which may be needed for additional features, performance, and stability.{{Cite web |url=https://www.win-raid.com/t29f25-Recommended-AHCI-RAID-and-NVMe-Drivers.html |title=Recommended AHCI/RAID and NVMe Drivers |date=10 May 2013 |access-date=19 February 2021 |archive-date=24 February 2021 |archive-url=https://web.archive.org/web/20210224231432/https://www.win-raid.com/t29f25-Recommended-AHCI-RAID-and-NVMe-Drivers.html |url-status=dead }}

: Support for NVMe HMB was added in Windows 10 Anniversary Update (Version 1607) in 2016. In Microsoft Windows from Windows 10 1607 to Windows 11 23H2, the maximum HMB size is 64 MB. Windows 11 24H2 updates the maximum HMB size to 1/64 of system RAM.https://nvmexpress.org/wp-content/uploads/03_Lee_Windows-Windows-Driver_Final.pdf

: Support for NVMe ZNS and KV was added in Windows 10 version 21H2 and Windows 11 in 2021.{{Cite web |last=lorihollasch |date=2023-08-09 |title=NVMe Feature and Extended Capability Support - Windows drivers |url=https://learn.microsoft.com/en-us/windows-hardware/drivers/storage/stornvme-feature-support |access-date=2024-04-11 |website=learn.microsoft.com |language=en-us}} The OpenFabrics Alliance maintains an open-source NVMe Windows Driver for Windows 7/8/8.1 and Windows Server 2008R2/2012/2012R2, developed from the baseline code submitted by several promoter companies in the NVMe workgroup, specifically IDT, Intel, and LSI.{{cite web |title=Windows NVM Express |work=Project web site |url=http://www.openfabrics.org/resources/developer-tools/nvme-windows-development.html |access-date=September 18, 2013 |url-status=dead |archive-url=https://web.archive.org/web/20130612081416/https://www.openfabrics.org/resources/developer-tools/nvme-windows-development.html |archive-date=June 12, 2013}} The current release is 1.5 from December 2016.{{Cite web |url=https://svn.openfabrics.org/svnrepo/nvmewin/releases/ |title=Nvmewin - Revision 157: /Releases |access-date=2016-08-13 |archive-date=2017-05-10 |archive-url=https://web.archive.org/web/20170510114242/https://svn.openfabrics.org/svnrepo/nvmewin/releases/ |url-status=dead}} The Windows built-in NVMe driver does not support hardware acceleration; that requires vendor-supplied drivers.{{Cite web |last=lorihollasch |title=StorNVMe Command Set Support - Windows drivers |url=https://learn.microsoft.com/en-us/windows-hardware/drivers/storage/stornvme-command-set-support |access-date=2025-04-11 |website=learn.microsoft.com |language=en-us}}

Software support

; QEMU

: NVMe is supported by QEMU since version 1.6 released on August 15, 2013.{{cite web |url=http://wiki.qemu.org/ChangeLog/1.6 |title=ChangeLog/1.6 |work=qemu.org |access-date=21 March 2015 |archive-date=29 September 2018 |archive-url=https://web.archive.org/web/20180929194541/https://wiki.qemu.org/ChangeLog/1.6 |url-status=live }} NVMe devices presented to QEMU guests can be either real or emulated.
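: An emulated NVMe controller can be attached to a guest with QEMU's `-device nvme` option; a minimal sketch, where the image file name and serial number are arbitrary example values:

```shell
# Create a backing image for the emulated NVMe namespace
qemu-img create -f qcow2 nvm.qcow2 8G

# Boot a guest with the image exposed as an NVMe controller;
# the guest will see it as /dev/nvme0n1 (on Linux guests)
qemu-system-x86_64 -m 1G \
    -drive file=nvm.qcow2,if=none,id=nvm0 \
    -device nvme,serial=deadbeef,drive=nvm0
```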

; UEFI

: An open source NVMe driver for UEFI called NvmExpressDxe is available as part of EDKII, the open-source reference implementation of UEFI.{{cite web |url=https://sourceforge.net/projects/edk2/files/EDK%20II%20Releases/other/NvmExpressDxe-alpha.zip/download |title=Download EDK II from |website=SourceForge.net |access-date=2014-01-11 |archive-date=2013-12-31 |archive-url=https://web.archive.org/web/20131231003656/http://sourceforge.net/projects/edk2/files/EDK%20II%20Releases/other/NvmExpressDxe-alpha.zip/download |url-status=live }}

Management tools

= nvmecontrol =

The nvmecontrol tool is used to control an NVMe disk from the command line on FreeBSD. It was added in FreeBSD 9.2.{{cite web |title=NVM Express control utility |date=2018-03-12 |url=https://www.freebsd.org/cgi/man.cgi?query=nvmecontrol&sektion=8&manpath=freebsd-release-ports |publisher=The FreeBSD Project |access-date=2019-07-12}}
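Typical nvmecontrol invocations look as follows; a sketch, assuming a controller named nvme0 (the SMART/health information log is NVMe log page 0x02):

```shell
# List NVMe controllers and namespaces attached to the system
nvmecontrol devlist

# Dump the identify data of a controller
nvmecontrol identify nvme0

# Read the SMART / health information log page (0x02)
nvmecontrol logpage -p 2 nvme0
```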

= nvme-cli =

NVM-Express user space tooling for Linux.{{cite web |title=GitHub - linux-nvme/nvme-cli: NVMe management command line interface. |date=2019-03-26 |url=https://github.com/linux-nvme/nvme-cli |publisher=linux-nvme|access-date=2019-03-27}}
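Common nvme-cli invocations look as follows; a sketch, assuming a controller at /dev/nvme0:

```shell
# Enumerate NVMe devices visible to the system
nvme list

# Read the SMART / health information log of a controller
nvme smart-log /dev/nvme0

# Show controller identify data (model, firmware, capabilities)
nvme id-ctrl /dev/nvme0
```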

See also

References

{{Reflist}}