NVMe over Fabrics: The Evolution of Non-Volatile Storage in the Datacenter


by Phil Cayton

NVMe, or NVM Express, is a standard interface for connecting hosts to SSDs attached to the PCIe bus. It is designed to exploit the inherent high performance and internal parallelism of SSDs for data transfers.

Released in June 2016, the NVMe over Fabrics (NVMeoF) specification is an emerging technology that gives datacenter networks unprecedented access to NVMe SSD storage. This new standard enables efficient scaling of NVMe-based solid-state drives (SSDs) over datacenter fabrics such as Remote Direct Memory Access (RDMA) networks (e.g., InfiniBand, iWARP, Intel® Omni-Path Architecture) and Fibre Channel. This allows for faster, more scalable connections between servers and storage, as well as between storage controllers and NVMe enclosures.

When considered in this context, NVMeoF is a complementary technology to OpenFabrics Software (OFS). They share a similar purpose: to let their sister technologies do what they do best while enabling low-latency, high-throughput data transfer. NVMe over Fabrics simply serves that role at a different stage of the data management process, thereby creating an even stronger system overall.

For a deep dive into the NVMe over Fabrics spec and performance previews, I invite you to review my presentation from the 2016 OpenFabrics Alliance Workshop (slides, video). The “CliffsNotes” introduction follows here.

Specification Highlights

The major points addressed by the NVMe over Fabrics spec include:

  • The effective scaling limit of PCIe-attached NVMe, roughly 255 devices per system
  • Scaling out to the hundreds or thousands of NVMe SSDs that datacenters require
  • Avoiding the traditional scale-out protocol translations between hosts and NVMe SSDs
  • Extending NVMe to enable SSD scale-out and low-latency I/O at near-local speeds

The specification is not only an enhancement to NVMe technology. It is also an improvement over previous implementations that attempted to leverage NVM storage only to bottleneck at the network interconnect. The host and target architecture defined by the spec retains the efficiency of local PCIe NVMe access over fabrics.

There is a clear roadmap to open source drivers, ensuring that the transport is datacenter-fabric-agnostic. The open source drivers are currently being tested on InfiniBand, iWARP, RoCE, and Intel® Omni-Path Architecture fabrics.
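
Because the host driver presents the same interface regardless of fabric, a connection request looks identical across these transports. As a rough sketch of what a userspace tool such as nvme-cli does under the hood, the C program below writes an option string to the /dev/nvme-fabrics device exposed by the host driver; the transport address, service ID, and subsystem NQN shown are placeholder values.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* Placeholder connection parameters for illustration only. */
        const char *opts =
            "transport=rdma,traddr=192.168.1.10,trsvcid=4420,"
            "nqn=nqn.2016-06.io.example:subsys1";

        int fd = open("/dev/nvme-fabrics", O_RDWR);
        if (fd < 0) {
            perror("open /dev/nvme-fabrics");
            return 1;
        }

        /* The host driver parses the option string, performs the fabric
         * connect, and creates a new controller (e.g. /dev/nvme0) whose
         * namespaces then appear as ordinary local block devices. */
        if (write(fd, opts, strlen(opts)) < 0) {
            perror("connect");
            close(fd);
            return 1;
        }
        close(fd);
        return 0;
    }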

Although driver performance is still being fine-tuned, the latency added over local operations has already been shown to be minimal. Key mechanisms defined by the spec include message-based queueing (NVMe commands and data encapsulated in fabric capsules), fabric subsystem discovery, and queue creation using the Fabrics Connect command.
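
To make the capsule idea concrete, here is a simplified C sketch of the Fabrics Connect command and its in-capsule data. Field names follow the public specification, but the layout shown is illustrative only; the authoritative definitions live in the spec itself and in the Linux kernel's include/linux/nvme.h, and on-the-wire fields are little-endian.

    #include <stdint.h>

    /* Illustrative sketch of the 64-byte Connect command; assumes the
     * compiler lays out these fields without padding. */
    struct nvmf_connect_sketch {
        uint8_t  opcode;       /* 0x7f identifies a Fabrics command */
        uint8_t  resv1;
        uint16_t command_id;
        uint8_t  fctype;       /* 0x01 identifies Connect */
        uint8_t  resv2[19];
        uint8_t  sgl[16];      /* SGL descriptor locating the connect data */
        uint16_t recfmt;       /* record format of the connect data */
        uint16_t qid;          /* 0 = admin queue, >0 = I/O queue */
        uint16_t sqsize;       /* submission queue size (zero-based) */
        uint8_t  cattr;        /* connect attributes */
        uint8_t  resv3;
        uint32_t kato;         /* keep-alive timeout (admin queue only) */
        uint8_t  resv4[12];
    };

    /* The in-capsule data names both endpoints by NVMe Qualified Name. */
    struct nvmf_connect_data_sketch {
        uint8_t  hostid[16];     /* host identifier (UUID) */
        uint16_t cntlid;         /* 0xffff asks the target to assign one */
        uint8_t  resv[238];
        char     subsysnqn[256]; /* e.g. "nqn.2016-06.io.example:subsys1" */
        char     hostnqn[256];
        uint8_t  resv2[256];
    };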

The NVMe over Fabrics specification is publicly available and maintained by NVM Express, Inc. The required host and target drivers are available in the upcoming Linux 4.8 kernel under drivers/nvme/{host,target}.
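
On the target side, the upstream driver is administered through configfs. The sketch below assumes the configfs layout of the in-kernel nvmet driver and uses placeholder device, address, and NQN values: it creates a subsystem, backs a namespace with a local NVMe device, and exposes the subsystem on an RDMA port.

    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    #define NVMET  "/sys/kernel/config/nvmet"
    #define SUBSYS NVMET "/subsystems/nqn.2016-06.io.example:subsys1"
    #define PORT   NVMET "/ports/1"

    /* Write a single value into a configfs attribute file. */
    static int put(const char *path, const char *val)
    {
        FILE *f = fopen(path, "w");
        if (!f) { perror(path); return -1; }
        fputs(val, f);
        return fclose(f);
    }

    int main(void)
    {
        /* Create the subsystem; for this demo, let any host connect. */
        mkdir(SUBSYS, 0755);
        put(SUBSYS "/attr_allow_any_host", "1");

        /* Back namespace 1 with a local NVMe device and enable it. */
        mkdir(SUBSYS "/namespaces/1", 0755);
        put(SUBSYS "/namespaces/1/device_path", "/dev/nvme0n1");
        put(SUBSYS "/namespaces/1/enable", "1");

        /* Create an RDMA port listening on a placeholder address. */
        mkdir(PORT, 0755);
        put(PORT "/addr_trtype", "rdma");
        put(PORT "/addr_adrfam", "ipv4");
        put(PORT "/addr_traddr", "192.168.1.10");
        put(PORT "/addr_trsvcid", "4420");

        /* Expose the subsystem on the port by linking the two. */
        symlink(SUBSYS, PORT "/subsystems/nqn.2016-06.io.example:subsys1");
        return 0;
    }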