NVMe-oF, also known as NVMe over Fabrics, is a protocol used to connect hosts to storage across a network fabric using the NVMe protocol.
This protocol enables data transfer between the host and the storage system using message-based NVMe commands. Data can be transmitted over Ethernet, Fibre Channel (FC), or InfiniBand.
It should be noted that NVM Express Inc., the non-profit organization behind the standard, released version 1.0 of the NVMe protocol in March 2011. On June 5, 2016, it released version 1.0 of NVMe-oF, followed by NVMe version 1.3 in May 2017.
These updates significantly improved solid-state drive (SSD) security, resource sharing, and stability. NVM Express says:
Network equipment vendors are working to develop a mature enterprise ecosystem that enables endpoint devices to support NVMe over fabrics. These endpoints include server operating systems, server hypervisors, network adapter cards, the operating systems used in storage equipment, and storage drives.
SAN switch vendors are working to use 32 Gbps Fibre Channel as the logical fabric for NVMe flash in their products. As a result, several implementations of the protocol have been introduced since the initial development of NVMe-oF, including NVMe-oF over Remote Direct Memory Access (RDMA), NVMe-oF over Fibre Channel, and others.
What are the uses of NVMe over Fabrics?
NVMe-oF technology is still in its infancy, but network equipment vendors are working to add it to their network architectures. NVMe-oF gives network experts an advanced storage protocol that can fully exploit the performance of today's solid-state drives. It can also bridge the gap between direct-attached storage (DAS) and the storage area network (SAN), allowing organizations to manage workloads effectively by delivering high throughput and low latency.
NVMe connects directly to servers, with NVMe flash cards replacing traditional solid-state storage media. Compared with technologies such as all-flash storage, this architecture has significant performance advantages but also drawbacks. To use NVMe, network experts must employ third-party software to optimize and maintain data read and write operations, and in NVMe arrays the storage controller can become a bottleneck.
NVMe-oF paves the way for rack-scale flash systems with integrated native data management. NVMe-oF is also used to optimize real-time analytics, big data processing, and machine learning. Of course, the degree and speed of adoption depend on how rapidly the NVMe ecosystem matures (Figure 1).
Figure 1
What are the benefits of NVMe over Fabrics?
The advantages of NVMe-oF-based storage are as follows:
- Low latency.
- More parallel requests.
- Higher overall performance.
- Shorter storage stacks in the server-side operating system.
- Improved storage array performance.
- Faster solutions for endpoint devices, supporting SAS, SATA, and NVMe SSDs, as well as other drives and interfaces.
- Support for a variety of deployment scenarios.
What are the technical features of NVMe over Fabrics?
The essential technical features of NVMe-oF include:
- High speed.
- Low network latency.
- Credit-based flow control (see the sketch after this list).
- Scalability, with support for up to thousands of devices.
- Fabric multipath support, allowing simultaneous paths between an NVMe host initiator and target storage media.
- Support for multiple fabric hosts sending and receiving commands to and from multiple storage subsystems simultaneously.
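Credit-based flow control means a sender may only transmit when the receiver has explicitly granted it buffer credits, which prevents buffer overruns on the fabric. The following minimal Python sketch illustrates the idea; all names are hypothetical and it is not tied to any real NVMe-oF implementation.

```python
# Minimal sketch of credit-based flow control: the receiver grants credits
# for its free buffer slots, and the sender transmits only while it holds
# credits. Hypothetical illustration, not a real fabric implementation.
from collections import deque


class Receiver:
    def __init__(self, buffer_slots: int):
        self.buffer = deque()
        self.free_slots = buffer_slots

    def grant_credits(self) -> int:
        # Advertise all currently free slots as credits.
        granted, self.free_slots = self.free_slots, 0
        return granted

    def receive(self, frame: str) -> None:
        self.buffer.append(frame)

    def drain(self) -> None:
        # Processing a frame frees a buffer slot, which can be re-granted.
        while self.buffer:
            self.buffer.popleft()
            self.free_slots += 1


class Sender:
    def __init__(self):
        self.credits = 0

    def send(self, receiver: Receiver, frame: str) -> bool:
        if self.credits == 0:          # no credit: must not transmit
            return False
        self.credits -= 1
        receiver.receive(frame)
        return True


rx, tx = Receiver(buffer_slots=2), Sender()
tx.credits += rx.grant_credits()       # receiver advertises 2 credits
print(tx.send(rx, "frame-1"))          # True
print(tx.send(rx, "frame-2"))          # True
print(tx.send(rx, "frame-3"))          # False: out of credits
rx.drain()
tx.credits += rx.grant_credits()       # credits replenished after draining
print(tx.send(rx, "frame-3"))          # True
```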
What are the critical differences between NVMe and NVMe over Fabrics?
NVMe is an alternative to the Small Computer System Interface (SCSI) standard for connecting and transferring data between a host and a storage device or system. In general, NVMe is designed for fast media, such as solid-state drives and post-flash memory technologies.
The NVMe standard differs substantially from the SCSI and SATA protocols, which were developed for storage media such as hard disks, and it makes access to information several times faster.
NVMe supports up to 64,000 queues, each of which may be up to 64,000 commands deep. I/O commands and their responses run on the processor core that owns the queue, giving multi-core processors a high degree of parallelism.
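To make the queueing model concrete, here is a minimal Python sketch of per-core queue pairs; all names are hypothetical illustrations, not part of any real driver.

```python
# Illustrative sketch of NVMe's multi-queue model: each CPU core owns its
# own submission/completion queue pair, so cores never contend for a shared
# queue. Hypothetical simplification, not a real driver.
from collections import deque
from dataclasses import dataclass, field

MAX_QUEUE_DEPTH = 64_000  # NVMe allows queues up to 64,000 commands deep


@dataclass
class QueuePair:
    """One submission queue (SQ) and one completion queue (CQ) per core."""
    core_id: int
    sq: deque = field(default_factory=deque)
    cq: deque = field(default_factory=deque)

    def submit(self, command: str) -> None:
        if len(self.sq) >= MAX_QUEUE_DEPTH:
            raise RuntimeError("submission queue full")
        self.sq.append(command)

    def process(self) -> None:
        # The device consumes submissions and posts completions; in a real
        # controller this happens asynchronously in hardware.
        while self.sq:
            self.cq.append(f"completed:{self.sq.popleft()}")


# One queue pair per core: no locking between cores is needed.
queue_pairs = {core: QueuePair(core) for core in range(4)}
queue_pairs[0].submit("READ lba=0 len=8")
queue_pairs[0].process()
print(queue_pairs[0].cq.popleft())  # -> completed:READ lba=0 len=8
```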
Figure 2
NVMe-based devices transfer data over the PCIe serial bus; no dedicated hardware controller is required to route storage traffic. With NVMe, a PCIe solid-state drive can be installed directly on the server motherboard, allowing data to move to and from the storage device or subsystem at higher speed.
One of the main differences between NVMe and NVMe-oF is the transport-mapping mechanism for sending and receiving commands and responses. NVMe-oF uses a message-based model for communication between a host and a target storage device, whereas local NVMe maps commands and responses into the host's shared memory over the PCIe interface.
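The contrast can be sketched in a few lines of Python. Everything below is a hypothetical simplification (the structures are not the actual NVMe register or capsule layouts): the local PCIe model writes a command into memory the controller can see and rings a doorbell, while the fabrics model packs the same command into a self-contained capsule that can travel over any transport.

```python
# Illustrative contrast between the two transport models. All structures are
# hypothetical simplifications, not the real NVMe register or capsule layout.
import struct

# --- Local NVMe (PCIe): the command lives in host memory shared with the device.
submission_queue = [None] * 16          # stand-in for a DMA-visible ring
doorbell = {"sq_tail": 0}               # stand-in for a device doorbell register

def submit_local(opcode: int, lba: int, length: int) -> None:
    slot = doorbell["sq_tail"]
    submission_queue[slot] = (opcode, lba, length)            # write shared memory
    doorbell["sq_tail"] = (slot + 1) % len(submission_queue)  # "ring" the doorbell

# --- NVMe-oF: the same command becomes a self-contained message (capsule).
def build_capsule(opcode: int, lba: int, length: int) -> bytes:
    # A capsule carries everything the target needs, so it can travel over
    # RDMA, Fibre Channel, or TCP rather than relying on shared memory.
    return struct.pack("<BQI", opcode, lba, length)

submit_local(0x02, 0, 8)                 # local: memory write plus doorbell
capsule = build_capsule(0x02, 0, 8)      # fabrics: message sent on the wire
print(submission_queue[0], capsule.hex())
```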
NVMe over Fabrics based on Remote Direct Memory Access (RDMA)
NVMe-oF based on remote direct memory access was developed by a working group within the NVM Express organization. Remote direct memory access is a memory-to-memory transfer mechanism between two computers: data is sent from one memory address space to another without involving the operating system or the processor. This means less overhead, with access and response times measured in microseconds. Available mappings include RDMA over Converged Ethernet (RoCE) and the Internet Wide Area RDMA Protocol (iWARP) for Ethernet, as well as InfiniBand.
RDMA-based NVMe-oF transports storage traffic across the fabric, which is why the protocol is often described as a common language for compute and storage servers that need to exchange data. To take advantage of RDMA-based NVMe-oF, however, a new storage network must be deployed, which makes this approach less scalable than the Fibre Channel option.
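Real RDMA requires RDMA-capable NICs and a verbs library, so it cannot be reproduced in a few portable lines. Purely as a loose local analogy, the sketch below uses Python's standard shared-memory module to convey the central idea: one side writes directly into a memory region that the other side reads in place, without the data passing through a kernel socket buffer.

```python
# Loose local analogy for RDMA's memory-to-memory model (NOT actual RDMA,
# which needs RDMA-capable NICs and a verbs library): one party writes
# directly into a registered memory region and the other reads it in place.
from multiprocessing import shared_memory

# "Register" a memory region (in real RDMA, the NIC is told about this region).
region = shared_memory.SharedMemory(create=True, size=64)
try:
    payload = b"status block: ready"
    region.buf[: len(payload)] = payload      # "remote" side writes directly

    # The "local" side attaches to the same region by name and reads in place.
    peer_view = shared_memory.SharedMemory(name=region.name)
    print(bytes(peer_view.buf[: len(payload)]))  # b'status block: ready'
    peer_view.close()
finally:
    region.close()
    region.unlink()
```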
NVMe over Fabrics using Fibre Channel
FC-NVMe was developed by the T11 committee of the International Committee for Information Technology Standards (INCITS). Fibre Channel allows other protocols, such as NVMe, SCSI, and IBM's proprietary Fibre Connection (FICON), to be mapped onto it for sending data and commands between host and target storage devices.
FC-NVMe and Gen 6 FC can be used seamlessly in the same infrastructure, avoiding incompatibility issues in data centers. Customers can simply upgrade the firmware of their Fibre Channel switches, provided that their host bus adapters (HBAs) support 16 Gbps or 32 Gbps Fibre Channel and NVMe-oF-capable storage equipment.
Interestingly, the Fibre Channel protocol supports shared NVMe flash access, but interpreting and translating encapsulated SCSI commands into NVMe commands carries a performance penalty.
In this regard, the Fibre Channel Industry Association (FCIA) has developed standards for implementing FC-NVMe compatibly with legacy infrastructure, so that a single FC-NVMe adapter can support SCSI-based media, solid-state drives, and PCIe-connected NVMe flash cards.
NVMe over Fabrics using TCP/IP
One of the most significant developments around NVMe-oF is its extension to TCP/IP. NVMe-oF can use the TCP protocol to send data, which makes NVMe-oF usable on a standard Ethernet network. Because the communication channel runs over any Ethernet or Internet network, there is no need to deploy or configure special equipment.
TCP is a well-known standard for establishing and maintaining connections when exchanging data over a network, and it works in concert with the IP protocol. Since both protocols already underpin communication among local networks, private networks, and the Internet, they give NVMe-oF an extremely broad reach.
The TCP binding defines how queues, capsules, and data are mapped and supports TCP communication channels between NVMe-oF hosts and controllers over IP networks.
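As a rough illustration of what such a binding must handle (this is a simplified, hypothetical framing, not the actual NVMe/TCP PDU format defined by the specification), the sketch below length-prefixes each capsule so that discrete command messages survive TCP's byte-stream semantics.

```python
# Simplified sketch of sending command capsules over a TCP-style byte stream.
# Each capsule is length-prefixed so the receiver can recover message
# boundaries. Hypothetical framing, not the real NVMe/TCP PDU format.
import socket
import struct

def send_capsule(sock: socket.socket, capsule: bytes) -> None:
    # Prefix the capsule with its 4-byte little-endian length.
    sock.sendall(struct.pack("<I", len(capsule)) + capsule)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError("peer closed connection")
        data += chunk
    return data

def recv_capsule(sock: socket.socket) -> bytes:
    (length,) = struct.unpack("<I", recv_exact(sock, 4))
    return recv_exact(sock, length)

# socketpair stands in for a host-to-controller TCP connection.
host, controller = socket.socketpair()
send_capsule(host, b"READ lba=0 len=8")
print(recv_capsule(controller))   # b'READ lba=0 len=8'
host.close(); controller.close()
```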
NVMe-oF using TCP/IP is a good option for organizations that want to leverage their existing Ethernet infrastructure, and it gives developers a way to move NVMe deployments beyond iSCSI. For example, an organization that chooses to avoid the potential complications of implementing NVMe over fabrics with RDMA can use the TCP/IP-based NVMe-oF support in the Linux kernel to take full advantage of this architecture.
Storage industry support for NVMe-oF and NVMe technology
Both legacy storage vendors and startups are scrambling to capture a significant share of this market. Notable NVMe and NVMe-oF all-flash storage products include:
- DataDirect Networks (DDN) Flashscale
- Datrium DVX hybrid system
- Kaminario K2.N
- NetApp Fabric-Attached Storage (FAS) arrays
- Pure Storage FlashArray
- Tegile IntelliFlash
In December 2017, IBM released a preview of NVMe-oF over InfiniBand combining Power9 systems and FlashSystem V9000 storage, a product designed for heavy computing workloads.
In 2017, HPE introduced its HPE Persistent Memory server flash alongside ProLiant Gen9 servers and NVMe-compatible, persistent-memory-based PCIe solid-state drives. Dell EMC was one of the first storage vendors to bring an all-flash NVMe product to market, developing the DSSD D5 array with Dell PowerEdge servers and a dedicated NVMe-over-PCIe network.
Meanwhile, several startups have designed NVMe all-flash arrays that attracted attention, including the following:
- Apeiron Data Systems' storage array uses NVMe drives and places data services on field-programmable gate arrays (FPGAs) instead of on servers attached to the storage arrays.
- Excelero's software-defined storage can be used with standard servers.
- Pavilion Data Systems has developed an array based on network interface cards, PCIe switches, and standard processors. The company's 4U device includes 20 storage controllers and 40 Ethernet ports connecting to 72 NVMe SSDs through an internal PCIe switch network.
- Vexata Inc. offers the VX-100 array and the Vexata Active Data Fabric distributed software. The software manages the NVMe array, which includes a front-end controller, an FPGA-based cut-through router, and data nodes, and it schedules I/O operations and metadata.
In general, IT equipment vendors set off on a new path in 2017, one built around NVMe-oF technology.
Network vendors have been waiting for storage vendors to launch high-performance NVMe-oF-based arrays. The major players in Fibre Channel switching, Brocade and Cisco, have each launched 32 Gbps Gen 6 FC equipment that supports NVMe flash traffic and FC-NVMe fabrics. In addition, Cavium has updated its QLogic Gen 6 FC and FastLinQ Ethernet adapters for NVMe-oF.
Among drive manufacturers, Intel leads the way with dual-port NVMe solid-state drives based on 3D NAND and Intel Optane NVMe drives based on 3D XPoint memory technology. Intel claims Optane NVMe drives are nearly eight times faster than NAND-flash-based NVMe PCIe SSDs. Seagate has also introduced its Nytro 5000 M.2 NVMe SSD and has begun work on a 64 TB NVMe add-in card.
Last word
Due to the increasing dependence of companies and users on data hosted in cloud services, demand for cloud storage has grown significantly. This growth spurred the emergence of NVMe-oF, a technology that many experts believe will revolutionize the future of storage media.
NVMe is becoming increasingly popular thanks to its fast multitasking performance, low latency, and high throughput. While NVMe is used in personal computers to improve video editing, gaming, and more, its real benefits are most evident at enterprise scale.