This article is part of the Technology Insight series, made possible with funding from Intel.
It never made sense to me that SSDs held on to the 2.5” form factor borrowed from mechanical disks for as long as they did. NAND chips — the non-volatile memory technology that stores data inside SSDs — offer versatility and resilience against operating shock that spinning platters can only dream of. And they’re almost completely unconstrained by physical dimensions. Really, the only reason to keep packing solid-state storage into hard drive enclosures is legacy compatibility.
The reality is that new form factors are downright disruptive, and nobody wants to fight an uphill battle for adoption unless there’s an exceptional reason for it. While today’s SSD market is more diverse, including PCI Express add-in cards and motherboard-mounted M.2 drives, dense storage servers still utilize 2.5” bays.
The Enterprise & Datacenter SSD Form Factor (EDSFF) is poised to change that. The availability of first-wave storage products based on this hot design specification promises disruptive, screamingly fast new possibilities for high-performance computing. Think big data servers, AI, high-end gaming, and other data- or graphics-intensive applications where large amounts of fast memory and storage are advantageous.
Now that we’ve introduced EDSFF and explained how it changes datacenter storage, it’s time to look at the form factor’s practical applications. Get ready to learn about:
- The first EDSFF-based SSDs already available
- 1U servers and JBOF (just a bunch of flash) enclosures with 32 EDSFF drive bays across the front
- How EDSFF will evolve, including support for PCIe 4.0 and 5.0, plus wider links for additional throughput
Remind me: What is EDSFF? And why do I care?
With measurements borrowed from Intel’s “ruler,” the specification is named for its long, narrow shape. EDSFF makes it possible to lay out NAND chips on a PCB sized to maximize capacity and heat transfer, rather than cramming them into an enclosure meant for disks. That translates to more terabytes per rack unit for applications that can’t get enough capacity, and lower operating temperatures in dense server systems. The new approach optimizes airflow and provides an unprecedented opportunity to facilitate on-demand, disaggregated storage for heavy-compute and GPU-bound workloads.
The new form factor sheds compatibility with legacy drive sleds in the front of rack-mounted systems. It represents a fresh start for flash memory in the datacenter, thanks to the latest generation of lower-cost QLC NAND, plus higher-capacity drives that deliver more storage in less physical space. And it’s ready to rock.
EDSFF’s readiness might come as a surprise to folks familiar with its proprietary beginnings. However, an industry-wide call for a flexible, flash-optimized form factor expedited the design’s evolution. What we have now is a physical standard built to unlock the potential of NVMe—a high-performance interface for attaching non-volatile memory to the PCI Express bus—through a common connector. It’s backed by 15 industry heavy hitters, ranging from flash manufacturers to ecosystem enablers and solution providers.
Taking a leadership role in next-gen storage
Because EDSFF is based on Intel’s proprietary “ruler” form factor, we weren’t surprised to see the company listed alongside Dell EMC, Facebook, HPE, Lenovo, Microsoft, and Samsung as EDSFF promoters. With eight more companies on board as EDSFF contributors (Amphenol, Foxconn Interconnect Technology, Micron, Molex, Seagate, TE Connectivity, Toshiba Memory, and Western Digital), the form factor has strong support at every stage of the supply chain.
When Intel first started talking about the ruler back in 2017, the same year its first Optane products became available, its goal was to fit one petabyte of data into a 1U platform. Now that the form factor is standardized under EDSFF, and the ecosystem of drives, connectors, servers, and solutions exists as products available for sale, it’s time to look at how EDSFF’s aspirations translate to real-world storage.
Meet the first of its kind
Because the ruler and EDSFF Long (E1.L) form factors are nearly identical, it makes sense that Intel is first out of the gate with a compatible product family. Currently, the company’s SSD D5-P4326 series drives are available in U.2 and E1.L form factors at capacities of up to 15.36TB using four-lane PCIe 3.1 NVMe interfaces. Both versions employ QLC 3D NAND, which stores four bits per memory cell, 33% more than Intel’s previous-generation TLC flash.
While EDSFF’s size and shape play a starring role in the standard’s ability to pack lots of storage into small spaces, don’t underestimate the significance of QLC NAND in making Intel’s SSD D5-P4326 possible. It’s the fundamental building block that allows 15.36TB of capacity to work equally well in two different form factors. Because QLC-based SSDs cost less upfront than their predecessors, there’s a good economic case for replacing hard drives with them. Lower power consumption under comparable workloads, reduced cooling costs, and lower annualized failure rates all factor into the SSD D5-P4326’s TCO advantage over mechanical storage.
Performance comparisons between the two storage types aren’t even fair. Whereas the fastest enterprise hard drives may sustain transfer rates as high as 300 MB/s, the SSD D5-P4326 can read data sequentially at up to 3,200 MB/s and write at 1,600 MB/s over its PCIe x4 link. A read can complete in as little as 135 microseconds, versus the 2-millisecond average latency of a 15,000 RPM disk. In short, the SSD gets to your information faster and moves it more quickly through a larger pipe. A better-than-10x speed boost is especially useful in applications currently limited by storage performance. And in a world facing mountains of big data for real-time processing, keeping CPUs fed with fresh bits is the name of the game.
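The gap is easy to quantify. Here’s a back-of-the-envelope sketch using only the throughput and latency figures quoted above:

```python
# Rough comparison of the figures quoted above:
# Intel SSD D5-P4326 vs. a fast enterprise hard drive.

hdd_seq_mb_s = 300     # ~best-case sustained transfer for an enterprise HDD
ssd_seq_mb_s = 3200    # D5-P4326 sequential read over PCIe 3.1 x4

hdd_latency_us = 2000  # ~2 ms average access latency for a 15,000 RPM disk
ssd_latency_us = 135   # D5-P4326 best-case read latency

throughput_gain = ssd_seq_mb_s / hdd_seq_mb_s
latency_gain = hdd_latency_us / ssd_latency_us

print(f"Sequential throughput advantage: {throughput_gain:.1f}x")  # ~10.7x
print(f"Access latency advantage:        {latency_gain:.1f}x")     # ~14.8x
```

Note that the latency advantage is even larger than the headline throughput multiple, which is what matters most for small, random accesses.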
It’s worth mentioning that all of the SSD D5-P4326’s vital specs are shared between the E1.L and U.2 form factors. Why favor the EDSFF one, then?
To begin, Intel says there will be a 30.72TB model later in 2019, delivering on the promises of higher capacity from EDSFF. Second, Intel’s own testing shows that a server built to support EDSFF can cool its drives to the same temperature using up to 55% less airflow than 2.5” U.2 SSDs. Slower-spinning fans make less noise, use less power, and cost less to operate. So, regardless of whether you’re choosing between the two form factors at similar capacities or targeting the highest density possible, Intel’s E1.L SSD D5-P4326 is the smart choice for new builds.
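Those capacities connect directly back to Intel’s original petabyte-per-1U goal. A quick sketch of the rack-density math, assuming a 1U chassis with 32 E1.L bays:

```python
# Rack-density arithmetic: how E1.L capacities approach 1 PB per 1U.
drives_per_1u = 32                  # E1.L bays across the front of a 1U server

for capacity_tb in (15.36, 30.72):  # shipping and announced D5-P4326 capacities
    total_tb = drives_per_1u * capacity_tb
    print(f"{capacity_tb} TB drives -> {total_tb:.2f} TB "
          f"({total_tb / 1000:.2f} PB) per rack unit")
```

At 30.72TB per drive, 32 bays land at roughly 983TB, essentially the one-petabyte-per-1U target the ruler set out to hit.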
No doubt we’ll see other members of the EDSFF working group announce compatible SSDs of their own. Western Digital, for instance, recently started talking about its E1.L Ultrastar DC SN640, featuring capacities as high as 30.72TB using 96-layer BiCS4 NAND and its own in-house NVMe controller technology. The 2.5” U.2 version caps out at 7.68TB, making a switch to EDSFF for dense storage servers even more compelling. Micron, Samsung, Seagate, and Toshiba are sure to follow.
EDSFF-compatible platforms are here, too
New form factors are massively disruptive to established ecosystems. They require changes at every turn. Fortunately, Supermicro already had a 1U server designed for Intel’s ruler form factor, so when EDSFF was finalized with very slight modifications, it didn’t take long for Supermicro to tweak the connectors and officially support EDSFF as well. In fact, if you put a picture of the SSG-1029P-NEL32R system next to its ruler-based predecessor, they look identical.
Most striking is how well 32 EDSFF drives fit across the front. “The pitch here is 12 millimeters,” said Michael Scriber, senior director of server solution management at Supermicro. “So you end up with two and a half millimeters of gap for air to flow between each drive.”
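Scriber’s numbers line up with the E1.L spec: the standard 9.5 mm-thick variant leaves exactly that air gap at a 12 mm pitch. A sketch of the arithmetic, assuming the 9.5 mm drive thickness (a thicker 18 mm heat-sink variant also exists):

```python
# Drive-pitch arithmetic for 32 E1.L bays across a 1U front panel.
pitch_mm = 12.0           # center-to-center spacing quoted by Supermicro
drive_thickness_mm = 9.5  # standard E1.L thickness

air_gap_mm = pitch_mm - drive_thickness_mm
total_width_mm = 32 * pitch_mm

print(f"Air gap between adjacent drives: {air_gap_mm} mm")  # 2.5 mm
print(f"Width consumed by 32 bays: {total_width_mm} mm")    # 384 mm
```

The 384 mm of total bay width is what lets all 32 drives sit side-by-side across the face of a standard 19” rack chassis.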
Aside from its two drive sleds with room for 16 E1.L SSDs per sled, the SSG-1029P-NEL32R’s specs read like many other 1U servers. It supports a pair of Intel Xeon Scalable processors, up to 6TB of DDR4 across 24 DIMM slots, and two M.2 slots to host boot drives. A couple of PCIe x16 slots out the back side can take 100 Gb/s network adapters. Or, you can stick with the 10 GbE controllers built onto Supermicro’s motherboard.
There’s also a PCIe switch that fans 64 lanes of connectivity from Intel’s CPUs out into 128 lanes for the EDSFF drives (four lanes to each of 32 bays). The decision to hold that oversubscription to a 2:1 ratio, rather than pinching down to fewer host-facing PCIe lanes, preserves balance across the storage subsystem. Even under heavy load, the switch doesn’t become a bottleneck. Case in point: Scriber says his team has seen 13,000,000 IOPS and 57 GB/s of bandwidth from the ruler server using Intel SSDs.
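To see why 2:1 oversubscription isn’t a practical bottleneck, compare the measured 57 GB/s against what 64 lanes of PCIe 3.0 can theoretically carry (roughly 0.985 GB/s of usable bandwidth per lane after 128b/130b encoding overhead):

```python
# Oversubscription check: 64 host-facing PCIe 3.0 lanes feeding 32 x4 drives.
gb_per_lane_gen3 = 0.985  # ~usable GB/s per PCIe 3.0 lane (8 GT/s, 128b/130b)

host_lanes = 64
drive_lanes = 32 * 4      # four lanes to each of 32 EDSFF bays

host_ceiling_gb_s = host_lanes * gb_per_lane_gen3
measured_gb_s = 57.0      # throughput Supermicro reports from the ruler server

print(f"Oversubscription ratio: {drive_lanes // host_lanes}:1")
print(f"Host-side ceiling: {host_ceiling_gb_s:.0f} GB/s")  # ~63 GB/s
print(f"Measured: {measured_gb_s} GB/s "
      f"({measured_gb_s / host_ceiling_gb_s:.0%} of ceiling)")
```

In other words, the measured figure sits at roughly 90% of the host-side theoretical maximum, so the switch, not the drives, defines the ceiling, and it’s nearly saturated before becoming a constraint.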
Supermicro is also working on a JBOF (just a bunch of flash) version called the SSG136R-NEL32JBF that doesn’t have any CPUs or memory. Instead, it pipes 64 lanes of PCIe connectivity out the back using mini-SAS HD ports. Those ports can map to two, four, or as many as eight hosts using an external PCIe x8 cable to a host interface card of Supermicro’s own design. And because it’s still using PCIe, performance remains exceptional. “We’ve measured 52 GB/s out the back of our 1U JBOF system,” Scriber said.
The idea of JBOF enclosures attached to multiple hosts is especially interesting in datacenter applications where compute horsepower, DRAM performance, and storage resources don’t always scale independently. Separate boxes filled with CPU cores, add-in accelerators, and solid-state memory make it easier for enterprise customers to grow when and where their workloads require. EDSFF makes it easier to get more capacity into less space, which is great for deploying on-demand disaggregated storage.
Intel’s SSD D5-P4326 and Supermicro’s SSG-1029P-NEL32R are excellent examples of how EDSFF builds on the flexibility of flash memory to increase storage density and improve thermal efficiency. Further out, we’ve already seen some of the ways that the standard will promote innovation.
For example, Supermicro’s upcoming BigTwin E1.S packs four server nodes into 2U of rack space. Each node supports two Xeon Scalable CPUs, up to 6TB of DDR4 (including Optane DC persistent memory), two M.2 SSDs, and 10 of the shorter E1.S drives, each with up to 4TB of NAND. That’s incredible compute and storage performance from a compact platform.
In a nod to EDSFF’s forward-thinking design, its connector supports PCIe 4.0 and 5.0 transfer rates. It’s also scalable beyond the four-lane link used by Intel’s SSD D5-P4326. E2 (x8) and E3 (x16) connector specs effectively double and quadruple available bandwidth, and options for wider enclosures make it possible to dissipate up to 70W of heat. That opens the door to PCIe-attached compute, networking, and storage accelerators operating side-by-side with high-capacity SSDs.
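The headroom in those faster generations and wider links is straightforward to tabulate. A sketch of approximate per-direction usable bandwidth, assuming ~0.985 GB/s per PCIe 3.0 lane and a doubling of signaling rate each generation:

```python
# Approximate usable bandwidth per direction for EDSFF link options.
gen3_lane_gb_s = 0.985  # PCIe 3.0: 8 GT/s with 128b/130b encoding

for gen, mult in (("PCIe 3.0", 1), ("PCIe 4.0", 2), ("PCIe 5.0", 4)):
    for width in (4, 8, 16):  # x4 today; x8/x16 via the wider connector specs
        bw = gen3_lane_gb_s * mult * width
        print(f"{gen} x{width}: ~{bw:.1f} GB/s")
```

A PCIe 5.0 x16 device would enjoy roughly sixteen times the bandwidth of today’s PCIe 3.0 x4 drives, which is what makes the connector attractive for accelerators as well as SSDs.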
If you’re taking in and processing lots of data, particularly in real-time, a shift to EDSFF-based infrastructure should pay dividends in lower TCO, greater serviceability, and future expansion that just isn’t available from servers built around 2.5” disks.