This article is part of the Technology Insight series, made possible with funding from Intel.
A petabyte isn’t what it used to be, thanks to this radical new SSD standard. And that’s good news for enterprises and data centers.
Back in 2012, one petabyte of storage occupied a full-sized server cabinet measuring more than six feet tall. It sold for $500,000. The 360 Serial Attached SCSI hard drives you had to cram into that rack needed seven kilowatts of power, and they were only capable of reading data at a little over 500 MB/s. (These days you get better performance, though not capacity, from the SSD in your laptop.)
We’ve come a long way since then. Take Supermicro’s SuperStorage systems. They support up to 32 flash-based drives, include a pair of redundant 1.6kW power supplies, can theoretically move data at a screaming 64 GB/s, and occupy as little as one rack unit (1U). Once Intel starts rolling out 32TB SSDs based on its 3D NAND technology, those Supermicro servers will condense 1PB of capacity into just 1U of space.
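The density claim is easy to sanity-check. A quick sketch of the arithmetic, using the round numbers quoted above (32 bays, 32TB-class drives, decimal capacity units) rather than any specific shipping SKU:

```python
# Back-of-the-envelope density math from the figures above.
# Assumptions: 32 front bays per 1U chassis, 32 TB per EDSFF drive,
# decimal (marketing) units where 1 PB = 1000 TB.
BAYS_PER_1U = 32
DRIVE_TB = 32
TB_PER_PB = 1000

capacity_tb = BAYS_PER_1U * DRIVE_TB
print(f"{capacity_tb} TB per 1U = {capacity_tb / TB_PER_PB:.2f} PB")

# The 2012 baseline: 1 PB spread across 360 SAS hard drives in a full rack.
DRIVES_2012 = 360
print(f"2012: {TB_PER_PB / DRIVES_2012:.1f} TB per drive, one rack per PB")
print(f"Now:  {DRIVE_TB} TB per drive, 1U per PB")
```

By this math, 32 bays of 32TB drives land at 1,024 TB, just over a petabyte in a single rack unit, versus roughly 2.8 TB per spindle in the 2012 cabinet.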
The key is the Enterprise & Datacenter SSD Form Factor (EDSFF). The rack-shrinking specification serves as a building block for modern data centers and has a promising roadmap. In this piece, you’ll learn:
- Why EDSFF is necessary and where 2.5″ form factors fall short
- How EDSFF is being implemented today, including the E1.L and E1.S form factors
- Where you can expect the standard to affect the thermal performance, power consumption, and serviceability of your servers
- How EDSFF is expected to evolve over the next several years
What is EDSFF? Why should you care?
Miniaturizing six feet of storage hardware and making it orders-of-magnitude faster is no small feat. It starts with a shift away from spinning platters. The latest solid-state drives employ advanced manufacturing to drive down their price per gigabyte, greatly strengthening the case for replacing hard disks with SSDs. But even then, there’s only so much flash memory you can fit into a legacy 2.5″ form factor. Enter EDSFF.
If you’re in the business of data generation and consumption, EDSFF makes it possible to put more information – your primary currency – into less space by completely redefining the physical dimensions an SSD occupies. And because EDSFF-based drives slide into connectors wired up for four-lane PCI Express links, those valuable bytes get where they’re going faster within the walls of your datacenter.
EDSFF was inspired by Intel’s “ruler” form factor and then backed by 15 different companies to create an industry standard that promises higher storage density, better efficiency through some cool mechanical optimizations, more capacity, and storage disaggregation via high-bandwidth, low-latency connectivity.
Why do we need a new form factor, anyway?
Today’s storage servers are all about packing more capacity into smaller spaces. While 2.5” and 3.5” drives reflect the physical demands of spinning disks, those form factors aren’t specifically relevant to solid-state flash memory. NAND chips can live almost anywhere. And the growing popularity of add-in cards and M.2 SSDs installed onto motherboards demonstrates the advantages of plugging flash directly into the PCI Express bus.
Of course, many SSDs are still built into 2.5” enclosures for compatibility with existing drive sleds. In the enterprise space, a lot of these employ a U.2 interface to plumb solid-state storage into four-lane PCI Express 3.0 links. U.2 is compatible with the legacy SAS connector and adds support for hot-swapping drives from the front of a server. However, the form factor it populates isn’t optimal for dense flash.
Although Supermicro does sell a 1U server with 32 hot-swappable drive bays for U.2 SSDs, the company had to perform some fancy footwork to fit them along the front. Two drive trays host 16 SSDs each. They’re stacked one on top of the other, four deep, sliding out perpendicular to the server’s front edge as the tray is removed. As you might imagine, cooling SSDs packed so densely isn’t easy. Moreover, the cables, drive cages, and LED controllers add potential failure points.
Eventually, the industry was bound to relax its grip on legacy interfaces and turn to a form factor better suited to the realities of modern storage.
How does EDSFF improve flash in the enterprise?
Before EDSFF could be all things to every IT decision-maker, it needed to satisfy the physical requirements of increasingly diverse storage workloads. Some servers are designed for capacity-oriented applications and dedicate lots of internal room to maximizing the terabytes-per-rack-unit metric. Others prioritize compute horsepower, memory, or expansion space for add-in accelerators. Because EDSFF was conceptualized with flexible flash memory in mind, the form factor is defined in two distinct lengths that share functionality but fit into a variety of profiles.
The first, referred to as E1.L (the L stands for long), is the same shape and size as Intel’s original “ruler” form factor. That means it offers capacity unavailable from any other SSD configuration. An Intel SSD D5-P4326, for example, comes equipped with 15.36TB of 3D NAND rated for sequential reads of up to 3,200 MB/s over a four-lane PCIe 3.0 link. Multiplied out across 32 bays, that’s almost 500TB of high-performance storage from a 1U storage server. A future 30.72TB model will make it possible to flirt with the form factor’s promised 1PB per rack unit.
E1.S – that’s S for short – looks more like the M.2 SSDs used in a lot of today’s notebooks and desktop PCs. It’s a little taller to make more room for flash memory, which allows it to offer more capacity per drive. E1.S is also hot-pluggable, whereas M.2 is not. According to Intel’s EDSFF technology brief, “E1.S provides the best of U.2 and M.2. E1.S is a scalable, flexible, power, and thermally efficient SSD building block. This form factor was designed for high volume hyperscale, and allows system flexibility, increased storage density, modular scaling, improved serviceability, and more efficient cooling optimized for 1U servers.” Naturally, less depth means E1.S drives don’t have as much room for NAND chips. But even a 4TB SSD enables 128TB from a server measuring just 30” front to back.
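The capacity figures for both form factors fall out of simple multiplication. A sketch, assuming the 32-bay configuration discussed above (decimal units; real chassis bay counts vary by vendor):

```python
# Capacity arithmetic behind the E1.L and E1.S figures above.
# Assumption: 32 front bays, as in the Supermicro 1U example.
BAYS = 32

e1l_tb = 15.36                 # Intel SSD D5-P4326, E1.L
print(BAYS * e1l_tb)           # 491.52 TB -> "almost 500TB" per 1U

future_e1l_tb = 30.72          # announced higher-capacity E1.L model
print(BAYS * future_e1l_tb)    # 983.04 TB -> flirting with 1 PB per 1U

e1s_tb = 4                     # E1.S example from the text
print(BAYS * e1s_tb)           # 128 TB from a 30-inch-deep server
```

The same bay count yields 491.52 TB with today’s 15.36TB E1.L drives, 983.04 TB with the 30.72TB model, and 128 TB with 4TB E1.S drives.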
EDSFF is a big step forward in thermal efficiency
Even with the density advantages they enjoy, both versions of the EDSFF spec are optimized for thermal performance. They’re built around NAND chips, fitting into the most compact space possible to minimize wasted PCB real estate. In a server with spinning disks, or even U.2 SSDs, there’s not much room between drives. Worse, the backplane they connect to stands up vertically, creating an air dam.
“Thermals are one of the most important aspects of a ruler-type design,” noted Michael Scriber, senior director of server solution management at Supermicro. “The biggest challenge of designing a server with 2.5” drives is the backplane they plug into. It cuts across the very front of the server and blocks all the airflow. So, you punch as many holes into that as you possibly can to get air back to the CPUs, DIMMs, and network cards.”
EDSFF solves this with a midplane that lies flat. Connectors the width of each drive are mounted vertically where they don’t inhibit airflow at all. Air goes right between the SSDs and back through the system. This optimization’s impact can be substantial.
Scriber gave VentureBeat interesting data based on first-hand experience testing form factors. “Because I have U.2 and EDSFF versions of the same server, I can compare their thermals. To be specific, my U.2 server is limited to 140W processors. My ruler server supports 165W CPUs. I can handle the extra heat because the airflow is so much better.”
Even the design of EDSFF drives is optimized for heat dissipation. According to Scriber, every component on the SSD’s PCB uses thermal paste to transfer energy into the aluminum casing, which becomes a large heat spreader.
When you add up EDSFF’s thermal efficiency improvements, Supermicro determined that it takes 55% less airflow, measured in CFM per drive, to maintain the same 37.5°C drive temperature. That means chassis fans can run more slowly, generate less noise, and use less power. The server’s total cost of ownership drops, as its cooling subsystem doesn’t have to fight to push air through a backplane.
The future of EDSFF
Today’s EDSFF drives employ a x4 connector, yielding a maximum throughput of roughly 4 GB/s over PCI Express 3.0. Better yet, the connector was designed to support PCI Express 4.0 and 5.0 as well, doubling and then quadrupling link performance in the years to come.
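The “roughly 4 GB/s” figure, and its doubling with each PCIe generation, follows from the standard per-lane transfer rates and the 128b/130b encoding those generations use. A quick sketch (protocol overheads beyond encoding are ignored, so real-world throughput runs slightly lower):

```python
# Estimated x4 link throughput per PCIe generation.
# Per-lane rates are in gigatransfers/second; PCIe 3.0/4.0/5.0 all
# use 128b/130b encoding (128 usable bits per 130 transferred).
RATES_GT_S = {"PCIe 3.0": 8, "PCIe 4.0": 16, "PCIe 5.0": 32}
ENCODING = 128 / 130
LANES = 4

for gen, gt in RATES_GT_S.items():
    lane_gb_s = gt * ENCODING / 8          # GB/s per lane
    print(f"{gen} x{LANES} ~= {LANES * lane_gb_s:.1f} GB/s")
```

That works out to about 3.9 GB/s for a Gen3 x4 link, doubling to roughly 7.9 GB/s and 15.8 GB/s for Gen4 and Gen5.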
Bandwidth can also be multiplied through wider links. A x4 (E1) connector is great for today’s high-speed storage devices. However, EDSFF also supports x8 (E2) and x16 (E3) link configurations. In much the same way as you’d drop a x4 PCIe add-in card into a x16 slot, those wider connectors will have no problem accommodating four-lane SSDs.
The x4 and x8 connectors both fit into one rack unit, while the x16 connector fits into a 2U form factor. As the EDSFF ecosystem evolves, Supermicro’s Scriber expects the emergence of some interesting opportunities based on those dimensions. “I can see where they were planning ahead such that, if I have a 2U box, I could still fit 32 drives right up front. But if I use x16 connectors, I could just as easily slide a network device in that slot, or an FPGA, or a GPU.”
The 16-lane E3 interface offers up to 70W of power, so there’s a limit to EDSFF’s scope. But a 2018 presentation given by Paul Kaler, advanced storage technologist at Hewlett Packard Enterprise, already introduced the concept of “compute in storage,” whereby an E3 slot might fit a mix of graphics and tensor processing units for AI applications.
“What about the folks looking at using accelerators to accelerate storage?” asked Supermicro’s Scriber. “It’d be convenient to put those right in the front of the box along with my storage. There are already people investigating that.” So clearly, there’s a lot more to EDSFF than just SSDs, especially when you start talking about next-gen PCIe and x16 links.
Bottom line: More capacity faster, less power and space
For now, EDSFF gives us a way to pack unprecedented capacity into unbelievably tight spaces using less power at higher performance than ever before. Its benefits are already available through storage servers like Supermicro’s SSG-1029P-NEL32R. But because the form factor connects to a common PCIe physical layer, you can expect a lot more from it in the future.
In Part 2 of this post, we’ll look at EDSFF in action on new, leading-edge products.