This article is part of the Technology Insight series, made possible with funding from Intel.
A couple of years back, IDC predicted that by 2025 the average person will interact with connected devices 4,800 times per day. Information pouring in from those sensors will fuel machine learning, language processing, and artificial intelligence, all requiring fast storage and more compute horsepower. The next generation of memory technologies will address gaps in today’s storage hierarchy, delivering data where it’s needed for real-time processing.
Emerging memory technologies promise to keep voluminous data closer to processors without the high cost or power consumption of SRAM and DRAM. Most are non-volatile, like the NAND flash inside SSDs, yet dramatically faster than NVMe-attached solid-state drives.
In this first of a two-part series, we’ll look at three technologies with answers to the impending big-data bottleneck: Intel’s Optane, two types of magneto-resistive RAM (MRAM), and resistive random-access memory (ReRAM). Part two will cover nanotube RAM, ferroelectric RAM, and phase-change memory.
Key benefits of new memory technology
- Intel Optane DC persistent memory: Non-volatile, high-capacity memory tuned for data center workloads. Can be accessed through memory operations or as block storage.
- MRAM: Non-volatile memory that can be powered down completely, then awakened quickly for fast writes in an IoT application.
- ReRAM: Promises to bridge the gap between DRAM and flash in the datacenter. Storing entire databases in fast, non-volatile ReRAM would revolutionize in-memory computing.
Setting the stage for big data
Here’s the problem: computational performance is increasing at a pace unmatched by data access technologies. When massively parallel CPUs or purpose-built accelerators run out of ultra-fast cache or speedy system memory, they’re forced to dip into slow, disk-based storage for bytes to crunch on, and grind to a (relative) halt. Larger SRAM caches help keep hot data close at hand, and copious DRAM works wonders for in-memory computing. However, both types of storage are expensive to procure. They’re also volatile by nature, requiring constant power to retain data. Adding more of either just isn’t an economical way to address the sheer volume of data awaiting real-time analysis.
Rob Crooke, senior vice president and general manager of Intel’s non-volatile memory solutions group, sums up the basic challenge this way: “DRAM is not big enough to solve today’s problem of real-time data analysis—and traditional storage isn’t fast enough.”
The company’s Optane technology fits into a growing gap between system memory and flash-based solid-state drives, potentially supercharging analytics, artificial intelligence, and content delivery networks. DRAM is great for in-memory processing, but it’s also limited in capacity. SSDs cost a lot less per gigabyte as they scale into massive deployments. They just don’t have the performance for real-time transactional operations. Optane was designed to bridge those two worlds.
Optane employs a unique architecture made up of individually addressable memory cells stacked in a dense, three-dimensional matrix. Intel doesn’t get specific about the technology at play in its Optane-based devices. However, we do know that Optane can either act like DRAM or an SSD, depending on its configuration.
Intel’s Optane DC persistent memory drops into a standard DIMM slot connected to a CPU’s memory controller. Available in capacities of up to 512GB, a single module can hold several times more data than the largest DDR4 module. The information on an Optane DC persistent memory DIMM operating in App Direct Mode is retained when the power goes out. In contrast, volatile memory technologies like DRAM lose data quickly if they aren’t constantly refreshed. Software does need to be optimized for Intel’s technology. However, the right tweaks allow performance-bound applications to access Optane DC persistent memory with low-latency memory operations.
Alternatively, the DIMMs can be used in Memory Mode, where they coexist with volatile memory to expand capacity. Software doesn’t need to be rewritten to deploy Optane DC persistent memory in Memory Mode.
The technology can also be used in what Intel calls Storage Over App Direct Mode, where persistent memory address space becomes accessible through standard file APIs. Applications expecting block storage can access the App Direct region of Optane DC persistent memory modules without any special optimizations. The benefit is higher performance compared to moving data over the I/O bus.
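The App Direct programming model boils down to mapping a persistent region into the application’s address space and using ordinary loads and stores instead of block I/O. The sketch below illustrates that idea with an ordinary file standing in for the persistent region; on real Optane DC persistent memory you would map a file on a DAX-mounted filesystem and typically use Intel’s PMDK (e.g., `pmem_persist()`) for efficient, crash-safe flushes. The file name and sizes here are hypothetical.

```python
# Illustrative sketch of App Direct-style access. An ordinary file stands
# in for a persistent-memory region; on real hardware you would mmap a
# file on a DAX filesystem and flush via PMDK rather than mmap.flush().
import mmap
import os

PATH = "pmem_region.bin"   # hypothetical; stands in for a DAX-backed file
SIZE = 4096

# Create and size the backing "persistent region."
with open(PATH, "wb") as f:
    f.write(b"\x00" * SIZE)

fd = os.open(PATH, os.O_RDWR)
region = mmap.mmap(fd, SIZE)

# Stores go straight into the mapped region -- byte-addressable access,
# with no block-I/O read/modify/write cycle in between.
region[0:5] = b"hello"

# Make the write durable before continuing (PMDK's pmem_persist() does
# this far more efficiently on genuine persistent memory).
region.flush()

region.close()
os.close(fd)

# After a "restart," the data is still there.
with open(PATH, "rb") as f:
    readback = f.read(5)
print(readback)   # b'hello'

os.remove(PATH)
```

The point of the model is that persistence becomes a memory-consistency concern (did the store reach the durable medium?) rather than a storage-stack concern, which is why applications need explicit flush points.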
Regardless of how applications use Optane DC persistent memory, the technology’s strengths remain the same: capacity, performance, and persistence. Datacenter apps with large memory footprints (think cloud and infrastructure-as-a-service) are direct beneficiaries. The same goes for in-memory databases, storage caching layers, and Network Function Virtualization.
MRAM shows promise at the edge
Whereas Optane is mostly being aimed at the datacenter, magneto-resistive RAM, or MRAM, shows promise across a range of IoT devices—the very sensors that IDC says we’ll soon be touching thousands of times a day.
Consider this example from a blog post by Dr. Mahendra Pakala, managing director of Applied Materials’ memory group. It uses a security camera with voice and facial recognition as an example of where MRAM works well. You want that camera to process as much data as possible at the edge, and only upload information that matters to the cloud. Power consumption is paramount, however. According to Dr. Pakala, today’s edge devices primarily employ SRAM memory, which uses up to six transistors per cell and can suffer high active leakage power, hurting their efficiency. “As an alternative, MRAM promises several times more transistor density, enabling higher storage densities or smaller die sizes.” Greater capacity, more compact chips, and less power consumption sounds like a win for anyone processing at the edge.
Data in MRAM is stored by magnetic elements formed from a pair of ferromagnetic plates, separated by a thin dielectric tunneling insulator. One plate’s polarity is set permanently, while the other’s magnetization changes to store zeroes and ones. Together, the plates form a magnetic tunnel junction (MTJ). These become the memory device’s building blocks.
Like Optane DC persistent memory, MRAM is non-volatile. Everspin Technologies, one of the leaders in MRAM technology, says data stored in its Toggle MRAM is retained for 20 years at rated temperature. MRAM is incredibly fast, too: Everspin claims simultaneous read/write latency in the 35ns range. That’s close to the vaunted performance of SRAM, making MRAM an attractive substitute for almost any of today’s volatile memories.
Density is where classic MRAMs fall short of DRAM and flash memory. Everspin recently announced a 32Mb device; in comparison, the largest four-bit-per-cell NAND parts offer 4Tb densities. That shortfall matters far less in IoT and industrial applications, where MRAM’s performance, persistence, and unlimited endurance more than make up for a lack of capacity.
Spin-transfer torque (STT-MRAM) is a variation of the magneto-resistive technology that works by manipulating electron spin with a polarizing current. Its mechanism requires less switching energy than Toggle MRAMs, bringing power consumption down. STT-MRAM is also more scalable. Everspin’s standalone devices are available in 256Mb and 1Gb densities. A company like Phison can drop one of them next to its flash controller and get amazing caching performance with the added benefit of power-loss protection. You wouldn’t need to worry about buying SSDs with built-in battery backup. Data in-flight would always be safe, even in the event of an unexpected shut-down.
Foundries like Intel, TSMC, and UMC are interested in STT-MRAM for another purpose: they want to embed it in their microcontrollers. The NOR flash currently used in those designs has a hard time scaling to smaller manufacturing nodes, while MRAM is more economical to integrate. In fact, Intel already presented a paper showing off a production-ready 7.2Mb MRAM array integrated with its 22nm FinFET Low Power process. The company says that MRAM as embedded non-volatile memory is a potential solution for IoT, FPGAs, and chipsets with on-chip boot data requirements.
ReRAM may be the answer for in-memory computing
A few months after announcing its success integrating MRAM with 22FFL manufacturing, Intel gave a presentation at the International Solid-State Circuits Conference describing a 3.6Mb resistive random-access memory (ReRAM) macro embedded with the same process node.
ReRAM is another type of non-volatile memory touting low power, high density, and a performance profile that puts it in between DRAM and flash-based storage. But whereas MRAM’s characteristics foretell a life among IoT devices, ReRAM is being groomed for a datacenter career, bridging the gap between server memory and SSDs.
Several companies are developing ReRAM, using a variety of materials. Crossbar’s ReRAM technology, for example, employs a silicon-based switching material sandwiched between top and bottom electrodes. When voltage is applied between the electrodes, a nanofilament is formed in the dielectric, creating a low-resistance path. The filament can then be reset by another voltage. Intel uses a tantalum oxide high-κ dielectric under an oxygen exchange layer, creating vacancies between its electrodes. The two cells differ in composition, but perform the same function, delivering many-times-faster read and write performance compared to NAND flash.
Applied Materials’ Dr. Pakala said ReRAM appears to be the most viable memory technology for in-memory computing, where data is held in RAM rather than in databases on disk. “Matrix multiplication can be done within the arrays by utilizing Ohm’s Law and Kirchhoff’s Rule—without moving weights in and out of the chip. The multilevel cell architectures promise new levels of memory density that can allow much larger models to be designed and used.” It’s prohibitively expensive to work on those models in DRAM, which is why the cost benefits of ReRAM look so promising here.
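The in-array matrix multiplication Dr. Pakala describes can be sketched numerically. Each cell’s conductance G[i][j] encodes a weight; applying a voltage V[i] to row i makes each cell pass a current V[i] × G[i][j] (Ohm’s law), and the currents flowing into each column wire simply add up (Kirchhoff’s current law), yielding a matrix-vector product in one analog step. The conductance and voltage values below are arbitrary illustrations, not real device parameters.

```python
# Digital model of the analog matrix-vector multiply a ReRAM crossbar
# performs: column current I[j] = sum over rows i of V[i] * G[i][j].

def crossbar_mvm(G, V):
    """Simulate a crossbar read: row voltages V (volts) drive cells with
    conductances G (siemens); return the summed current per column."""
    rows, cols = len(G), len(G[0])
    return [sum(V[i] * G[i][j] for i in range(rows)) for j in range(cols)]

# A 3x2 array of cell conductances and the voltages applied to its rows.
G = [[0.5, 1.0],
     [2.0, 0.0],
     [1.0, 3.0]]
V = [1.0, 2.0, 0.5]

I = crossbar_mvm(G, V)
print(I)   # [5.0, 2.5]
```

In a real device the weights stay resident in the array as resistance states, so none of the data movement that dominates digital matrix-multiply energy budgets occurs; multilevel cells extend the same scheme by storing several conductance levels per cell.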
The best is yet to come
From the factory floor to the datacenter, fully utilizing compute resources without breaking the bank requires a fresh approach to storage. Energias Market Research expects the market for MRAM to grow rapidly between now and 2025, reaching $1.2 billion at a compound annual growth rate of 49.6%. Coughlin Associates predicts that 3D XPoint memory—the technology at the heart of Optane—will drive revenues to over $16 billion by 2028. Clearly, there’s demand for new memories that address the impending limits of flash memory, DRAM, and SRAM.
There doesn’t have to be just one winner, either. It’s possible that all three of these emerging memory types will coexist at various levels of the storage hierarchy with a common goal: to make sure the impending deluge of data doesn’t overwhelm existing access technologies.
Intel’s Optane DC persistent memory is already prolific in servers with second-gen Xeon Scalable Processors. MRAM is being used alongside SSD controllers for write caching in place of DRAM. And ReRAM is more viable than ever thanks to Applied Materials’ Endura Impulse PVD high-volume manufacturing system. If you’re serious about processing massive amounts of data, the next five years are going to be critical. Now’s the time to start weighing your options.