Today, D-Matrix, a company focused on building accelerators for complex matrix math supporting machine learning, announced a $44 million series A round. Playground Global led the round with support from Microsoft’s M12 and SK Hynix. The three join existing investors Nautilus Venture Partners, Marvell Technology and Entrada Ventures.
The money will be used to bring Nighthawk, its chiplet-based architecture for faster complex matrix calculations, to markets like data centers. One major anticipated market is machines that need faster inference engines for deploying machine learning models. D-Matrix is also designing a follow-up to be called the Jayhawk.
The Nighthawk chip promises to bring computational ability closer to the data. In a traditional computer, data is stored separately in RAM chips; it must be delivered to the CPU, where decisions are made and arithmetic is completed, before being written back to RAM. This approach, sometimes called the von Neumann architecture, dates back to the earliest digital computers created after World War II.
While this approach has served general workloads extremely well over the decades, one of the perennial challenges for designers has been speeding up the movement of data. The idea of moving more of the transistors used for computation into memory chips has been explored for many years, but it has never been efficient enough to justify the tradeoffs: general-purpose CPU architectures simply enjoy too many economies of scale.
“We’re on this journey to create what has now become a digital in-memory computing engine which has the efficiency of in-memory compute and has the accuracy and predictability of a digital computer,” said Sid Sheth, the CEO, president and founder of D-Matrix. “You kind of marry these two and you have created this engine that is ultra-efficient.”
D-Matrix’s chip will be a systolic array optimized for the matrix calculations at the heart of the AI models known as “transformers.” These models use far more connections between elements than the previous generation of models, often called “convolutional neural networks.”
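To see why a matrix-math accelerator targets transformers, it helps to note that the core of a transformer layer, the attention step, reduces to a handful of dense matrix multiplications. Below is a minimal, hypothetical sketch of single-head attention with made-up sizes; nothing here reflects D-Matrix’s actual design, only the kind of arithmetic a systolic array accelerates.

```python
import numpy as np

# Toy single-head attention with illustrative (made-up) dimensions.
rng = np.random.default_rng(0)
seq_len, d_model = 8, 16

x = rng.standard_normal((seq_len, d_model))    # token embeddings
W_q = rng.standard_normal((d_model, d_model))  # learned weight matrices
W_k = rng.standard_normal((d_model, d_model))
W_v = rng.standard_normal((d_model, d_model))

Q, K, V = x @ W_q, x @ W_k, x @ W_v            # three dense matmuls
scores = Q @ K.T / np.sqrt(d_model)            # a fourth matmul
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
out = weights @ V                               # a fifth matmul
```

Five of the six operations above are matrix multiplications, which is why hardware that speeds up matmuls speeds up transformers almost directly.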
Other chip companies are also focusing on transformers. Nvidia, for example, announced recently that its next-generation GPU, called the Hopper H100, will speed up calculations for transformers. Another chip, the Cerebras CS-2, is also focusing on the very large models that characterize approaches like transformers.
In the right market, at the right time
“We started in 2019 and right around that time transformers were beginning to really take off in a big way and people were beginning to realize the potential and the impact the transformers were going to have,” explained Sheth. “Transformers are just going into every conceivable multimodal application, video speech, text documents, search, etc. Everything is now being run using transformers and you know that stuff is going to be with us for the next five to seven, maybe 10 years.”
D-Matrix believes its new chips will be ideal for deploying some of these transformers. Its in-memory approach can keep the model’s matrices of weights close to the computing engine, so they don’t need to be reloaded every time the transformer is applied. The same approach could also serve many other problems that require large matrix calculations, from simulation to forecasting.
The D-Matrix design relies upon a grid of small computational units built out of just a few transistors. Several of these are combined with a traditional RISC-V CPU to form the standard building block. A completed machine may carry a number of these blocks on one board, and computing power scales with their count.
D-Matrix plans to deliver these boards directly to data centers, which may then offer hybrid instances for developers who want traditional CPUs for general computation and the D-Matrix chip for tasks like evaluating transformer models.
“The data center operators can plug this into their existing servers so they don’t have to really throw out their CPUs and servers or anything like that,” explained Sheth. “Say, I already got these servers. I’m going to plug in your card and when it comes to this AI compute stuff, I’m going to run it on the D-Matrix hardware as opposed to running it on the CPU or the GPU, and guess what? I [get] ten times improvement in efficiency. Now I can use my existing data center without having to worry about throwing stuff out.”
What’s ahead for chips and matrix calculations?
Green AI is likely to be a big focus for the company as it moves forward, because efficiency is a natural byproduct of the push toward faster chips: smaller transistors and faster designs can compute the same functions with less power. As AI models are deployed more widely, chips like D-Matrix’s will be in demand to keep power consumption from exploding.
“Our investment in D-Matrix comes at a time when data around AI workload requirements, running cost and value creation are in much better focus than they have been in recent years,” said Michael Stewart, a partner at M12, Microsoft’s venture fund. “Their clean slate approach is perfectly timed to meet the operational needs of running giant transformers in the composable, scalable data center architecture of the near future.”
The company believes there will also be many opportunities in more traditional numerical workloads that rely heavily on matrix calculations. Forecasting, simulation, testing and design all depend on ever larger and more detailed models to reach the precision required.
“The hyperscale and edge data center markets are approaching performance and power limits and it’s clear that a breakthrough in AI compute efficiency is needed to match the exponentially growing market,” said Sasha Ostojic, a venture partner at Playground Global. “D-Matrix is a novel, defensible technology that can outperform traditional CPUs and GPUs, unlocking and maximizing power efficiency and utilization through their software stack. We couldn’t be more excited to partner with this team of experienced operators to build this much-needed, no-tradeoffs technology.”