VentureBeat: Tired of showing up at Black Hat and DefCon and seeing Arm appliances on stage?
Haas: It’s funny. This whole Spectre and Meltdown thing, when Arm was affected by it, there was a bit of, “Oh my gosh, Arm has a bunch of loopholes in their architecture.” But obviously, as you know, it was not about the architecture. It was about the class of bug on high-performance systems. First world problems. There were a lot of things written around RISC-V. There are no RISC-V products affected by Spectre and Meltdown because there are no RISC-V products being used in high-performance systems that need to use cache look-ahead. The fact that we’re impacted by it is a testament to the fact that we’re playing with the big boys. But when you’re playing in that area, all the caveats apply.
From your standpoint, are you seeing any sentiment, feeling any sentiment, that people think the company’s getting too big?
VentureBeat: Only by the emergence or existence of support for something like RISC V. People don’t necessarily want to get locked in. The funny thing is, you guys were always more open than all the other guys, right? But now that you’re bigger, people are saying, “Well, we need something even more open to be the counter-force to the dominant powers.” It’s interesting.
Haas: We’re always listening and making sure that we’re addressing the needs of the market, whether it’s around freedom to operate, freedom to innovate–we take competitive threats pretty seriously.
VentureBeat: Even if there’s a dominant platform that’s viewed as benign, like the iPhone, there’s still always something like Android appearing in the marketplace. And there’s a more clear distinction between closed and open.
Haas: Right. There’s always alternatives.
VentureBeat: On the server side, is there a reason you have more confidence this year or the next year that the market share is going to move?
Haas: I joined Arm about five years ago. When we talked about TechCon back then, “sensors to servers” was the theme. We had a lot of guys building Arm-based silicon at the time. You had Applied Micro. AMD was poking their head in. You had a bunch of players. But the products weren’t all there. The performance wasn’t there. Obviously the software ecosystem wasn’t there.
If I look at the glass as half-empty, there aren’t as many silicon suppliers as there were, but if I look at the glass as half-full–three things. One, the software ecosystem has matured. Two, the product performance is there. Three, even more so than five years ago, the demand is much higher than we thought.
When we talked about this back then, there was the cloud, and then there was classic Windows enterprise. The notion was, Windows enterprise was a big market, but the cloud was where the growth was. But still, not to the level we see now. It seems almost insatiable. Drew gave the example of this network offload engine, where we’ll be doing a million units a year, which over the last 12-18 months has become a massive opportunity for us.
That’s a function of two things. The compute requirements have gotten so high on the x86 servers that having an x86 compute farm doing all these overhead tasks was costing AWS, or whoever the cloud guy is, in compute per dollar of revenue. They want to make the compute part as pure as possible, so they need an offload engine, a clean sheet of paper. You want to do that on the most efficient architecture you possibly can. These offload devices are largely based on Arm, which is a big opportunity. Going forward, the opportunity we have is to take over the compute area. I’m probably more bullish now than I was before, just because the end market demand is quite significant. And it’s only going to get bigger.
VentureBeat: On the AI side, are you starting to see any fruit?
Haas: We have a few things starting to cook design-wise that we haven’t talked about publicly, on the ML side. It’s still early days, in terms of people using dedicated accelerators. People are still trying to figure out exactly where the use cases are at the edge. Our point of emphasis is not where Nvidia is, doing big training GPUs in the cloud. We’re very focused on these inference engines at the edge that can do some level of training. But I think we’re just at the cusp where that’s going to be pretty large.
VentureBeat: Kind of like Apple’s machine learning section of its chip that does face recognition?
Haas: Yeah, something similar. It might be in an IoT endpoint that gets smarter in terms of recognizing patterns and behaviors. The Alexa assistant devices are crude first-case examples that have a UI, but you may have other devices that have a different UI, whether it’s relative to ambient temperature or biometrics, doing a bunch of learning. That’s going to be pretty large. For us, the way we’re doing ML is we’re handling it as a horizontal technology across all the products.
One key, too, is our investment in the software side. For this hardware to take shape, we’re doing these Arm NN libraries. That’s a big key in terms of getting that adoption out there for developers. We don’t talk about it much, but about 50 percent of our engineering resources combined with SoftBank are dedicated to software now.
The company’s north of 6,000 people now. At my first TechCon five years ago we were about 2,500 people. Most of the new people are engineers. We’re seeing a lot of growth.