
We’re getting a lot of “verse” words from technology companies these days, many of them descended from the Metaverse, the universe of virtual worlds imagined in Neal Stephenson’s Snow Crash novel from 1992.

The most recent iteration is Arm’s Neoverse, a cloud-to-edge infrastructure that the chip design company hopes will support a world with a trillion intelligent devices.

The Neoverse is basically Arm’s ecosystem for supporting the chip design and manufacturing firms that will produce those devices, based on the Arm architecture. But it’s also a market-based approach to supporting customers in different segments, like automotive, machine learning, or the internet of things (IoT).

Arm is also stepping up with a more aggressive roadmap for processors that make use of the most advanced manufacturing possible. That means the company is targeting everything from low-power embedded processors to high-end chips for servers.


I spoke with Rene Haas, head of Arm's intellectual property group, about Neoverse and other topics at the Arm TechCon event last week in San Jose, California.

Here’s an edited transcript of our interview.

Above: Rene Haas is head of the intellectual property division at Arm.

Image Credit: Dean Takahashi

VentureBeat: Is there a particular interesting theme here for you?

Rene Haas: It’s the sum of the parts, what all the guys are talking about. I’m curious what you thought about the Neoverse piece. That’s a major point of emphasis for us in terms of investing in the infrastructure and everything involved with that, but at the same time, as we’ve gone with this more market-based approach, thinking about growing each of the businesses around specific markets, you’ll see similar things around automotive. You’ll see more around the ML space and so on. It’s the continued culmination of the strategy we’ve been building up over the last couple of years.

The infrastructure stuff is interesting, because–a lot of people have questioned this, talking about market share in servers and what that means. For us, we view a lot of the investment we’re making in the infrastructure–it will have a lot of potential carryover into autonomous driving, for example. The same high-end compute platform we’re doing for Zeus and Poseidon I think will transfer well toward autonomous driving and things of that nature. We’re seeing a reaffirmation that the strategy seems to be working.

VentureBeat: Putting it all together like that, what does it achieve for the customers of Arm? They can get a full spectrum of choices?

Haas: Exactly, on a few fronts. One is, there’s a broader choice, whether it’s machine learning IP or GPU IP or CPU IP. At the same time, it’s also having the schedules of all the products lining up appropriately to hit certain market windows. It sounds kind of obvious from the outside. Wouldn’t you have all your products aligning to the same cadence to go off and hit a sample time? But not always. We were finding out that partners, potentially, were designing a next-generation SoC with this year’s CPU, but next year’s GPU, and some interim system IP.

It’s really all about–we want to enable our partners to build better SoCs and build better phones, or build a better laptop, or build something that’s better tuned for the end market. It’s a combination of choice, but also, it lets us look at each of these markets and make sure we’re investing in the right level of performance that’s going to move the needle on the system. That’s a big piece of it.

Machine learning is a pervasive underlying technology that applies everywhere. It’s not just the accelerator. Part of it is doing a dedicated hardware accelerator, but the other is adding ML extensions to the GPU and CPU, and then having the whole environment, whether it’s through Arm or through the compute libraries, that pulls it all together. Automotive is another area. You need things like split lock. You need things like functional safety. There are all these special attributes required, and we weren’t doing as good a heterogeneous job across all our products.


Above: Arm’s roadmap

Image Credit: Arm

VentureBeat: This added element of the high-end cores that are going to come out on a more regular schedule, at certain performance targets and manufacturing nodes–that’s a clearer communication than you’ve given before.

Haas: It’s a combination of clearer communication of intent, plus clearer communication that, behind the scenes, we’ve always been working pretty closely with the foundry guys. But now we’re being up front. When you see us talking about certain technologies tied to a certain node, we’re working closely with fab partners to achieve that. Given all the investments that Samsung and TSMC are making in advanced node technology, it’s pretty key.

VentureBeat: I did wonder, with things like Intel and GlobalFoundries slowing down and dropping the pace of Moore’s Law, whether you’re able to do this with confidence, given that it is possible that TSMC and Samsung could be affected by the same things.

Haas: We continue to see a pretty heavy capital investment from TSMC. Drew’s slide today talked about the number of wafers that are driven by the Arm ecosystem compared to x86. Most important there is that it’s all on the leading technology nodes. We’re seeing a lot of the partnerships driving the advanced technology. I think we’ve hit a point where the external guys — the people who run fabs for a living — are setting the cadence, as opposed to people who have integrated factories, like Intel. That’s good for us.

Five years ago, we were talking about servers. But five years ago we didn’t have much of a 64-bit story. Most of the products were 32-bit. We didn’t have much work done in terms of software ecosystem. A lot of that stuff is now behind us. We’re now moving to the next wave, where the software ecosystem is getting mature. We have very competitive 64-bit products. Now, process-wise–five years ago you’d argue that the blue company was the world leader by a good margin. Now it’s moved around a bit. That’s why we think the opportunity space is pretty profound. That’s why you saw us talking about Neoverse the way we did today.

VentureBeat: I wonder, though, if there’s some uncertainty to the schedule, because the arrival of the nodes is not as clockwork as you might hope.

Haas: I don’t know. Again, the demand for cloud infrastructure product is massive. It’s just massive. The stuff that we help with is–you’re going into these rack systems that have a fixed footprint in terms of power. It’s all about maximizing performance in that power envelope. Process really helps you. I’m not seeing that. Sure, there’s always risk, no doubt about it. There’s less risk as far as, “Are the fab guys investing to make it happen?” as opposed to, “There’s definitely obvious execution risk” because there always is on new stuff.


Above: Arm expects to manage a trillion devices in the Neoverse.

Image Credit: Arm

VentureBeat: When Facebook announced the new Oculus Quest, their wireless stand-alone VR headset, they said it would launch in the spring. Some people thought they would have the 845 processor in it, from Qualcomm, and instead it had an 835. I don’t know whether that speaks to some of what you just talked about, or whether people might have an unrealistic understanding of what you can cram into a certain device on a certain timetable. But it’s odd to see these situations where last year’s chip shows up in next year’s product.

Haas: It’s usually OEM-specific, relative to their development cycles, their qualification cycles, and what’s available at a certain time. Some guys are just more aggressive and move faster. Oculus guys, they operate at their own cadence. It’s probably as much a function of those kinds of constraints as anything else. For example, on the laptop side, the early Windows on Arm laptops were 835, but now there’s a wave of 850 products that have come out. If folks can shrink their development cycle, that’s really what drives it.

The new Arm laptops are pretty amazing. Having lived through my Nvidia days with Windows RT, it’s night and day. I’ve not found anything it doesn’t run. You’re running full Office, full PowerPoint. You never have a situation where you download a file and the fonts don’t translate. Everything looks and feels right. But the battery life is crazy, 20 hours and more. There’s no fan.

I’m running an 835 with 4GB of RAM, and then I also have a Core i7 Thinkpad with 32GB. The Core i7 is faster, no question, and its battery life is three hours. The 835 is more than 20. The other thing that’s great is it has a built-in LTE modem. You’re always connected. I honestly use that one more often.

VentureBeat: As far as IP goes, it seems like the notion of Arm taking the world is a little more realistic now. It doesn’t seem like you guys have any big worries right now. Would you disagree? Do you still have some challenges?

Haas: As soon as I said we had no worries–there’s always worries, right? We really want to grow the business in the infrastructure. We think we have a huge opportunity in automotive. Those are two big areas. Embedded, for us–the things that will limit us in embedded–we have to solve the security issue. It’s around making sure the platforms adhere to a security standard. Things like PSA being adopted across the board.

The reason I say that is, the adoption of connected devices being put on the network is just a function of: can it be secure? As opposed to anything else. It’s security around the embedded side and continuing to invest in the roadmap on the high end.


Above: Arm and Xilinx are teaming up.

Image Credit: Arm

VentureBeat: Tired of showing up at Black Hat and DefCon and seeing Arm appliances on stage?

Haas: It’s funny. This whole Spectre and Meltdown thing, when Arm was affected by it, there was a bit of, “Oh my gosh, Arm has a bunch of loopholes in their architecture.” But obviously, as you know, it was not about the architecture. It was about the class of bug on high-performance systems. First world problems. There were a lot of things written around RISC-V. There are no RISC-V products affected by Spectre and Meltdown because there are no RISC-V products being used in high-performance systems that need to use cache look-ahead. The fact that we’re impacted by it is a testament that says we’re playing with the big boys. But when you’re playing in that area, all the caveats apply.

From your standpoint, are you seeing any sentiment, feeling any sentiment, that people think the company’s getting too big?

VentureBeat: Only by the emergence or existence of support for something like RISC-V. People don’t necessarily want to get locked in. The funny thing is, you guys were always more open than all the other guys, right? But now that you’re bigger, people are saying, “Well, we need something even more open to be the counter-force to the dominant powers.” It’s interesting.

Haas: We’re always listening and making sure that we’re addressing the needs of the market, whether it’s around freedom to operate, freedom to innovate–we take competitive threats pretty seriously.

VentureBeat: Even if there’s a dominant platform that’s viewed as benign, like the iPhone, there’s still always something like Android appearing in the marketplace. And there’s a more clear distinction between closed and open.

Haas: Right. There’s always alternatives.


Above: SoftBank believes IoT will drive $11 trillion in value by 2025.

Image Credit: SoftBank

VentureBeat: On the server side, is there a reason you have more confidence this year or the next year that the market share is going to move?

Haas: I joined Arm about five years ago. When we talked about TechCon back then, “sensors to servers” was the theme. We had a lot of guys building Arm-based silicon at the time. You had Applied Micro. AMD was poking their head in. You had a bunch of players. But the products weren’t all there. The performance wasn’t there. Obviously the software ecosystem wasn’t there.

If I look at the glass as half-empty, there aren’t as many silicon suppliers as there were, but if I look at the glass as half-full–three things. One, the software ecosystem has matured. Two, the product performance is there. Third, even more so than five years ago, the demand is much higher than we thought.

When we talked about this back then, there was the cloud, and then there was classic Windows enterprise. The notion was, Windows enterprise was a big market, but the cloud was where the growth was. But still, not to the level we see now. It seems almost insatiable. Drew gave the example, where we’ll be doing a million units a year, of this network offload engine, which over the last 12-18 months has become a massive opportunity for us.

That’s a function of two things. The compute requirements have gotten so high on the x86 servers that having an x86 compute farm doing all these overhead tasks was costing AWS, or whoever the cloud provider is, compute per dollar of revenue. They want to make the compute part as pure as possible, so they need an offload engine–a clean sheet of paper. You want to do that on the most efficient architecture you possibly can. These offload devices are largely based on Arm, which is a big opportunity. Going forward, the opportunity we have is to take over the compute area. I’m probably more bullish now than I was before, just because the end market demand is quite significant. And it’s only going to get bigger.

VentureBeat: On the AI side, are you starting to see any fruit?

Haas: We have a few things starting to cook design-wise that we haven’t talked about publicly, on the ML side. It’s still early days, in terms of people using dedicated accelerators. People are still trying to figure out exactly where the use cases are at the edge. Our point of emphasis is not where Nvidia is, doing big training GPUs in the cloud. We’re very focused on these inference engines at the edge that can do some level of training. But I think we’re just at the cusp where that’s going to be pretty large.


Above: Arm CEO Simon Segars at Arm TechCon 2018.

Image Credit: Dean Takahashi

VentureBeat: Kind of like Apple’s machine learning section of its chip that does face recognition?

Haas: Yeah, something similar. It might be in an IoT endpoint that gets smarter in terms of recognizing patterns and behaviors. The Alexa assistant devices are crude first-case examples that have a UI, but you may have other devices that have a different UI, whether it’s relative to ambient temperature or biometrics, doing a bunch of learning. That’s going to be pretty large. For us, the way we’re doing ML is we’re handling it as a horizontal technology across all the products.

One key, too, is our investment in the software side. For this hardware to take shape–we’re doing these Arm NN libraries. That’s a big key in terms of getting that adoption out there for developers. We don’t talk about it much, but about 50 percent of our engineering resources, combined with SoftBank, are dedicated to software now.

The company’s north of 6,000 people now. At my first TechCon five years ago we were about 2,500 people. Most of the new people are engineers. We’re seeing a lot of growth.
