
Navin Shenoy, executive vice president at Intel, introduced more than 50 new products yesterday at Intel’s “data-centric” event in San Francisco. They range from the second-generation Xeon Scalable flagship processor to the Optane memory chips that will dramatically improve the capacity and density of data storage.

Shenoy said those products are necessary to feed the beast of demand for cloud-based services, from Netflix movies on demand to sensor analysis for self-driving cars.

The trends fueling the data-centric world include a proliferation of cloud computing, the growth of AI and analytics, and cloudification of the network and the edge. In the past five years, Intel saw a 50% increase in compute demand, and it predicts the same again in the next five years.

The demand for diverse workloads is increasing, so Intel has been investing to move data faster with Ethernet and silicon photonics, store more data with Optane products, and process everything with CPUs, FPGAs, and custom chips. And for once, Intel isn’t trickling out its products — it’s launching them all at once. I talked with Shenoy at the event.



Here’s an edited transcript of our interview.

Navin Shenoy: You heard a lot from us today. The general high-level view of the company is we’ve been transforming for three or four years now. Hopefully, you saw today’s announcements reflecting that transformation. Architecting the future of the datacenter and the edge requires, we think, a broader approach: Move data faster, store more data, process everything. We’ve obviously, for a number of years now, built out that portfolio, but we never tried to bring it all together. That’s what today was about.

We’re on a journey. I told my team today, “Welcome to the starting line.” We’re on a journey to get after solving customer problems in moving data faster, storing more data, and processing everything. Today’s the first step.

Above: Navin Shenoy of Intel says 50% of all data was created in the last two years.

Image Credit: Dean Takahashi

VentureBeat: Why does today feel different from, say, product launches that happened six months or a year or two years ago?

Shenoy: Historically, we would have introduced seven new products at seven different events at seven different times. We would have had our individual product teams talking about the virtues and benefits of their products in isolation. Two or three years ago, we started to recognize that if you start from the workload and work your way backward — if you start from the customer problem and work your way backward — you can’t do it product by product. You have to think about the problem holistically and try to solve it holistically. You miss opportunities if you don’t think about it that way.

The Twitter example today is a great example. I saw Matt talk about this privately at an event last summer. I asked him if he would come today and talk about how they found a bottleneck that they didn’t realize they had, by introducing a NAND-based caching tier in their infrastructure, and how that then transformed the way they thought about compute resources. They didn’t realize there were storage bottlenecks, so they couldn’t take full advantage of the CPU infrastructure. It’s a great example of reducing TCO, reducing the footprint in the datacenter, and improving performance by using a higher-end CPU and introducing a new cache. You can imagine we’re working together with them on further ways to re-architect things as we think about the future.

We’re having that conversation hundreds of times a week, with hundreds, even thousands, of customers. If you really want to figure out how to architect the future of the datacenter and the edge, you have to do it holistically, end to end, from the interconnect to the memory and storage to the compute.

VentureBeat: Have you updated your revenue number as far as where AI revenue for the company is?

Shenoy: No, we haven’t. We disclosed $1 billion in 2017. We haven’t updated it since. You can imagine that it’s growing fast. You can imagine that it’s growing faster than the baseline revenue of the company. But that’s as far as I want to go on that.

Above: Navin Shenoy introduced more than 50 Intel products this week.

Image Credit: Dean Takahashi

VentureBeat: How well are you doing against Nvidia on the training side of things?

Shenoy: First of all, one thing that’s important to know is that inference and training are going to evolve over time. Today they’re roughly 50/50. As I forecast where things are going, three to five years from now, inference is going to be multiples higher than training in terms of the amount of the compute workload that happens in the datacenter and at the edge.

That’s why, first and foremost, you’ve heard us talk a lot about embedding AI inference capability into Xeon and why we’ve talked about building a discrete inference accelerator. That will come out in 2020. That’s why we acquired Movidius to do edge inference for low-power domains and why we’re using FPGAs for low-latency inference in the datacenter, as well. It’s important to recognize that inference is where the action is going to be over time.

On training, we have a portfolio coming out in 2020. We’re making progress on that portfolio. You’ll hear more about that portfolio from us as we get closer. But it’s really happening in 2020.

VentureBeat: Does that include a GPU?

Shenoy: Yes. We’ve said 2020, and as we get closer you’ll hear more from us on that.

VentureBeat: As far as the investment level that’s happening on the inference side or on the training side, how would you compare that? Are equal amounts still going into both, or will it be very different?

Shenoy: We’re going to invest consistent with the market opportunity. On inference, while it’s not well-known, most AI inference happens on CPUs today. You saw today that we’re not resting on our laurels. We’re continuing to push on innovation. Adding DL Boost to Xeon is akin to adding MMX to Pentium way back in the day. What’s happening with the workload — we’re embedding it into the highest-volume datacenter CPU in the industry and unleashing all sorts of capabilities that people haven’t even dreamed about. At the same time, we’re investing in parallel in discrete inference accelerators. That’s well underway, and it’s probably our most significant investment.

But training is important, too. We’re going to invest to participate in that part of the market. We don’t think we have a monopoly on all the best ideas, by the way. You’ve seen us invest in the startup ecosystem for AI. We invested in a company called Habana in Israel. Yesterday, we announced an investment in a company called SambaNova here in Silicon Valley. We have a broad portfolio approach.

AI is already the fastest-growing workload in the datacenter, and it will be the fastest-growing workload at the edge. It’ll be part and parcel of everything we do. Three, four, five years from now we won’t talk about AI as its own thing. It’ll be like the way we used to talk about the internet in the late ’90s. What is this internet thing? And then five years after that you didn’t talk about the internet. It’s just part of everything you do. I believe AI will be similar.

Above: Navin Shenoy runs the data center group at Intel.

Image Credit: Dean Takahashi

VentureBeat: Google’s Stadia cloud gaming project raised a lot of eyebrows. Does that seem like just one thing that’s happening that’s going to come along and create much more demand on the datacenter side?

Shenoy: The idea that media of all types are going to be increasingly delivered on demand, in interactive fashion, is not a new idea. Netflix, or ByteDance, or Facebook Live, or any kind of online gaming — this is just an evolution of the idea that people want to be able to consume all different types of media in a scalable fashion online. The infrastructure has to be built in a way to handle unpredictable surges.

I was talking to Twitter earlier about how it’s very difficult for them to predict what the next surge is going to be. They know that, say, the World Cup is coming, so there will be a lot more tweets. But they don’t know when every event is going to occur that causes a surge. The only way you can handle that is by building an agile, flexible infrastructure. I don’t think online gaming, cloud gaming, is any different from that. It’s a similar phenomenon. It’s a hit-driven business. You never know when a game is going to take off. You have to build infrastructure in a flexible way to handle that.

VentureBeat: It still seems like the quality of the network, wherever you are, is going to determine what you get.

Shenoy: For sure. Technology is very unevenly distributed in the world. In my opinion, there’s always going to be a heterogeneous set of solutions for things that require low latency. This is why I talk a lot about compute moving closer to where the data is being created and consumed. There’s a reason why, through evolution, the sensors of the human being, eyes and ears and nose, are close to the compute. I think the computing world will evolve in a similar way. Computing will be closer to the sensors, the place where data is being created and consumed.

It’s inevitable that there’s going to be a massive build-out of compute closer to the user. I don’t think you’re ever going to see a world where all the compute sits in a faraway datacenter and a little bit of compute sits on your body. There’s going to be compute distributed throughout the network. I think that’s only going to accelerate as we move to 5G, which is a real profound shift. Industry observers will probably overestimate that in the short term, but underestimate it in the long term.

VentureBeat: Is there another way to translate how significant the 50 products introduced today will be to consumers?

Shenoy: It’s a fundamental transition for us. It’s going to be the fastest-ramping Xeon we’ve ever introduced. It will have an impact on the world in a way that’s difficult for us to imagine and difficult for us to predict. As you heard in some of the numbers being quoted today, the IT industry is at least a trillion dollars today. Digital transformation is this buzzword cliché that’s now moving from being two words on a page to being foundational to the way companies think about staying competitive, industries think about evolving, and economies and nations think about being competitive.

It’s difficult for me to put a number on it, but for an industry that’s a trillion dollars, something so foundational is going to have a major impact on the world. It’s what gets me excited. It’s why I drive into the office relatively quickly in the morning, because not only do we get to innovate, but we get to do it at scale and make a big impact on the world. I know my team’s excited about it. Our engineers are excited about it. Our customers are excited about it too.

VentureBeat: If this is your fastest ramp, does that mean your manufacturing issues are out of the way now?

Shenoy: On Xeon we haven’t had any challenges. We’ve prioritized Xeon. We haven’t constrained our customers’ growth at all. We’re ready to go. We’re going to ramp this as fast as we’ve ever ramped any product in our history.
