Above: Intel corporate vice president Naveen Rao.

Image Credit: Dean Takahashi

VentureBeat: When I was talking to Naveen Rao [head of AI products group at Intel], we brought up that notion of Moore’s Law slowing down. In the past, you had these free improvements when you went from one manufacturing node to the next. You shrank the circuits. But that doesn’t get taken for granted anymore. The opportunity to make more breakthroughs swings back to design. Does that fit into the context here in some ways?

Singer: As somebody who has worked in architecture, design, and EDA for many years, I can tell you that we always looked at this as two driving forces. We didn't rely on only one. The process had its cadence. On the architecture and design side, we always pushed for what we could do on the architecture to get more instructions per cycle, the IPC. Better efficiency by not having to move unnecessary data. All the things that are processor-dependent.

You can look at it as–there’s a process improvement track. There’s a design improvement track. We were always pressed to get the best advancement that we can. We feel it now. Software is the same. We look at the architecture and design and the software as something that needs to move forward and make big gains alongside the process improvements.

VentureBeat: You still have different categories of research going on: graphics chips, the CPU, and then the AI solutions. I don't know whether the CPU is still steered toward the PC and the data center. Do you see a convergence of architecture, or something more like a bifurcation, where you need to keep doing separate things?


Singer: AI is not separate from CPU. AI is and has to be embedded in every single technology line. CPU has AI built into it. CPUs do a lot of other things, of course, but the investment we have in software is probably as large as what we have in hardware. How do you use AVX for AI? How do you use the new VNNI for AI? How do you design the next construct to improve AI? AI is not something that's orthogonal to CPU; it's something that permeates all architectures. Everything, CPU and GPU and FPGA, has to have AI capabilities that improve over time at all levels. There are things that are almost solely AI, but everything has to have some AI.
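For readers unfamiliar with VNNI (Vector Neural Network Instructions): it fuses the int8 multiply-accumulate at the heart of quantized inference into a single instruction. The sketch below is plain Python, not real intrinsics; the function name is invented, and the four-element lane follows the documented behavior of AVX-512 VNNI's vpdpbusd, which multiplies unsigned 8-bit values by signed 8-bit values and accumulates into a 32-bit lane.

```python
# Illustrative sketch (not real intrinsics): what one 32-bit accumulator
# lane of a vpdpbusd-style VNNI instruction computes in a single step.
# Assumes unsigned 8-bit activations and signed 8-bit weights, the
# common int8 inference layout.

def vnni_dot_lane(acc: int, activations: list, weights: list) -> int:
    """Multiply four u8 activations by four s8 weights and accumulate
    all four products into one 32-bit accumulator, as one fused step."""
    assert len(activations) == len(weights) == 4
    for a, w in zip(activations, weights):
        assert 0 <= a <= 255 and -128 <= w <= 127
        acc += a * w
    return acc

# Without a fused instruction this takes separate multiply, widen, and
# add steps; fusing them is where the int8 inference speedup comes from.
print(vnni_dot_lane(0, [1, 2, 3, 4], [10, -1, 2, 5]))  # 10 - 2 + 6 + 20 = 34
```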

In terms of the architectures, we have a few of them because we believe the needs are so diverse that trying to stretch a single architecture to capture everything, from half a watt to 400 watts, from latency-sensitive to throughput-oriented, from the data center environment to others, doesn't work. We believe there are a few prototypical architectures that provide the best solutions for very different sets of requirements. But we don't want to have more architectures than we need.

Whenever we can reuse a technology, we’re doing it. We’re moving technologies from Movidius into the CPU, or things that we’ve learned in general purpose from CPU to some of the accelerators, to give them some of those capabilities. We have some centers of gravity for architecture, for areas that are significantly different from each other. But we try to share the basic underlying technologies between them as much as we can.

VentureBeat: Is there some reason to restart a graphics investment, stand-alone graphics?

Singer: It’s a big market in general, regardless of AI.

VentureBeat: But nothing related to something else in the future? It’s just a good time to go back into it?

Singer: Intel is looking at the overall data-centric opportunity. They talked about it at the Data-Centric Innovation Summit. We see that as a tremendous opportunity overall. We're looking at various types of data-centric applications. This is not an AI view; it's a broader view. GPU has a lot of advantages and capabilities that are applicable in a data-centric world. Intel wants a portfolio where, as a customer, whatever your space and whatever types of applications you optimize around, Intel has a top-of-the-line solution for you.

VentureBeat: I tend to hear this more from the Wall Street types, people like me who don’t really understand chips, but there’s this fear of competition. “Intel has to worry about AMD in PCs again.” The focus on competing against Ryzen or whatever. “Intel has to worry about ARM coming to the data center. Intel has to worry about Nvidia in GPUs and these AI processor startups.” So many things to worry about where Intel has no single silver bullet against all of this competition. That’s an outsider’s view of Intel. I wonder what your insider’s response would be.

Above: This is the first prototype from ARM-based processor designer Ampere, an Intel rival.

Image Credit: Ampere

Singer: The answer is really simple. It’s not simplistic, but it’s really simple. We’re not developing those products and technologies as a response to competition. The best way to stay ahead is to have a focus on what we think are the customer needs and the technology leadership in these various areas. We work toward that. When a competitor suddenly has a good product, yes, it creates more competitive pressure, but it doesn’t divert us to do something different because of that. The best strategy in the long term is to focus on what we believe.

We have such an intimate understanding of what’s needed in the PC, what’s needed in the data center, what’s needed for networking. Focusing on what we believe is the leading edge capability is a much better strategy for us than trying to do it as a response to some particular feature or capability from a competitor. We’re staying the course and making sure we execute well on our strategies. That’s not a response to a particular competitor doing this or that. It’s just working toward what we think are the best solutions in each of those spaces. That’s worked well for us.

In the past, when we did have competitors, we had the right focus on executing to the strategies. Eventually we were successful. We continue the same approach that’s worked for us.

VentureBeat: What are you optimistic about or looking forward to?

Singer: The more change there is, and the more understanding of compute and hardware and software is needed, the better the environment for Intel to differentiate. Something I've seen over many years across various architectures and various spaces in compute is that, in many cases in the past, the problem was well understood. You knew what you needed to do for a graphics chip, for a CPU, for an imaging solution. The differentiation was how well you solved a well-understood problem.

In the AI space, the problem changes so quickly that if you work in 2019 on a good problem from 2017, you’re solving the wrong problem. Part of what we bring to the table is a connection and understanding of how the problem is changing, and therefore what problem needs to be solved two years out. Not only what’s the best solution for a well-defined problem.

Above: Huma Abidi, director of software optimizations in Intel’s AI group.

Image Credit: Dean Takahashi

Huma Abidi: I'm the director for software optimizations in the AI product group. My focus is to make sure that software optimizations get the best performance out of the Xeon processor. My team works with the different framework owners, open source frameworks like TensorFlow from Google, MXNet from Amazon, PaddlePaddle from Baidu, and so on.

The whole point is that both our hardware and software portfolios are very broad. In hardware we go from data center to edge to device. Similarly, the software supports all of that, based on what the user persona is. For our library developers we have something different, like building blocks. For our data scientists we have these frameworks and contribute to all of them. For application developers we have toolkits.

To support the hardware across these different software frameworks, we have several ways of doing that. One is what we call direct optimization, where we work directly with the framework owners. All the optimization work my team is doing, that people at Intel are doing, we merge into the main line. Developers get all the benefit of the work we're doing when they're on CPU.

nGraph is a new framework-neutral graph compiler, which is a sort of abstraction layer between the many different frameworks and architectures that we have.

Singer: It’s like a middleware, a middle layer, from many to many. Many frameworks to multiple hardware solutions.
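To make that many-to-many idea concrete, here is a toy sketch of a framework-neutral graph IR: per-framework importers produce a common graph, and per-target lowerings emit backend kernels, so N importers plus M lowerings replace N*M direct ports. All names here are invented for illustration and are not nGraph's actual API.

```python
# Toy sketch of a framework-neutral graph compiler's shape, in the
# spirit of nGraph's "many frameworks to many backends" middle layer.
# Classes and function names are hypothetical, not nGraph's real API.

from dataclasses import dataclass

@dataclass
class Node:
    op: str        # e.g. "matmul", "relu"
    inputs: tuple  # upstream Node objects or constants

def import_from_framework(layers):
    """Front end: translate a framework-specific layer list into the
    common graph IR. One such importer exists per framework."""
    graph = None
    for op, const in layers:
        graph = Node(op, (graph, const) if graph else (const,))
    return graph

def lower_to_backend(node, backend):
    """Back end: walk the common IR in dependency order and emit
    backend-specific kernels. One such lowering per hardware target."""
    if node is None:
        return []
    emitted = []
    for i in node.inputs:
        if isinstance(i, Node):
            emitted += lower_to_backend(i, backend)
    emitted.append(f"{backend}:{node.op}")
    return emitted

# N importers + M lowerings instead of N*M framework-to-hardware ports.
g = import_from_framework([("matmul", "W"), ("relu", None)])
print(lower_to_backend(g, "cpu"))  # ['cpu:matmul', 'cpu:relu']
```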

Abidi: In the past couple of years we have made great progress. We've dedicated ourselves to making the AI experience at Intel great. We're seeing up to 200x performance gains in training, and inference is more like 250x. As a result, we now have partners in retail, finance, education, security, every space, especially in health care. Novartis is a good example, where we worked with their engineers. They had an interesting challenge: analyzing very large images in drug discovery, 26x larger than the regular data sets we see. It turns out Xeon was the best solution for that, because of the large memory capacity. Working together with their engineers, using our optimized TensorFlow, scaling it up eight times, we were able to reduce training time from 11 hours to about 31 minutes.
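For scale, the quoted training-time reduction is simple arithmetic on the figures above: 11 hours down to about 31 minutes is roughly a 21x speedup.

```python
# Arithmetic check on the figures quoted above.
before_min = 11 * 60          # 11 hours, in minutes
after_min = 31                # about 31 minutes
speedup = before_min / after_min
print(round(speedup, 1))      # 21.3
```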

We're working with all these different segments. As for results, Stanford runs the DAWNBench competition, and the best numbers that came in for inference were from Intel. That is a combination of the optimized framework and the Xeon processor. Together with low cost and low latency, we're making huge improvements.

VentureBeat: Should the young people studying this stuff go into hardware and chip design, or into software?

Singer: Both, absolutely! People should go where their heart takes them. There’s lots of room for software. We have a lot of investment in software. There’s tremendous innovation happening in the software space. But the hardware space is exciting as well.

VentureBeat: I guess they want to know where they can make the million-dollar salaries.

Singer: [laughs] AI today is a space where extremely talented people can command very high premiums. Whether they solve it at the hardware level, with a new topology, or with a new software compiler that makes hardware that much more efficient, doing the right thing with AI, being on the leading edge, being creative, carries a very high premium, because of the high value AI has for so many industries.

Abidi: AI is very interdisciplinary. It's not just computer science. There are people from statistics, people from medicine. I see more diversity in data science than I've ever seen in computer science. As AI progresses I see more and more people coming in, at least on the software side. I've seen a big uptake there.
