Intel CEO Pat Gelsinger promised on Wednesday that the semiconductor industry would meet or beat Moore's law (the observation that transistor counts, and with them processing power, roughly double every two years) for the next decade.
His remarks are significant because many in the industry have assumed Moore’s law is no longer valid, and that software is destined to make more advances in efficiency than hardware.
After it took 12 years to transition from petascale to exascale computing, Gelsinger said he is challenging his team to get to the next order of magnitude, zettascale, within five years.
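As a back-of-the-envelope check on those scales (a minimal sketch; the figures are standard prefix definitions, not Intel's projections): exascale is 10^18 FLOPS, a thousand times petascale's 10^15, and zettascale is another thousand-fold step to 10^21. Under a strict two-year doubling cadence, a 1,000X gain takes roughly 20 years, which is why a five-year zettascale target amounts to beating Moore's law rather than merely meeting it.

```python
import math

# Scale prefixes in floating-point operations per second (FLOPS).
PETA = 10**15   # petascale
EXA = 10**18    # exascale
ZETTA = 10**21  # zettascale

# Each step up the ladder is a 1,000x increase in raw compute.
step = ZETTA // EXA
print(f"exascale -> zettascale: {step:,}x")  # 1,000x

def years_to_scale(factor, doubling_period_years=2):
    """Years needed for compute to grow by `factor` if it
    doubles every `doubling_period_years` years."""
    return doubling_period_years * math.log2(factor)

print(f"years for 1,000x at 2-year doubling: {years_to_scale(step):.1f}")
# Roughly 20 years -- so hitting zettascale in 5 implies compounding
# gains well beyond transistor scaling alone.
```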
He made the remarks in a brief video interview with Intel cofounder Gordon Moore shown during Intel Innovation, an event designed to rekindle excitement about Intel products in the developer community.
Innovating for developers
Gelsinger and other company leaders made their pitch to serve the most extreme requirements of hyperscale cloud vendors, while also promising to make advanced capabilities available for all.
Even the introductory customer video that played before the keynote included references to Intel “finally” paying attention to developers again.
“Clearly, we’ve got some work to do,” Gelsinger said.
Meanwhile, computing has swung like a seesaw between centralization and distribution: from centralized mainframes, to distributed PCs, to recentralization in the cloud (a trend likely to continue for the next few years). Gelsinger said the swing will next head back toward decentralization, with edge computing bringing intelligence as close as possible to the user.
Performance for enterprise
More specific announcements included Intel's partnership with Google on the development of its Mount Evans infrastructure processing unit (IPU) and an associated infrastructure programmer development kit for making networking and datacenter infrastructure programmable. That might seem like a capability designed by a cloud vendor for use by cloud vendors. But sophisticated enterprises such as financial services firms are already taking advantage of programmable infrastructure technologies, including the Intel Tofino 3 fabric processor for network switches and the P4 network-programming language, to achieve high performance for applications like high-frequency trading, according to Nick McKeown, senior VP, general manager and senior fellow of Intel's Network and Edge Group.
These technologies will allow organizations to modernize how they manage networks and datacenters, McKeown said. For example, suppose your network is dropping packets. “We’ve been using ping and traceroute to diagnose network problems since I was a student,” he said, but often such techniques don’t capture transitory problems that might happen in the space of milliseconds.
Now it becomes possible to program every element of the network and server infrastructure, McKeown said. “You just write a small program, running in the IPU or the switches, and you can decide whether you need it running all the time or just when needed — because it’s just a program,” he said.
And these programs can run at “line speed,” meaning no degradation in performance, he said. “This would have been unimaginable just a few years ago because it would have come at such an expense in terms of loss of throughput,” McKeown added. Such work does require a pretty sophisticated programmer, but organizations with high-performance requirements are willing to do it, he said.
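The kind of always-on, fine-grained telemetry McKeown describes can be sketched in ordinary Python (an illustrative model only; in practice such logic would run as a P4 program on the switch or IPU data plane at line rate): compare per-interval ingress and egress counters and flag the windows where packets went missing, at a millisecond resolution that ping and traceroute averages would smooth over.

```python
# Illustrative sketch of transient packet-drop detection from
# per-millisecond counter snapshots. Real deployments would compile
# equivalent logic to the data plane; here the counters are simulated.

def find_drop_windows(ingress, egress, threshold=0):
    """Return (interval_index, packets_lost) for every interval in
    which more packets entered the device than left it.

    ingress, egress: per-interval packet counts (e.g. 1 ms buckets).
    """
    windows = []
    for i, (in_pkts, out_pkts) in enumerate(zip(ingress, egress)):
        lost = in_pkts - out_pkts
        if lost > threshold:
            windows.append((i, lost))
    return windows

# Simulated 1 ms buckets: a brief loss burst in buckets 3-4 that a
# multi-second ping average would never surface.
ingress = [1000, 1000, 1000, 1000, 1000, 1000]
egress  = [1000, 1000, 1000,  850,  920, 1000]
print(find_drop_windows(ingress, egress))  # [(3, 150), (4, 80)]
```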
AI for the masses
At the same time, Intel is working to make sophisticated computing capabilities like machine learning accessible to more people, with initiatives like oneAPI toolkits and the OpenVINO AI inference engine for Intel processors. "We're making this technology more accessible with low-code to no-code development tools," said Sandra Rivera, executive VP and general manager of the Datacenter and AI group at Intel. "I like to say it's the AI you need on the processor you have."
Intel aims to further ramp up the broad availability of AI with Sapphire Rapids, the code name for its next-generation Xeon processor, which Intel promises will deliver a 30X performance boost. That means AI developers will be able to work with a general-purpose processor "and not the expensive and power-consuming accelerators we have in the market today," Rivera said.