Intel today announced plans to release the Nervana NNP-L1000, code-named Spring Crest, to make it easier for developers to test and deploy AI models. Intel first introduced the Neural Network Processor (NNP) family of chips last fall. Spring Crest will be 3 to 4 times faster than Lake Crest, its first NNP chip, said Naveen Rao, Intel VP and general manager of the AI products group.

The Nervana NNP-L1000 will be Intel's first commercial NNP chip, with broad availability planned for late 2019. The news was announced today at Intel's first-ever AI DevCon, held at the Palace of Fine Arts in San Francisco.

“We also will support bfloat16, a numerical format being adopted industrywide for neural networks, in the Intel Nervana NNP-L1000. Over time, Intel will be extending bfloat16 support across our AI product lines, including Intel Xeon processors and Intel FPGAs. This is part of a cohesive and comprehensive strategy to bring leading AI training capabilities to our silicon portfolio,” Rao said in a statement.
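For context, bfloat16 keeps float32's sign bit and full 8-bit exponent but cuts the mantissa from 23 bits to 7, preserving float32's dynamic range in half the memory at the cost of precision. Converting amounts to keeping the top 16 bits of a float32; the following minimal Python sketch illustrates the idea (it is not Intel's implementation, and it truncates rather than rounds):

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Return the 16-bit bfloat16 pattern obtained by truncating a float32."""
    bits32 = struct.unpack("<I", struct.pack("<f", x))[0]
    return bits32 >> 16  # keep sign, 8-bit exponent, and top 7 mantissa bits

def bfloat16_bits_to_float32(bits16: int) -> float:
    """Re-expand a bfloat16 bit pattern to float32 by zero-padding the mantissa."""
    return struct.unpack("<f", struct.pack("<I", bits16 << 16))[0]

x = 3.14159
b = float32_to_bfloat16_bits(x)
print(f"{x} -> bfloat16 bits {b:#06x} -> {bfloat16_bits_to_float32(b)}")
# 3.14159 -> bfloat16 bits 0x4049 -> 3.140625
```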

The new addition to the Neural Network Processor family follows the rollout of AI Core, a circuit board built around the Movidius Myriad 2 Vision Processing Unit that gives manufacturers on-device machine learning. That rollout in turn followed the release of the Neural Compute Stick, which delivers similar on-device capability.

In recent weeks, Intel has taken a series of steps to grow its presence among customers interested in the proliferating applications of AI.

Building on its Computer Vision SDK, Intel last week released OpenVINO, a framework for visual AI at the edge, and said that chips from Movidius, the computer vision startup it acquired in 2016, will be used in 8 million autonomous cars.
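OpenVINO's core workflow is loading a trained model and running inference on Intel hardware. The sketch below is illustrative only: it uses the current openvino Python package rather than the 2018-era API, and "model.xml" is a placeholder path for a network already converted to OpenVINO's IR format.

```python
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")         # placeholder path to an IR model
compiled = core.compile_model(model, "CPU")  # target an Intel CPU at the edge

frame = np.random.rand(1, 3, 224, 224).astype(np.float32)  # stand-in camera frame
result = compiled([frame])[compiled.output(0)]  # run one inference request
print(result.shape)                             # inspect the first output tensor
```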

Earlier this month, Microsoft announced a preview of Project Brainwave, which accelerates deep neural network training and deployment and is powered by Intel's Stratix 10 field-programmable gate array (FPGA).

As companies like Nvidia and ARM build reputations for graphics processing units (GPUs) optimized for image processing, and companies like Google create specialized chips for AI, Intel has been criticized as falling behind with slower general-purpose CPUs.

Intel executives and partners spent much of the morning highlighting improvements to the Xeon CPU, like a 3x performance boost when working with TensorFlow, and arguing that because most of the world's data centers run on Intel processors, Xeon still carries out the bulk of the world's AI training and deployment.
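Gains like that come from Intel's MKL-DNN-optimized TensorFlow build, which is typically tuned through threading settings. The sketch below shows the general shape of that tuning for TensorFlow 1.x; the thread counts are illustrative assumptions, not figures from the talk.

```python
import os
import tensorflow as tf

# Intel OpenMP runtime settings commonly recommended for MKL-backed TensorFlow
os.environ["KMP_BLOCKTIME"] = "1"    # release idle threads quickly
os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"
os.environ["OMP_NUM_THREADS"] = "16"  # assumption: match physical core count

config = tf.ConfigProto(
    intra_op_parallelism_threads=16,  # parallelism inside one op (e.g., matmul)
    inter_op_parallelism_threads=2,   # parallelism across independent ops
)
with tf.Session(config=config) as sess:
    a = tf.random_normal([2048, 2048])
    b = tf.random_normal([2048, 2048])
    print(sess.run(tf.reduce_sum(tf.matmul(a, b))))
```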

Also announced today: The Intel AI Lab plans to open-source its natural language processing library.
