Silicon Valley is full of chatter about artificial intelligence, deep learning neural networks, and machine learning. And Intel, the world’s biggest chip maker, is becoming a lot more conversant in that chatter today.
Intel executive Diane Bryant announced today that the company is working on a next-generation version of its high-end server chip, the Xeon Phi, for A.I. applications.
Baidu will use the upcoming Xeon Phi chips in the data centers it is building for its Deep Speech platform, where its networks will be able to parse natural language speech as quickly and accurately as possible.
By 2020, there will be more servers handling data analytics than any other workload, Bryant said.
Intel’s chips have been speedy number crunchers for the longest time. But in recent years, Nvidia’s graphics chips have become a lot more useful in servers dedicated to neural networks, which can process unstructured data such as video or speech and recognize patterns more easily.
To respond, Intel has started focusing more resources on central processing units (CPUs) that can handle more deep learning tasks. And Intel is betting that an improved CPU, and Xeon Phi in particular, is the answer. The new chips, code-named Knights Mill, will arrive in 2017.
Intel also acquired Nervana, a San Diego, California-based deep learning startup, for more than $350 million last week. That team will help Intel on multiple levels with deep-learning cloud applications and a development framework. Jason Waxman, corporate vice president for cloud computing at Intel, said in an interview with VentureBeat that the Nervana team will be broadly useful for Intel’s A.I. efforts.
Intel argues that its Xeon Phi chips will run at “comparable levels of performance” to Nvidia’s graphics processing units. Of course, Nvidia begs to differ, and it said so in a blog post yesterday.
Bryant said that CPU-based processing delivers a big performance improvement because the processor can access memory much faster, an advantage that grows as the size of the task scales upward.
Intel is also partnering with the National Energy Research Scientific Computing Center to optimize machine learning at huge scales.
Jing Wang, senior vice president at Baidu, said, “The next era is the era of artificial intelligence. It is technology that changes people’s lives,” adding that Baidu is very excited about using A.I. for speech and natural language processing.