After four years of work, Xilinx is today announcing its adaptive compute acceleration platform (ACAP), a new kind of programmable chip that will take flexible hardware to a new level. The company spent more than $1 billion to develop it.
San Jose, California-based Xilinx makes field programmable gate arrays (FPGAs): chips whose basic functions can be reprogrammed through software in the field, after they've shipped. The company said the creation of the new ACAP chip is as significant as the invention of the FPGA in the 1980s, as it extends programmable chips to a much wider set of applications for the era of big data and artificial intelligence. For deep neural network processing, it could achieve a 20-fold increase in performance.
The ACAP packs a kitchen sink of chip features into what the company dubs a multicore heterogeneous compute platform, one that can be changed at the hardware level to adapt to the needs of a wide range of applications and workloads. An ACAP can adapt dynamically during operation, which can give it better performance and performance-per-watt than central processing units (CPUs) or graphics processing units (GPUs), said Victor Peng, CEO of Xilinx, in an interview with VentureBeat. The ACAP will have an astounding 50 billion transistors.
“This is a major technology disruption for the industry and our most significant engineering accomplishment since the invention of the FPGA,” said Peng. “This revolutionary new architecture is part of a broader strategy that moves the company beyond FPGAs and supporting only hardware developers. The adoption of ACAP products in the datacenter, as well as in our broad markets, will accelerate the pervasive use of adaptive computing, making the intelligent, connected, and adaptable world a reality sooner.”
The ACAP can be used in applications such as video transcoding, databases, data compression, search, AI inference, genomics, machine vision, computational storage, and network acceleration. Software and hardware developers will be able to design ACAP-based products for endpoint, edge, and cloud applications.
“We can do very complex system designs and still have that flexibility in programming,” Peng said. “You can still change things in the design at the last minute as protocols change.”
The first ACAP product family, codenamed Everest, will be developed in a 7-nanometer manufacturing process in TSMC factories for production later this year.
An ACAP has at its core a new generation of FPGA fabric with distributed memory and hardware-programmable digital signal processor (DSP) blocks, a multicore system-on-chip, and one or more software programmable — yet hardware adaptable — compute engines, all connected through a network on chip (NoC). That means it has a lot of built-in hardware capability that is programmable.
“This is what the future of computing looks like,” said Patrick Moorhead, analyst at Moor Insights & Strategy, in a statement. “We are talking about the ability to do genomic sequencing in a matter of a couple of minutes, versus a couple of days. We are talking about datacenters being able to program their servers to change workloads depending upon compute demands, like video transcoding during the day and then image recognition at night. This is significant.”
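The scenario Moorhead describes — one device running video transcoding by day and image recognition by night — comes down to heterogeneous engines on a shared interconnect, with the programmable fabric re-targeted at run time. The toy model below sketches that idea; all class and workload names are invented for illustration and are not a real Xilinx API.

```python
# Toy model of an ACAP-style device: fixed-function engines and an
# adaptable fabric share one notional network on chip (NoC), and the
# fabric's hardware "personality" can be swapped in the field.
# All names here are illustrative, not a real Xilinx API.

class Engine:
    def __init__(self, name, supports):
        self.name = name
        self.supports = set(supports)   # workload kinds this engine can run

    def can_run(self, workload):
        return workload in self.supports

class AdaptiveFabric(Engine):
    """FPGA-like fabric: its supported workloads can change at run time."""
    def reconfigure(self, supports):
        self.supports = set(supports)

class Device:
    def __init__(self, engines):
        self.engines = engines          # all engines share the NoC

    def dispatch(self, workload):
        # Route a workload to the first engine that can handle it.
        for engine in self.engines:
            if engine.can_run(workload):
                return engine.name
        raise ValueError(f"no engine available for {workload!r}")

fabric = AdaptiveFabric("fabric", ["video_transcode"])
dev = Device([Engine("cpu_cores", ["control"]), fabric])

print(dev.dispatch("video_transcode"))  # daytime: fabric does transcoding
fabric.reconfigure(["ai_inference"])    # nighttime: swap the personality
print(dev.dispatch("ai_inference"))     # same silicon, new workload
```

The point of the sketch is only that the same hardware resource serves different workloads over time without a board swap, which is what distinguishes an adaptable fabric from a fixed accelerator.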
ACAP has been under development for four years at an accumulated R&D investment of $1 billion, Peng said. More than 1,500 hardware and software engineers at Xilinx are currently designing ACAP and Everest. Software tools have been delivered to key customers. Everest's design will be done in 2018, and the company plans to ship chips in 2019.
Everest is expected to deliver a 20-fold performance improvement on deep neural networks compared to today's latest 16-nanometer FPGA chip. Everest-based 5G remote radio heads will have four times the bandwidth of the latest 16-nm-based radios.
A wide variety of applications across multiple markets, including automotive; industrial, scientific, and medical; aerospace and defense; test, measurement, and emulation; audio/video and broadcast; and consumer, will see significant performance increases and greater power efficiency.
Peng said unstructured data represents about 90 percent of the data that needs to be computed, and AI is still in the early stages of disrupting all industries. AI will be in just about every application, from endpoints to the edge to the cloud.
“There is no canonical architecture that will work for all workloads,” Peng said. “Innovation needs adaptability, like survival of the fittest in nature. That’s our vision for the future of computing.”