
Everybody is taking a stab at designing artificial intelligence processors, or electronic chips that could become the brains of computers that act as if they were humans. The latest to tackle the task of designing AI chips is Vinod Dham, a former Intel executive known as the “father of the Pentium.” He has teamed up with some younger chip designers to build RAP chips, or real AI processors.

At AlphaICs, the team is creating a coprocessor chip that can do agent-based artificial intelligence. These RAP chips could one day be deployed in computing devices and autonomous cars to make decisions at lightning speeds, or in data centers on a massive scale.

In cars, the environment is constantly changing, with threats such as pedestrians emerging out of nowhere. RAP was designed for these conditions. “With our chip, you can do on-the-fly decision making,” Dham said in an interview with VentureBeat.

It is timely in one way. For years, Intel and other big chip manufacturers were able to create faster, smaller, cheaper, and less power-hungry chips by shrinking chip circuitry. This represented a fabrication improvement, where manufacturing experts could shrink circuit features from 14 nanometers (a nanometer is a billionth of a meter) to 10 nanometers, and so on. It takes only about four silicon atoms in a crystal lattice to span one nanometer. In a 10-nanometer process, the circuits are about a tenth the length of a typical virus.



Above: AlphaICs wants to create self-learning agents on a chip.

Image Credit: Alpha ICs

But even Intel has admitted that, after more than five decades of continuous progress, Moore's law is slowing down. The law in question is the 1965 prediction by Intel chairman emeritus Gordon Moore that the number of transistors on a chip would double every year, a pace he later revised to every two years. By investing more than $10 billion a year in chip factories, Intel has been able to build new factories every couple of years that could move its semiconductors to a new manufacturing node. Only GlobalFoundries, TSMC, and Samsung can make similar investments.

But Intel has postponed its shift to 10-nanometer chips until late next year. That has given rivals a chance to catch up, and it means chip designers have a historic opportunity to differentiate through design rather than manufacturing advances. That is, if you can't fabricate a smaller circuit, then maybe you can design a circuit, or a full-blown chip, that is more efficient. Dham is seizing on that idea with AlphaICs, betting that the age of AI applications calls for a whole new architecture.

“Even Gordon Moore has spoken,” Dham said. “The end is near. This could at least extend Moore’s law’s life through architecture.”

Linley Gwennap, an analyst at the Linley Group, agreed with that assessment of Moore's law.

“Moore’s Law is delivering less benefit as time goes on and could eventually grind to a halt,” Gwennap said. “At that point, the chip design will deliver the performance increases. The cool thing about AI is no one knows what the right answer is now and people are trying a lot of different architectures. It’s a very creative period. Somebody will come up with a really good solution. It’s going to be orders of magnitude better than what we have today.”

But Gwennap hasn't been briefed on AlphaICs' approach, and he is more inclined to bet on its rivals, such as Graphcore, which raised $50 million in late 2017, or Mythic, which raised $40 million earlier this year. Gwennap thinks a number of other well-funded companies are talking about the same kind of approach as AlphaICs, which involves pursuing a different path than the graphics processing units (GPUs) popularized by Nvidia.

“We all agree that GPUs aren’t great for AI, but they’re what we have today,” Gwennap said. “What we really need is a bunch of chips that are optimized to run AI in a more power-efficient way. What these guys are talking about sounds like the same pitch that everybody else is promoting. Alpha ICs is talking about Tensors, which is what Nvidia is doing, and putting agents on a chip, which is what Graphcore and everybody else is doing. Everybody has the same goal. The question is who will deliver on it soon and who can demonstrate something that is better performance per watt than Nvidia’s optimized GPUs. There’s a lot of companies in this space.”

Building Alpha ICs

Above: Nagendra Nagaraja, CEO of AlphaICs

Image Credit: Alpha ICs

Of course, Dham isn’t intimidated by long odds.

The Milpitas, California-based company raised a small seed round of $2.5 million to show that its chip design can handle AI better than central processing units (CPUs, like those made by Intel) or graphics processing units (GPUs, like those made by Nvidia). The company has a team of 25 engineers.

The company, housed in the Falcon X Incubator, is the brainchild of Nagendra Nagaraja, an 18-year chip design veteran with 28 patents, and Prashant Trivedi, a seasoned chip designer and marketer with 17 years of experience. They’re relatively unknown, though Dham credits them with doing the bulk of the work at the startup. Nagaraja worked on it on his own for a while before starting the company with Trivedi.

“I came across Nagendra and fell in love with the idea,” Dham said. “Instead of doing a GPU on steroids, we took out that overhead that was there for games. We pursued a line of thinking no one else has done. We have to prove it works” by testing it on a field programmable gate array (FPGA), or a programmable test chip.

Above: Prashant Trivedi

Image Credit: Alpha ICs

Dham, by contrast, has lived the quintessential Silicon Valley rags-to-riches immigrant story, and he has been a fixture in the tech scene for more than four decades. Born in Pune, India, he came to the U.S. in 1975 as an engineering student with just $8 in his pocket. He became a chip engineer and helped invent Intel's first flash memory chip, which has since grown into a huge multibillion-dollar business.

He went on to manage Intel’s microprocessor projects, including the breakaway Pentium chip that debuted in 1993 and cemented the company’s position as the world’s biggest chip maker. He handled the bad press on the Pentium’s bug and later joined Intel rivals NexGen and Advanced Micro Devices. He became the CEO of Silicon Spice, which he sold to Broadcom for $1.2 billion in 2000. Then he became a venture capitalist, first at NewPath Ventures and later at NEA-IndoUS Ventures. He is now president and chief operating officer at Alpha ICs.

“We have a novel way of doing new technology, and we are applying it initially to AI,” Dham said. “We are inventing real AI, not GPUs.”

GPUs have been good at classification, thanks to deep learning neural network software that in the past five years has become exceedingly adept at learning to recognize objects. But those chips aren’t quite as good at being agents, or decision makers, in the way that AlphaICs envisions. In fact, when GPUs make mistakes in recognition, the results can be disastrous, Dham said.

“There are outliers they cannot forecast,” he said. “We need a tech that has more intelligence than GPU-based deep learning, that, in addition to classification, allows you to make decisions. That is a self-learning agent on a chip that makes decisions. That is what we have done.”

By contrast, there’s a lot of dumb AI out there. You could show an AI computer a toothbrush, and it may conclude it’s a baseball bat.

“It can be dangerous if you are wrong, and wasteful,” Dham said. “Deep learning is also a black box. If something goes wrong, you don’t know where. Ours is easier to debug.”

Above: 32 RAP agents play Atari Breakout.

Image Credit: Alpha ICs

In 2013, a team of DeepMind researchers trained a neural network to play Atari 2600 games such as Breakout better than the best human players. Those games now serve as benchmark tests for AI. DeepMind's system took about seven days of training to become proficient. In 2016, Intel used a 16-core Xeon processor to do it in 24 hours. AlphaICs' chip could do it with 64 agents in six hours.

“That was a lot of Breakout,” Dham said. “We believe we can do the most inferences per watt.”

In its first attempt, AlphaICs put 32 agents on a chip. Next it will put 64 agents on a chip that is about 225 square millimeters. That's a relatively small chip, which should be more power efficient than traditional computing chips. But it thinks in a different way.
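In software terms, the decision-making "agent" approach Dham describes corresponds to reinforcement learning, where an agent learns a policy from rewards rather than from labeled examples. AlphaICs has not published its algorithms, so as a rough illustration only, here is a minimal tabular Q-learning sketch on a toy corridor environment; the environment, rewards, and hyperparameters are all invented for the example.

```python
import random

# Toy 1-D corridor: the agent starts at position 0, the goal is at 4.
# Actions: 0 = move left, 1 = move right. Reward 1.0 at the goal.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-table: q[state][action]

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

random.seed(0)
for _ in range(200):  # training episodes
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        a = random.randrange(2) if random.random() < EPSILON else max((0, 1), key=lambda x: q[s][x])
        s2, r, done = step(s, a)
        # Q-learning update: move toward reward plus discounted best next value.
        q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
        s = s2

# After training, the greedy policy moves right at every non-terminal state.
policy = [max((0, 1), key=lambda a: q[s][a]) for s in range(GOAL)]
print(policy)
```

The same trial-and-feedback loop, scaled up to neural networks and many parallel agents, is what the Atari benchmarks above measure.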

“One of the problems these days is that everybody picks their own benchmarks,” Gwennap said.

The AlphaICs chip is a collection of computational tensors, and it assimilates feedback from the real world and reacts to it. Much of the work is done in parallel. Dham said the chips deliver a tenfold reduction in latency, or the wait time between interactions.

“Google has created a Tensor-based computer, and we have gone one step beyond, creating a group of tensors to create a hierarchy to enable a new type of compute,” Dham said. “That is the genesis of our idea. CPUs have limits. GPUs were done for gaming. It was mindlessly blasting through problems. They are not oriented to making decisions in a constantly changing environment.”
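The "group of tensors" idea is not publicly specified, but the general pattern it gestures at, processing many input streams with batched tensor operations instead of sequential loops, can be sketched with NumPy. The shapes and the two-level "hierarchy" below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# 32 hypothetical agents, each observing a 16-dimensional sensor vector.
N_AGENTS, SENSOR_DIM, HIDDEN_DIM, N_ACTIONS = 32, 16, 8, 4
observations = rng.normal(size=(N_AGENTS, SENSOR_DIM))

# Two levels of tensor ops: a shared feature layer, then a per-agent
# decision layer, all evaluated as batched matrix products in parallel.
W_feature = rng.normal(size=(SENSOR_DIM, HIDDEN_DIM))
W_decide = rng.normal(size=(N_AGENTS, HIDDEN_DIM, N_ACTIONS))

features = np.maximum(observations @ W_feature, 0.0)  # (32, 8), ReLU
scores = np.einsum('ah,aho->ao', features, W_decide)  # (32, 4)
actions = scores.argmax(axis=1)                       # one decision per agent

print(actions.shape)  # (32,)
```

The point of the sketch is only that every agent's decision falls out of a few large tensor contractions, which is the kind of workload a tensor-oriented chip can parallelize instead of stepping through one agent at a time.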

Above: AlphaICs tracks a lot of data in parallel and makes decisions.

Image Credit: Alpha ICs

Rather than raise a ton of money and ramp up, AlphaICs has stayed small and proceeded carefully. It has partnered with companies such as Microsoft, and it is working on a lot of software that is needed for its coprocessor. Dham believes AlphaICs can do its work many times faster than rivals can, but the chip also will be relatively easy for engineers to program.

“A lot of what we see out there is weak AI,” Dham said. “You could call us strong AI.”

Dham said the company hopes to get a chip in the market in the middle of 2019.

Of course, Nvidia has been working for more than a decade on AI-oriented versions of its GPUs, and many of its newest chips are designed from the ground up to handle AI workloads. Nvidia also has the CUDA programming platform, which gives it a near monopoly on much of the world's AI software.

There's some pressure to succeed. Dham worries that there is a risk of another "AI winter," like those of the 1980s and 1990s, when relatively little progress was made in the field. With Moore's law slowing down, the AI chip designers and software makers have to succeed.

“GPUs brought an end to the AI winter, and they have taken off like crazy,” Dham said. “We want to create a Big Bang for real AI. For the first time in 20 years, there is an opportunity to do some creative things in chips again.”
