
RK Anand — along with Ashwini Choudhary, Eye-Fi founder Eugene Feinberg, former Lilium sensor systems engineer Gilles Backhus, and Valerie Chan — believes self-driving cars have a massive compute problem. While the AI models that guide the cars’ decision-making have powerful servers at their disposal during training, inferencing — the stage at which a trained model makes predictions — must be performed on board, without relying on a network connection, to ensure redundancy. But even fully trained models require powerful (and power-hungry) in-car PCs for real-time processing, a paradigm Anand and company believe is unsustainable.

That’s why in 2018 they cofounded Recogni, a San Jose, California-based company that’s developing a low-overhead perception system for autonomous vehicles. The pitch was evidently attractive to investors, who committed $25 million to the company’s series A round announced today. GreatPoint Ventures led the round, which saw participation from Toyota AI Ventures, BMW iVentures, automotive technology firm Faurecia, Osram’s Fluxunit, and DNS Capital.

“The issues within the … autonomy ecosystem range from capturing [and] generating training data to inferring in real time. These vehicles need datacenter-class performance while consuming minuscule amounts of power,” said CEO Anand, who added that the fresh funds will be used to grow Recogni’s engineering team. “Leveraging our background in machine learning, computer vision, silicon, and system design, we are engineering a fundamentally new system that benefits the auto industry with very high efficiency at the lowest power consumption.”

Recogni’s integrated module — which comprises three passively cooled image sensors, an external depth sensor, and a custom inferencing chip — can perform up to one peta-operation per second (1,000 TOPS) while consuming about 8 watts of power, thanks to an approach that offloads central processing to multiple points on a vehicle. It captures and analyzes up to three uncompressed 8- to 12-megapixel streams at 60 frames per second, achieving a claimed 70% compute efficiency in typical vision applications (or up to 500 times better than several leading platforms).
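Taken at face value, those headline figures imply an efficiency on the order of 125 TOPS per watt. A quick back-of-envelope check, using only the numbers Recogni quotes:

```python
# Efficiency implied by Recogni's stated figures (claims, not measurements):
# 1 peta-operation/s = 1,000 tera-operations/s (TOPS).
claimed_tops = 1000.0   # up to 1 POPS, as claimed
claimed_watts = 8.0     # stated power draw

efficiency = claimed_tops / claimed_watts  # TOPS per watt
print(f"{efficiency:.0f} TOPS/W")  # → 125 TOPS/W
```

Whether the chip sustains that peak throughput under real workloads is exactly what the 70% compute-efficiency claim is meant to address.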




Recogni boldly claims its system outperforms rivals by more than two orders of magnitude on perception tasks like image classification, object detection, action anticipation, and depth inference. On the ResNet-50 benchmark, it’s able to classify 92,105 images per second; on RetinaNet-101-800, it performs 1,750 inferences per second; and on R(2+1)D, it can spot 833 people concurrently. On DepthNet, Recogni says it’s capable of analyzing 3,500 scenes per second.
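To put the classification number in context against the module’s own camera input: three streams at 60 frames per second produce 180 frames every second, while the claimed ResNet-50 throughput works out to roughly 11 microseconds per image. A sketch of that arithmetic (all inputs are figures quoted above):

```python
# Real-time budget implied by the article's stated numbers.
streams = 3
fps = 60
camera_frames_per_s = streams * fps          # 180 frames/s arriving from cameras

resnet50_imgs_per_s = 92_105                 # claimed ResNet-50 throughput
latency_us = 1e6 / resnet50_imgs_per_s       # microseconds per classified image

headroom = resnet50_imgs_per_s / camera_frames_per_s
print(f"{latency_us:.1f} us/image, ~{headroom:.0f}x headroom over camera input")
```

The large headroom matters because classification is only one of several networks (detection, depth, action anticipation) that must share the chip every frame.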

Recogni initially intends to target level 2 autonomous vehicles — as defined by the Society of Automotive Engineers — a category that includes vehicles equipped with advanced driver assistance systems (like Cadillac’s Super Cruise, Nvidia’s Drive AutoPilot, and Volvo’s Pilot Assist) limited to highways and marked roads. In the near future, it plans to pivot to platforms for level 3 vehicles (which can steer, accelerate or decelerate, and pass other cars without human input) and level 4 cars (which can largely drive themselves without constant human intervention), with the goal of eventually enabling level 5 cars (which can do anything a human driver can do).

Strictly vision-based approaches to autonomous driving are by no means universally embraced, but they’re advocated by Intel’s Mobileye, which is developing a custom accelerator chip — EyeQ5 — that offers 360-degree coverage, courtesy of proprietary algorithms, cameras, and ultrasonic sensors. Similarly, driverless semi-truck startup TuSimple says its camera-based technology (which employs lidar largely for redundancy) has a 1,000-meter detection range, and Beijing tech giant Baidu recently debuted a vision-based vehicle framework (Apollo Lite) that it claims has demonstrated full autonomy on public roads.

Recogni has a competitor in Mobileye, as well as Tesla, which in April detailed a Samsung-manufactured chipset featuring over 144 tera operations per second (TOPS) of neural network performance. (On an internal Tesla benchmark, it was able to process 2,300 frames per second while sucking down 1.25 times less power than its predecessor.) Nvidia — another formidable rival in the space — says Drive Xavier, the system-on-chip at the heart of its Drive AGX Pegasus autonomous vehicle development platform, draws just 30 watts.

But Recogni isn’t a fly-by-night operation. Anand is confident its pedigreed roster of engineers, managers, and data scientists — who hail from Intel, Juniper Networks, Kumu Networks, Sun Microsystems, Silicon Graphics, Xsigo Systems, NavVis, and Cisco, among others — has the chops to take on the industry’s best-funded incumbents.

“This [funding] round, one of the largest initial venture rounds raised by any AI silicon company in the space, is a testament to our experience and responsible approach,” he said.

Beyond its San Jose headquarters, Recogni has operations in Munich.
