Machine learning is a powerful tool, capable of diagnosing lung cancer, besting champion Go players, and navigating the labyrinthine streets of San Francisco. But the neural networks that underlie it tend to be inefficient, demanding substantial compute and energy to train and run.
That’s where DarwinAI comes in. Its technique, which it calls generative synthesis, ingests virtually any AI system — be it computer vision, natural language processing, or speech recognition — and spits out a highly optimized, compact version of it.
The Ontario company emerged from stealth today with $3 million in seed funding, led by Obvious Ventures, iNovia Capital, and angels from the Creative Destruction Lab in Toronto.
“From autonomous vehicles to mobile devices, we are seeing edge-based scenarios where AI is having a profound impact on business outcomes. A critical challenge in this realm is designing these powerful networks to run in situations where computational and energy resources are limited,” said DarwinAI CEO Sheldon Fernandez. Generative synthesis, he added, tackles that challenge: “It allows engineers to collaborate with powerful AI to develop efficient and interpretable network models.”
DarwinAI’s engine runs in TensorFlow (and soon PyTorch) and uses AI to obtain what Fernandez calls a “foundational understanding” of the target neural network. Specifically, it employs a generative adversarial network (GAN) — a two-part neural net consisting of a generator, which produces data samples, and a discriminator, which attempts to distinguish between the synthetic samples and real-world samples — to probe a given model as it trains.
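DarwinAI hasn't published the details of generative synthesis, but the adversarial setup described above can be sketched in miniature. The toy example below (all names, numbers, and the one-dimensional setup are illustrative, not DarwinAI's code) pits a tiny linear generator against a logistic discriminator: the discriminator learns to tell real samples from generated ones, and the generator learns to fool it.

```python
import math
import random

random.seed(0)

def sigmoid(u):
    # Clipped for numerical stability in this toy setting.
    return 1.0 / (1.0 + math.exp(-max(min(u, 30.0), -30.0)))

# "Real" data the generator must learn to mimic: samples from N(4, 0.5).
def real_sample():
    return random.gauss(4.0, 0.5)

# Generator: x = a*z + b, with latent noise z ~ N(0, 1).
a, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w*x + c), the probability that x is real.
w, c = 0.0, 0.0
lr = 0.05

for _ in range(4000):
    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    xr = real_sample()
    xf = a * random.gauss(0.0, 1.0) + b
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
    w += lr * ((1 - dr) * xr - df * xf)
    c += lr * ((1 - dr) - df)

    # Generator update: push D(fake) toward 1 (non-saturating GAN loss).
    z = random.gauss(0.0, 1.0)
    xf = a * z + b
    df = sigmoid(w * xf + c)
    grad_x = (1 - df) * w  # d/dx of log D(x)
    a += lr * grad_x * z
    b += lr * grad_x

# After training, generated samples should cluster near the real mean of 4.
fake_mean = sum(a * random.gauss(0.0, 1.0) + b for _ in range(500)) / 500
```

The same two-player structure scales up to the image- and speech-sized networks the article describes; the difference is only in the size of the generator and discriminator.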
Using knowledge it has gained about the neural network, it generates new, smaller networks that retain the original’s accuracy. (DarwinAI’s platform tests the algorithms automatically with training and test datasets.) Tunable settings let users tailor the networks for particular tasks or generate additional variants that meet finer-grained requirements.
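The compression step itself is proprietary, but the general goal of producing a smaller model that retains the original's behavior can be illustrated with magnitude pruning, a standard technique that is not necessarily the one DarwinAI uses. This sketch builds a linear model whose weights are mostly near zero, drops the small ones, and compares the outputs:

```python
import random

random.seed(1)

# A toy "model": 20 influential weights plus 80 near-zero ones.
important = [random.choice([-1, 1]) * random.uniform(0.5, 1.5) for _ in range(20)]
noise = [random.uniform(-0.05, 0.05) for _ in range(80)]
weights = important + noise
x = [random.gauss(0.0, 1.0) for _ in range(100)]

def predict(ws, xs):
    return sum(wi * xi for wi, xi in zip(ws, xs))

# Prune: zero out every weight below a magnitude threshold.
pruned = [wi if abs(wi) >= 0.25 else 0.0 for wi in weights]

dense_out = predict(weights, x)
sparse_out = predict(pruned, x)
kept = sum(1 for wi in pruned if wi != 0.0)  # 20 of 100 weights survive
```

Here 80 percent of the weights are removed while the output barely moves, which is the shape of the accuracy-versus-size trade-off the article describes, though real compression pipelines also retrain and restructure the network rather than just zeroing weights.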
There’s also a transparency component. DarwinAI’s platform has a built-in “explainer tool” that shows how the optimized algorithms arrive at their decisions and whether any data in the training set might have biased the results. In light of studies revealing that popular smart speakers are 30 percent less likely to understand foreign accents and that some facial recognition algorithms perform measurably worse on African-American faces, that’s encouraging news.
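DarwinAI hasn't described how its explainer tool works. One common family of techniques for this kind of question is perturbation-based attribution, sketched below with a hypothetical linear "model" and made-up feature names: knock out each input feature in turn and measure how much the score moves.

```python
def score(features):
    # Stand-in model: a fixed linear scorer with hypothetical weights.
    weights = {"amount": 0.8, "browser_chrome": 0.5, "hour": 0.1}
    return sum(weights[k] * v for k, v in features.items())

sample = {"amount": 1.0, "browser_chrome": 1.0, "hour": 1.0}
base = score(sample)

# Attribution: zero out one feature at a time; the score drop is
# that feature's influence on this prediction.
importance = {}
for k in sample:
    perturbed = dict(sample, **{k: 0.0})
    importance[k] = base - score(perturbed)
# importance ranks "amount" as most influential, "hour" as least.
```

Applied to a real classifier, a ranking like this is what lets an analyst see which inputs drove a decision, or whether a feature that should be irrelevant, such as accent or skin tone, is carrying weight it shouldn't.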
The results so far have been promising. In one test, the system generated a neural network 4.5 times more efficient than one produced by Google’s AutoML and Learn2Compress platforms. It also created an optimized version of DetectNet, Nvidia’s object detection network, 12 times smaller and 4 times faster than the original.
DarwinAI isn’t naming any of its clients just yet, but it revealed that one of them — a U.K. bank — used its platform to implement a specialized fraud detection model that cut its cloud spend by 70 to 80 percent. The explainer tool, meanwhile, helped it surface a surprising pattern: Hackers were targeting bank branches using Chrome more than any other browser.
“Our initial results are noteworthy,” said Dr. Alexander Wong, University of Waterloo professor and DarwinAI cofounder, “but it is important that we innovate relentlessly as we respond to our customers’ needs. For me, success is Generative Synthesis enabling deep learning solutions across a variety of verticals — impactful applications that we can only dimly imagine today.”
In addition to Dr. Wong, who’s also a founding member of Waterloo’s AI Institute and a Canada Research Chair, DarwinAI’s executive team includes software entrepreneur Sheldon Fernandez and former McKinsey & Company consultant Arif Virani.