Ctrl-labs, a New York startup that’s developing a device capable of translating electrical muscle impulses into digital signals, today announced that it has raised $28 million in a funding round led by GV, Google’s venture capital arm, with participation from a swath of other heavy hitters including Amazon’s Alexa Fund, Lux Capital, Spark Capital, Matrix Partners, Breyer Capital, and Fuel Capital. The round follows the $28 million in financing Ctrl-labs secured last May, and it brings the company’s total raised to $67 million.

CEO Thomas Reardon said the fresh capital will be put toward growing the company’s recently opened research and development lab in San Francisco, and toward supporting its commercial partnerships. He also said it’ll be used to build and distribute Ctrl-labs’ developer kit — Ctrl-kit — which the company unveiled at Slush in Helsinki, Finland, in December.

Ctrl-kit is currently in preview for select partners, and is expected to begin shipping by the end of Q1 2019.

“Like the developers and creators we hear from, we feel fundamentally dissatisfied with the pervading technologies of the last century,” he added. “Our objective with Ctrl-kit is to give the industry’s most ambitious minds the tools they need to reimagine the relationship between humans and machines.”

Much has changed since Ctrl-labs began prototyping its neural interface technology in 2017, Adam Berenzweig, director of research and development, told VentureBeat in an earlier phone interview. It’s no longer tethered by wires to a Raspberry Pi, as was the case with previous incarnations, and the wireless radios are now packed tightly into a wrist-worn enclosure that’s the size of a “large watch” and wired to a component with electrodes that sits further up the arm. Furthermore, latency has been reduced, and the algorithms that predict intent are “significantly better” than they were before.


Above: Ctrl-labs’ Ctrl-kit.

Image Credit: Ctrl-labs

On the software side of the equation, the accompanying SDK is “more mature,” with built-out JavaScript and TypeScript toolchains and new prebuilt demos that give an idea of the hardware’s capabilities. Programming is largely done through WebSockets, which provide a full-duplex communications channel.
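Ctrl-labs hasn’t published its wire format, so the endpoint, field names, and frame shape below are assumptions; this sketch only illustrates the general pattern of consuming a full-duplex WebSocket stream of EMG data from a TypeScript client.

```typescript
// Hypothetical shape of one EMG frame; the real Ctrl-kit
// schema is not public, so these fields are illustrative.
interface EmgFrame {
  timestampMs: number; // capture time in milliseconds
  channels: number[];  // one sample per electrode (Ctrl-kit has 16)
}

// Parse a raw WebSocket message into a typed frame,
// rejecting payloads that don't match the expected shape.
function parseEmgFrame(raw: string): EmgFrame {
  const data = JSON.parse(raw);
  if (typeof data.timestampMs !== "number" || !Array.isArray(data.channels)) {
    throw new Error("malformed EMG frame");
  }
  return { timestampMs: data.timestampMs, channels: data.channels.map(Number) };
}

// Wiring it to the device would look roughly like this
// (the endpoint URL is made up for illustration):
// const ws = new WebSocket("ws://localhost:9999/emg");
// ws.onmessage = (ev) => {
//   const frame = parseEmgFrame(ev.data as string);
//   console.log(frame.channels);
// };
```

Because WebSockets are full-duplex, the same connection could carry configuration commands back to the device while frames stream in.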

“We’re at the point of the launch where … we want to get it out [to] developers,” Berenzweig said.

The final version of Ctrl-kit will be in one piece, but it won’t be an entirely self-contained affair. The developer kit has to be wirelessly tethered to a PC for some processing, though the goal is to reduce that processing overhead until it can run on wearable system-on-chips.

The underlying tech remains the same. Ctrl-kit leverages differential electromyography (EMG) to translate mental intent into action, specifically by measuring changes in electrical potential caused by impulses traveling from the brain to hand muscles. Sixteen electrodes monitor the motor neuron signals amplified by the muscle fibers of motor units, and AI algorithms trained using Google’s TensorFlow distinguish between the individual pulses of each nerve.

The system works independently of muscle movement; generating a brain activity pattern that Ctrl-labs’ tech can detect requires no more than the firing of a neuron down an axon, or what neuroscientists call an action potential. That puts it a class above wearables that use electroencephalography (EEG), a technique that measures electrical activity in the brain through contacts pressed against the scalp. EMG devices draw on the cleaner, clearer signals from motor neurons, and as a result are limited only by the accuracy of the software’s machine learning model and the snugness of the contacts against the skin.
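To see why motor neuron signals are tractable to decode: each action potential shows up in an EMG channel as a brief excursion above baseline noise. Ctrl-labs uses trained TensorFlow models rather than anything this simple, but a naive threshold-crossing detector (all values here are invented) illustrates the basic idea of picking spikes out of a sampled channel.

```typescript
// Illustrative only: a fixed-threshold spike detector for one
// EMG channel. Returns the sample indices where the signal
// first rises above the threshold (spike onsets).
function detectSpikes(samples: number[], threshold: number): number[] {
  const spikes: number[] = [];
  let above = false; // are we currently inside a spike?
  for (let i = 0; i < samples.length; i++) {
    const over = Math.abs(samples[i]) > threshold;
    if (over && !above) spikes.push(i); // rising edge = spike onset
    above = over;
  }
  return spikes;
}

// Two bursts above the threshold of 1.0 → onsets at indices 2 and 6.
detectSpikes([0, 0.1, 2.5, 2.7, 0.2, 0, 3.1, 0], 1.0); // → [2, 6]
```

A real decoder must also separate overlapping spikes from different motor units, which is where the machine learning comes in.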

As for what Ctrl-labs expects its early adopters to build with Ctrl-kit, video games top the list — particularly virtual reality games, which Berenzweig believes are a natural fit for the sort of immersive experiences EMG can deliver. (Imagine swiping through an inventory screen with a hand gesture, or piloting a fighter jet just by thinking about the direction you want to fly.) And not too long ago, Ctrl-labs demonstrated a virtual keyboard that maps finger movements to PC inputs, allowing a wearer to type messages by tapping on a tabletop with their fingertips.

It remains to be seen if Ctrl-labs can succeed where others have failed. In October, Amazon-backed wearables company Thalmic Labs killed its gesture- and motion-guided Myo armband, which similarly tapped the electrical activity in arm muscles to control devices.

Still, Ctrl-labs has managed to attract talent like former Apple autonomous systems engineer Tarin Ziyaee, who’s heading up development at its San Francisco office, and Anthony Moschella, previously vice president of product at Peloton and MakerBot. Moreover, investors like Erik Nordlander, general partner at GV, are convinced that Ctrl-labs’ early momentum — in addition to the robustness of its developer tools — will help it gain an early lead in the brain-machine interface race.

“Ctrl-labs’ development of neural interfaces will empower developers to create novel experiences across a wide variety of applications,” he said. “The company has assembled a team of top neuroscientists, engineers, and developers with deep technology backgrounds, creating human-computer interactions unlike anything we have seen before.”
