Anyone who has thought about scaling a business or building a network is familiar with a dynamic referred to as the “network effect.” The more buyers and sellers who use a marketplace like eBay, for example, the more useful it becomes. Well, the data network effect is a dynamic in which increased use of a service actually improves the service, such as how machine-learning models generally grow more accurate as a result of training from larger and larger volumes of data.
Autonomous vehicles and other smart robots rely on sensors that generate increasingly massive volumes of highly varied data. This data is used to build better AI models that robots rely on to make real-time decisions and navigate real-world environments.
The confluence of sensors and AI at the heart of today's smart robots generates a virtuous feedback loop, or what we might call a "robotics network effect." We are on the verge of the tipping point that will create this network effect and transform robotics.
The rapid evolution of AI
To understand why robotics is the next frontier of AI, it helps to step back and understand how AI itself has evolved.
Machine intelligence systems developed in recent years are able to leverage huge amounts of data that simply didn’t exist in the mid-1990s when the internet was still in its infancy. Advances in storage and compute have made it possible to quickly and affordably store and process large amounts of data. But these engineering improvements alone can’t explain the rapid evolution of AI.
Open source machine learning libraries and frameworks have played a quiet but equally essential role. When the scientific computing framework Torch was released 15 years ago under a BSD open source license, it provided implementations of a number of algorithms data scientists still use today, including deep neural networks, multi-layer perceptrons, support vector machines, and K-nearest neighbors.
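To make one of these classic algorithms concrete, here is a minimal K-nearest-neighbors classifier in plain Python. This is an illustrative toy sketch, not the Torch implementation; the data points and labels are invented:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    points. `train` is a list of ((x, y), label) pairs; distance is
    squared Euclidean, which preserves the nearest-neighbor ordering."""
    dist = lambda pair: (pair[0][0] - query[0]) ** 2 + (pair[0][1] - query[1]) ** 2
    nearest = sorted(train, key=dist)[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Two small clusters of labeled points.
points = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
          ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]

print(knn_predict(points, (0.5, 0.5)))  # near the "a" cluster -> "a"
print(knn_predict(points, (5.5, 5.5)))  # near the "b" cluster -> "b"
```

The same majority-vote idea scales to higher dimensions and larger k; production libraries add spatial indexes (KD-trees, ball trees) so the neighbor search doesn't require sorting every training point.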
More recently, open source projects like TensorFlow and PyTorch have made valuable contributions to this shared repository of knowledge, helping software engineers with diverse backgrounds develop new models and applications. Domain experts require a vast amount of data to create and train these models. Large incumbents have a huge advantage because they can leverage existing data network effects.
Sensor data and processing power
Light detection and ranging (lidar) sensors have been around since the early 1960s. They’ve since found application in geomatics, archaeology, forestry, atmospheric studies, defense, and other industries. In recent years, lidars have become the preferred sensors for autonomous navigation.
The lidar sensor on Google's autonomous vehicles generates 750MB of data per second. The 8 computer vision cameras on board collectively generate another 1.8GB per second. All of this data has to be crunched in real time, but centralized compute (in the cloud) simply isn't fast enough for real-time, high-velocity situations. To clear this bottleneck, we're decentralizing compute by pushing processing to the edge or, in the case of robots, on board.
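A quick back-of-the-envelope calculation shows why the cloud can't keep up. Using the per-sensor rates cited above (the hourly extrapolation is ours, and ignores compression):

```python
# Aggregate raw sensor stream for one vehicle, per the figures above.
lidar_mb_per_s = 750     # lidar: 750 MB/s
cameras_mb_per_s = 1800  # 8 cameras: 1.8 GB/s total, in MB/s

total_mb_per_s = lidar_mb_per_s + cameras_mb_per_s
total_gb_per_hour = total_mb_per_s * 3600 / 1000  # seconds/hour, MB -> GB

print(f"Total sensor stream: {total_mb_per_s} MB/s")        # 2550 MB/s
print(f"Per hour of driving: {total_gb_per_hour:,.0f} GB")  # 9,180 GB
```

Streaming roughly 2.5GB every second to a data center and waiting for an answer is a non-starter when a braking decision is due in milliseconds, hence the push to on-board processing.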
The current solution for most of today's autonomous vehicles is to use two on-board "boxes," each of which is equipped with an Intel Xeon E5 CPU and 4 to 8 Nvidia K80 GPU accelerators. At peak performance, this setup consumes over 5,000 watts of electricity. Recent hardware innovations like Nvidia's new Drive PX Pegasus, which can compute 320 trillion operations per second, are beginning to address this bottleneck more effectively.
AI on the edge
Our ability to both process sensor data and fuse various modalities of data together will continue to drive the evolution of smart robots. In order for this sensor fusion to happen in real time, we need to put our machine learning and deep learning models on the edge. Of course, decentralized AI compounds the demands on decentralized processors.
Thankfully, machine learning and deep learning compute is becoming much more efficient. Graphcore’s intelligent processing units (IPUs) and Google’s tensor processing units (TPUs), for example, are lowering the cost and accelerating the performance of neural networks at scale.
Elsewhere, IBM is developing neuromorphic chips that mimic brain anatomy. Prototypes use a million neurons, with 256 synapses per neuron. The system is particularly well suited to interpret sensory data because it’s designed to approximate the way the human brain interprets and analyzes perceptual data.
All of this sensor data puts us on the verge of a robotics network effect, a shift that will have dramatic implications for AI, robotics, and their many applications.
A new world of data
The robotics network effect will enable new technologies and machines to act not only on larger volumes and velocities of data, but also on expanding varieties of data. New sensors will be able to detect and capture data that we might not even be thinking about, bound as we are by the limits of human perception. Machines and smart devices will contribute enriched data back to the cloud and to neighboring agents, informing decision making, enhancing coordination, and playing a vital role in continuous model improvement.
These advancements are coming more quickly than many realize. Aromyx, for example, uses receptors and advanced machine learning models to build sensor systems and a platform for the digital capture, indexing, and search of scent and taste data. The company’s EssenceChip is a disposable sensor that outputs the same biochemical signals that the human nose or tongue sends to the brain when we smell or taste a food or beverage.
Open Bionics is developing robotic prostheses that rely on haptic data collected from sensors within the arm socket to control hand and finger movements. This non-invasive design leverages machine learning models to translate fine muscle tension sensed by the electrodes into complex motor response in the bionic hands.
Sensor data will be instrumental in pushing the boundaries of AI. AI systems will simultaneously expand our ability to process data and discover creative uses for it. Among other things, this will inspire new robotic form factors capable of collecting even broader modalities of data. As we advance our ability to "see" in new ways, the everyday world around us is rapidly emerging as the next great frontier of discovery.
Alex Housley is the founder and CEO of Seldon, the machine learning deployment platform that gives data science teams new capabilities around infrastructure, collaboration, and compliance.
Santiago Tenorio is a general partner at Rewired, a robotics-focused venture studio investing in applied science and technologies that advance machine perception.