Facebook this morning announced ReSkin, an open source touch-sensing synthetic “skin” created by researchers at the company in collaboration with Carnegie Mellon University. Leveraging machine learning and magnetic sensing, ReSkin is designed to offer an inexpensive, versatile, durable, and replaceable solution for long-term use, employing an unsupervised learning algorithm to help auto-calibrate the sensor.
Alongside ReSkin, and perhaps timed in an effort to distract from exposés detailing its internal turbulence, Facebook today outlined its broader progress in developing hardware, simulators, libraries, benchmarks, and datasets for touch sensing, which the company says form the foundation for AI systems that can understand and interact through touch.
“We typically think of touch as a way to convey warmth and care, but it’s also a key sensing modality for perceiving the world around us,” Facebook research scientist Roberto Calandra and hardware engineer Mike Lambeta said in a blog post. “Touching provides us with information not discernible through any other sense, for example about the temperature of a matter, its texture and weight, and even, sometimes, its state.”
History of touch
Tactile sensing is an emerging field in robotics that aims to understand and replicate human-level touch in the physical world. The goal is to make robots more efficient, safer, and gentler by enabling them to learn from — and use — touch on their own, in environments from homes to factory floors.
Facebook has been developing tactile sensors for the past several years, largely focused on the task of robotic grasping. In 2020, researchers at the company detailed Digit, a high-resolution, low-cost, compact tactile sensor designed to be mountable on multi-fingered robot hands.
Digit has a plastic body with an enclosure that’s conducive to both 3D printing and injection molding. Three RGB LEDs provide illumination over the elastomer gel surface, which was custom-designed using a silicone-and-acrylic manufacturing process that ostensibly balances ruggedness with sensitivity. A camera and the gel are mounted to the body using “press fit” connections so that any component (e.g., the elastomer) can be swapped out, and the housing is replaceable to allow for different lens focal lengths.
In experiments, the team used it to enable a robotic hand to hold and manipulate a glass marble between the thumb and middle finger. In the course of 50 trials, the hand dropped the marble about 25% of the time. However, the researchers attribute this to inaccuracies and noisy data as opposed to flaws in Digit’s design.
The manufacturing files for Digit’s plastic enclosure, gel, and electronics were open-sourced on GitHub last June, along with the firmware binary for programming the sensor. And Facebook today announced that it will commercially manufacture Digit in partnership with MIT spinout GelSight.
According to Facebook research manager Abhinav Gupta and postdoctoral research fellow Tess Hellebrekers, the goal behind ReSkin is to provide a source of contact data that could help advance AI across a range of touch-based tasks, like object classification. AI models with tactile sensing skills developed using ReSkin could potentially work in health care settings or grasp soft objects, Gupta and Hellebrekers say. And because ReSkin can be integrated with other sensors to collect visual, sound, and touch data in multimodal datasets, it could also help build more physically realistic models of the world than were previously possible.
“Our sense of touch helps us navigate the world around us. With it, we can gather information about objects — such as whether they’re light or heavy, soft or hard, and stable or unstable — that we use to accomplish everyday tasks from putting on our shoes to preparing a meal,” Gupta and Hellebrekers, who contributed to the ReSkin project, said in a blog post. “AI today effectively incorporates senses like vision and sound, but touch remains an ongoing challenge. That’s in part due to limited access to tactile sensing data in the wild. As a result, AI researchers hoping to incorporate touch into their models struggle to exploit richness and redundancy from touch sensing the way people do.”
ReSkin — a deformable elastomer with embedded magnetic particles — is even less expensive than Digit, costing under $6 per unit for a 100-unit manufacturing run compared with Digit’s $15 per unit for a 1,000-unit run. It’s 2 to 3 millimeters thick versus Digit’s 18 millimeters and can be used for more than 50,000 interactions, Gupta and Hellebrekers say, durability that makes ReSkin suitable for form factors from robot hands and tactile gloves to arm sleeves and even dog shoes.
“ReSkin can also provide high-frequency three-axis tactile signals for fast manipulation tasks like slipping, throwing, catching, and clapping. And when it wears out, it can be easily stripped off and replaced with a new one,” they explained in the blog post.
Neither Digit nor ReSkin is the first touch sensor of its kind, it’s worth noting. Others include OmniTact and GelFlex, a robotic gripper out of MIT’s Computer Science and Artificial Intelligence Laboratory that achieves nuanced, humanlike touch using LEDs and two cameras. The National University of Singapore has also developed touch-sensing robotic “skin” using a prototype chip from Intel.
But “soft skins,” as they’re called, have historically proven difficult to mass-manufacture because of variations between units and within a single unit over its lifetime. Each sensor has to go through a calibration routine to determine its individual response. Adding to the challenge, the materials change over time, and differently depending on how they’re used, meaning that the calibration must also adapt to these changes on its own.
ReSkin overcomes these hurdles by removing the need for an electrical connection between the soft materials and measurement electronics. The sensor’s magnetic signals rely on proximity — the electronics only need to be nearby, not connected. Beyond this, ReSkin taps a mapping function trained on data from multiple sources to make it more generalizable and “robust” than traditional mapping functions. And the sensor uses an unsupervised model to fine-tune automatically — and continuously — using small amounts of unlabeled data.
With unsupervised learning, an algorithm is given “unknown” data for which no previously defined categories or labels exist. That’s as opposed to supervised learning, where an algorithm is trained on input data annotated for a particular output until it can detect the underlying relationships. Unsupervised machine learning systems like those running on ReSkin must teach themselves to classify the unlabeled data, learning not from annotations but from the data’s inherent structure.
“Instead of providing ground-truth force labels, we can use relative positions of unlabeled data to help fine-tune the sensor’s calibration. For example, we know that out of three contact points, the two that are physically closer to each other will have a more similar tactile signal,” Gupta and Hellebrekers explained. “Taken together, ReSkin opens up a diverse range of versatile, scalable, and inexpensive tactile sensation modules that aren’t possible with existing systems. Existing camera-based tactile sensors require a minimum distance between the surface and the camera resulting in much bulkier designs. By comparison, ReSkin can be incorporated as a layer over both human and robot hands and arms.”
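The relative-position cue Gupta and Hellebrekers describe can be sketched as a triplet-style objective. The following is a hypothetical illustration, not Facebook's actual calibration code: a learnable linear map `W` calibrates raw three-axis magnetic readings, and the loss is zero only when, of three unlabeled contact points, the physically nearer pair also produces the more similar calibrated signals.

```python
import numpy as np

def calibrate(raw, W):
    """Apply a learnable linear calibration to a raw 3-axis magnetic reading
    (a stand-in for the sensor's real mapping function)."""
    return raw @ W

def relative_position_loss(anchor, near, far, W, margin=0.1):
    """Triplet-style loss: the contact physically closer to 'anchor' ('near')
    should map closer in signal space than the farther contact ('far')."""
    d_near = np.linalg.norm(calibrate(anchor, W) - calibrate(near, W))
    d_far = np.linalg.norm(calibrate(anchor, W) - calibrate(far, W))
    return max(0.0, d_near - d_far + margin)

# Three unlabeled raw readings; 'near' was recorded physically closer
# to 'anchor' than 'far' was.
anchor = np.array([1.00, 0.20, -0.10])
near = np.array([1.05, 0.22, -0.09])
far = np.array([1.90, 0.70, 0.30])

W = np.eye(3)  # identity map: no calibration applied yet
print(relative_position_loss(anchor, near, far, W))  # → 0.0, ordering already consistent
```

In a full system, such a loss would be minimized over many unlabeled triplets to fine-tune `W` continuously as the skin wears, which is the spirit of the self-calibration described above.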
To demonstrate ReSkin’s usefulness, Facebook researchers conducted several experiments showing that it can be applied to in-hand manipulation, contact localization, and other force measurement tasks. For example, they used ReSkin to track the magnitude and direction of applied force while a dog rested, walked, and ran, by placing the sensor and a circuit board inside the sole of a dog shoe.
“Our research into generalizable tactile sensing led to today’s ReSkin, which is low-cost, compact, and long-lasting. With skin that’s as easy to replace as peeling and putting on a new bandage, it can be used immediately, and our learned models perform strongly on new skins out of the box. It’s a powerful tool that will help researchers build AI models that will power a broad diversity of applications,” Gupta and Hellebrekers wrote.
Simulation, dataset, and benchmarks
To support hardware like Digit and ReSkin, Facebook this summer open-sourced two software tools: Tacto, a simulator for vision-based tactile sensors, and PyTouch, a collection of machine learning models and touch-sensing functionality built on the PyTorch machine learning framework.
Tacto can render touch readings at hundreds of frames per second and can be configured to simulate different sensors, including Facebook’s own Digit. As Calandra and Lambeta point out, simulators play an important role in prototyping, debugging, and benchmarking robotics because they allow for testing without the need to perform costly experiments. “In addition to the benefit of being able to run faster experiments in simulation, challenges with getting the right hardware as well as reducing wear and tear on hardware surfaces in tactile sensing make simulations even more important with touch sensing,” they said.
As for PyTouch, it provides basic capabilities for sensors such as detecting touch and slip, and estimating object pose. The plan is to integrate it with real-world sensors and Tacto to enable validation of models as well as “Sim2Real” capabilities — the ability to transfer concepts trained in simulation to real-world applications. Facebook also envisions PyTouch letting the robotics community use models dedicated to touch sensing “as a service,” where researchers can connect a sensor, download a pre-trained model, and use it as a building block in their application.
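The “as a service” workflow described above might look something like the sketch below. Everything here is a hypothetical stand-in, not PyTouch’s actual API: the loader, the class name, and the trivial mean-intensity model exist only to show the pattern of downloading a pretrained touch model and using it as a black-box building block.

```python
# Hypothetical sketch of "touch sensing as a service"; NOT PyTouch's real API.
# A pretrained touch-detection model is fetched once, then dropped into an
# application without any local training or data collection.

class PretrainedTouchDetector:
    """Stand-in for a downloaded pretrained model: here, just a
    mean-intensity threshold over a 2D sensor frame."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold

    def predict(self, frame):
        """Return True if the frame looks like a contact event."""
        mean = sum(sum(row) / len(row) for row in frame) / len(frame)
        return mean > self.threshold

def load_pretrained_touch_detector():
    # In a real system this would fetch weights for a specific sensor model.
    return PretrainedTouchDetector(threshold=0.5)

detector = load_pretrained_touch_detector()
idle_frame = [[0.1, 0.2], [0.1, 0.2]]    # dim frame: no contact
press_frame = [[0.8, 0.9], [0.7, 0.9]]   # bright contact patch
print(detector.predict(idle_frame))   # → False
print(detector.predict(press_frame))  # → True
```

The design point is the interface, not the model: if pretrained touch models are distributed behind a stable `predict`-style contract, researchers can swap sensors or models without rewriting their applications.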
“We are … currently studying Sim2Real transfer for training PyTouch models in simulation and deploying them on real sensors as a way to quickly collect data sets and train models,” Calandra and Lambeta said. “Collecting large-scale data sets containing large amounts of data can happen in minutes in simulation, whereas collecting data with a real sensor requires time and a person to physically probe objects. Finally, we plan to explore Real2Sim methods to better tune the simulator from real-world data.”
There’s a laundry list of blockers to overcome in tactile sensing, including hardware limitations, a lack of understanding of which touch features are used for particular tasks, and an absence of widely accepted benchmark tests. But it’s Facebook’s assertion that improvements in touch sensing, however incremental, can help advance AI and enable researchers to build robots with enhanced functionalities.
In a small step toward this, the company has released the design, documentation, code, and base model for ReSkin to help researchers use the sensor without having to collect or train their own datasets.
“It can also unlock possibilities in AR/VR, as well as lead to innovations in industrial, medical, and agricultural robotics,” Calandra and Lambeta said. “We’re working toward a future where every single robot may come equipped with touch sensing capabilities.”
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.