In a pair of recently published technical papers, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) propose new applications of soft robotics — the subfield dealing with machines made from tissue-like materials — that aim to tackle the challenge of grasping objects of different shapes, weights, and sizes. One builds on existing work that employs a cone-shaped, origami-inspired structure designed to collapse in on objects, while the other gives a robotic gripper more nuanced, humanlike senses in the form of LEDs and two cameras.
Despite their promise, soft robotics technologies are limited by a lack of tactile sensing. Ideally, a gripper should be able to feel what it’s touching and sense the positions of its fingers, but most soft robots can’t. The MIT CSAIL teams’ approaches ostensibly fix that.
“We wish to enable seeing the world by feeling the world. Soft robot hands have sensorized skins that allow them to pick up a range of objects, from delicate, such as potato chips, to heavy, such as milk bottles,” said MIT professor and CSAIL director Daniela Rus in a statement.
Last year, scientists at MIT CSAIL and Harvard demonstrated a gripper design capable of lifting a wide range of household objects. The team’s hollow, cone-shaped device comprises three parts that together surround items as opposed to clutching them. In one experiment where the gripper was mounted on a robot to test its strength, it managed to lift and grasp objects that were 70% of its diameter and up to 120 times its weight without damaging them.
A new MIT CSAIL team thought there was room for improvement in the existing gripper design. To give it versatility and adaptability closer to that of a human hand, they added tactile sensors made from latex bladders (balloons) connected to pressure transducers. The sensors let the gripper pick up objects as delicate as potato chips while classifying them, enabling it to better understand what it’s grasping.
The silicone-adhered sensors — one on the gripper’s outer circumference to capture its changing diameter, and four attached to the inside to measure contact forces — experience internal pressure changes under force or strain. The team measured these changes and used them to train an object-detecting algorithm running on an Arduino Due.
In 10 experiments during which the sensors captured and averaged together 256 samples (at a rate of 20Hz), the algorithm classified some objects — including a bottle, an apple, a box, and a Pringles can — with 100% accuracy. Other objects it classified with between 80% and 90% accuracy, including another bottle, a scrubber, a can, and a bag of cookies. (One bottle was misidentified as a can, which had a similar profile, and a toothbrush was misclassified as a box.)
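As a rough illustration — not the team’s actual code — the pipeline described above (five pressure channels sampled at 20Hz, 256 readings averaged per grasp, then classified) might be sketched as follows. The channel layout, feature averaging, and the toy nearest-centroid classifier are all assumptions; the paper’s classifier and data are not public in this article.

```python
import numpy as np

N_CHANNELS = 5   # assumed: 1 circumference sensor + 4 contact-force sensors
N_SAMPLES = 256  # samples averaged per grasp, captured at 20 Hz

def grasp_feature(readings: np.ndarray) -> np.ndarray:
    """Average a (N_SAMPLES, N_CHANNELS) block of pressure readings
    into one feature vector per grasp."""
    assert readings.shape == (N_SAMPLES, N_CHANNELS)
    return readings.mean(axis=0)

class NearestCentroid:
    """Toy classifier: label a grasp by its closest class centroid."""
    def fit(self, feats, labels):
        self.classes = sorted(set(labels))
        self.centroids = np.array(
            [np.mean([f for f, l in zip(feats, labels) if l == c], axis=0)
             for c in self.classes])
        return self

    def predict(self, feat):
        dists = np.linalg.norm(self.centroids - feat, axis=1)
        return self.classes[int(np.argmin(dists))]

# Synthetic training data standing in for real grasp recordings.
rng = np.random.default_rng(0)
train = {"bottle": rng.normal(1.0, 0.05, (N_SAMPLES, N_CHANNELS)),
         "apple":  rng.normal(2.0, 0.05, (N_SAMPLES, N_CHANNELS))}
feats = [grasp_feature(r) for r in train.values()]
clf = NearestCentroid().fit(feats, list(train.keys()))

# Classify a new, apple-like grasp.
query = grasp_feature(rng.normal(2.0, 0.05, (N_SAMPLES, N_CHANNELS)))
print(clf.predict(query))
```

The averaging step is what the 256-sample figure in the article suggests; a real system would likely use richer temporal features and a learned model rather than centroids.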
In separate experiments, the researchers tested the sensor-equipped grippers’ ability to grasp delicate objects and detect when those objects might be slipping. They observed that the success rate over the course of 100 trials varied with the rate of the slip, with success approaching 100% at higher slip rates. And they report that, when tasked with picking up 20 randomly selected kettle chips, the gripper grasped 80% without damage.
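One plausible way to detect slip from pressure readings — purely a hypothetical sketch, not the paper’s method — is to flag a grasp as slipping when any contact-pressure channel drops faster than a threshold between consecutive 20Hz samples. The threshold, units, and sampling interval here are illustrative assumptions.

```python
def detect_slip(prev, curr, dt=0.05, rate_threshold=-2.0):
    """Return True if any pressure channel falls faster than
    rate_threshold (pressure units per second) between two samples
    taken dt seconds apart (0.05 s = one 20 Hz sample period)."""
    rates = [(c - p) / dt for p, c in zip(prev, curr)]
    return any(r < rate_threshold for r in rates)

# Channel 0 drops 0.5 units in one sample (-10 units/s): flagged as slip.
print(detect_slip([10.0, 10.0], [9.5, 9.9]))
# A gentle, uniform settling is not flagged.
print(detect_slip([10.0, 10.0], [9.98, 9.97]))
```

This also hints at why faster slips would be easier to catch, consistent with the reported trend: a rapid slip produces a larger pressure-change rate that clears the threshold more reliably.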
In the second paper, a CSAIL team describes GelFlex, a gripper consisting of a soft, transparent silicone finger with one camera near the fingertip, a second camera near the middle, reflective ink on the front and side, and LED lights affixed to the back.
The cameras, which are equipped with fisheye lenses, capture the finger’s deformations in great detail, enabling AI models trained by the team to extract information like bending angles and the shape and size of objects being grabbed. These models and GelFlex’s design allow it to pick up various items such as a Rubik’s cube, a DVD case, or a block of aluminum. During experiments, the average positional error while gripping was less than 0.77 millimeters — better than that of a human finger — and the gripper successfully recognized various cylinders and boxes 77 out of 80 times.
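GelFlex’s models are neural networks trained on fisheye images, but the underlying idea — mapping observed finger deformation to a bending angle — can be illustrated with a toy regression. Everything below is assumed: the hand-crafted "marker displacement" feature, the synthetic data, and the linear least-squares fit standing in for the team’s trained networks.

```python
import numpy as np

# Synthetic data: an assumed image feature (e.g. displacement of a
# painted marker seen by the embedded camera) vs. true bend angle.
rng = np.random.default_rng(1)
true_slope = 30.0                              # degrees per unit feature (assumed)
features = rng.uniform(0.0, 1.0, 50)
angles = true_slope * features + rng.normal(0, 0.5, 50)  # noisy "measurements"

# Fit angle ~ w * feature + b by linear least squares.
A = np.column_stack([features, np.ones_like(features)])
(w, b), *_ = np.linalg.lstsq(A, angles, rcond=None)

# Estimate the bend angle for a new observation.
new_feature = 0.5
estimated_angle = w * new_feature + b
print(estimated_angle)
```

A real proprioception model would regress from full images (or dense optical features) to pose, but the train-then-predict structure is the same.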
In the future, the team hopes to improve the proprioception (i.e., sense of self-movement) and tactile sensing algorithms, while utilizing vision-based sensors to estimate more complex finger configurations, such as twisting or lateral bending. They’re scheduled to present their research virtually at the 2020 International Conference on Robotics and Automation, alongside the other gripper team.
“Our soft finger can provide high accuracy on proprioception and accurately predict grasped objects, and also withstand considerable impact without harming the interacted environment and itself,” lead author on the GelFlex paper Yu She said in a statement. “By constraining soft fingers with a flexible exoskeleton, and performing high resolution sensing with embedded cameras, we open up a large range of capabilities for soft manipulators.”