Researchers from Google Brain, Intel AI Lab, and UC Berkeley have created Motion2Vec, an AI model that learns tasks associated with robotic surgery — such as suturing, needle-passing, needle insertion, and tying knots — by training on surgery videos. To test the results, the model was deployed on a two-armed da Vinci robot, which passed a needle through cloth in a lab.
Motion2Vec is a representation learning algorithm trained with semi-supervised learning. It follows in the tradition of similarly named models like Word2Vec and Grasp2Vec, which encode knowledge in an embedding space. UC Berkeley researchers previously used YouTube videos to train agents to dance, do backflips, and perform a range of acrobatics, and Google has used video to train algorithms to do things like generate realistic video or predict depth using mannequin challenge videos from YouTube.
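The semi-supervised idea — learn an embedding from a few labeled demonstration frames, then propagate labels to the many unlabeled ones by proximity in that space — can be illustrated with a toy sketch. Everything below (the synthetic frame features, the two gesture classes, the identity "embedding") is a hypothetical stand-in, not the paper's actual architecture, which learns deep features from video:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for per-frame visual features from surgery video;
# Motion2Vec itself learns these features, this is illustrative only.
def make_frames(center, n, dim=8):
    return center + 0.1 * rng.standard_normal((n, dim))

# Two synthetic "gesture" classes: a few labeled frames each,
# plus many unlabeled frames whose true class we hold out.
c0, c1 = rng.standard_normal(8), rng.standard_normal(8)
labeled = np.vstack([make_frames(c0, 5), make_frames(c1, 5)])
labels = np.array([0] * 5 + [1] * 5)
unlabeled = np.vstack([make_frames(c0, 20), make_frames(c1, 20)])
held_out = np.array([0] * 20 + [1] * 20)

# Semi-supervised step: build per-class centroids from the scarce
# labeled frames, then pseudo-label each unlabeled frame by its
# nearest centroid in the embedding space.
centroids = np.stack([labeled[labels == k].mean(axis=0) for k in (0, 1)])
dists = np.linalg.norm(unlabeled[:, None, :] - centroids[None, :, :], axis=2)
pseudo_labels = dists.argmin(axis=1)

accuracy = (pseudo_labels == held_out).mean()
print(accuracy)
```

In the real model the payoff of this structure is that only a handful of expert-annotated segments are needed; the embedding generalizes the labels across the rest of the demonstration footage.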
The researchers say their work shows that surgical robots can be taught new manipulation skills by feeding them expert demonstration videos. “Results suggest performance improvement in segmentation over state-of-the-art baselines, while introducing pose imitation on this dataset with 0.94 cm error in position per observation,” the paper reads.
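The 0.94 cm figure is a mean per-observation position error between the imitated pose and the expert's pose. A metric of that form is straightforward to compute; the position arrays below are made-up placeholders, not data from the paper:

```python
import numpy as np

# Hypothetical predicted vs. expert end-effector positions (in meters),
# one 3-D point per observation; a real evaluation would log these
# from the robot and the demonstration trajectory.
predicted = np.array([[0.100, 0.200, 0.050],
                      [0.120, 0.220, 0.060]])
reference = np.array([[0.105, 0.195, 0.055],
                      [0.125, 0.212, 0.055]])

# Euclidean error per observation, converted from meters to centimeters,
# then averaged across observations.
errors_cm = 100 * np.linalg.norm(predicted - reference, axis=1)
mean_error_cm = errors_cm.mean()
print(mean_error_cm)
```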
Details about Motion2Vec were published last week on the preprint repository arXiv and presented at the IEEE International Conference on Robotics and Automation (ICRA). Videos of just eight human surgeons controlling da Vinci robots, drawn from the JIGSAWS data set, taught the algorithm motion-centric representations of manipulation skills via imitation learning. JIGSAWS, which stands for the JHU-ISI Gesture and Skill Assessment Working Set, brings together video from Johns Hopkins University (JHU) and Intuitive Surgical, Inc. (ISI).
“We use a total of 78 demonstrations from the suturing dataset,” the paper reads. “The suturing style, however, is significantly different across each surgeon.”
Other notable work from ICRA, which took place online instead of in person in Paris, includes gait optimization for lower-body exoskeletons and a Stanford lab that envisions using AI to leverage public transportation to extend delivery routes for hundreds of drones.