What if you could control robots with your biceps and triceps? That’s the question posed by researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) in a newly published paper and accompanying blog post. Their answer is a system — RoboRaise — that enables human orchestrators to perform tasks with machines by moving their arm muscles.

The work is scheduled to be presented at the International Conference on Robotics and Automation this week in Montreal, Canada.

“Our approach to lifting objects with a robot aims to be intuitive and similar to how you might lift something with another person — roughly copying each other’s motions while inferring helpful adjustments. The key insight is to use nonverbal cues that encode instructions for how to coordinate, for example to lift a little higher or lower,” graduate student and lead author on the paper Joseph DelPreto said in a statement.

Toward that end, DelPreto and colleagues’ project — which builds on an existing system that let users correct robot mistakes with brainwaves and hand gestures — incorporates biceps and triceps electromyography (EMG) sensors that monitor muscle activity, along with an algorithm that interprets the detected neuromuscular signals. Slightly tensing or relaxing an arm moves the team’s Baxter humanoid robot up or down, while gesturing up or down moves the robot farther away or holds a pose.
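The tense-to-raise, relax-to-lower control described above can be pictured as a simple mapping from muscle tension to motion. The sketch below is purely illustrative and not the authors' implementation; the function name, thresholds, and gain are all hypothetical:

```python
# Illustrative sketch (not the paper's code): map a normalized EMG tension
# level to a vertical velocity command for the robot. All thresholds and
# gains here are hypothetical placeholders.

def emg_to_vertical_velocity(emg_level, rest_level=0.2, gain=0.1, deadband=0.05):
    """Return an up/down velocity from a normalized EMG level in [0, 1].

    Tension above the resting level moves the robot up; relaxing below it
    moves the robot down. A small deadband keeps the robot still when the
    arm is near its resting tension.
    """
    delta = emg_level - rest_level
    if abs(delta) < deadband:
        return 0.0  # near resting tension: hold the current pose
    return gain * delta  # positive = move up, negative = move down


# Slight tensing commands a small upward motion; near-rest holds position.
print(emg_to_vertical_velocity(0.5))   # tensed above rest: positive velocity
print(emg_to_vertical_velocity(0.18))  # within deadband: 0.0
```

A continuous mapping like this (rather than discrete up/down commands) is one plausible way to get the fluid, copy-each-other's-motions behavior DelPreto describes.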

VB TRansform 2020: The AI event for business leaders. San Francisco July 15 - 16

The team notes that because the gesture-detecting AI system is trained on data from previous users, it requires only “minimal calibration” each time it’s used. New users tense and relax their arm a few times, lift a weight to a few heights, and they’re good to go. “Using muscle signals to communicate almost makes the robot an extension of yourself that you can fluidly control,” said DelPreto.
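The quick tense-and-relax routine suggests a per-user normalization step: record the EMG amplitude at each extreme, then rescale future readings against those bounds. This is an assumed sketch of that idea, not the system's actual calibration procedure:

```python
# Hypothetical sketch of minimal per-user calibration: record EMG amplitude
# while the user relaxes and tenses a few times, then use those extremes to
# normalize future raw readings into the range [0, 1].

def calibrate(relaxed_samples, tensed_samples):
    """Build a normalizer from a handful of relaxed/tensed EMG samples."""
    lo = min(relaxed_samples)   # quietest relaxed reading
    hi = max(tensed_samples)    # strongest tensed reading

    def normalize(raw):
        # Rescale and clamp so out-of-range readings stay within [0, 1].
        return min(max((raw - lo) / (hi - lo), 0.0), 1.0)

    return normalize


# Hypothetical raw amplitudes from a few calibration reps:
norm = calibrate(relaxed_samples=[0.8, 1.0, 0.9], tensed_samples=[4.5, 5.0, 4.8])
print(norm(2.9))  # mid-range tension maps to roughly 0.5
```

Because the gesture classifier itself is trained on prior users' data, only this lightweight rescaling would need to happen per session — consistent with the “minimal calibration” the team describes.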

In the course of three experiments — one where the robot stood stationary, another where the robot moved in response to gestures but didn’t lift objects, and a third where the robot and wearer lifted objects together — the researchers found that visual feedback from the robot improved the accuracy of the heights users achieved. In a separate assembly test, the robot was able to lift both rigid and flexible objects onto bases.

The team leaves to future work increasing the system’s degrees of freedom by monitoring additional muscles with different sensor types. Already, they’ve prototyped a version that uses biceps and triceps activity levels to tell the robot how stiffly an object is being held, which they say could enable the robot to fluidly drag an object around or rigidly pull it taut.
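Using both muscles' levels to infer stiffness likely exploits co-contraction: when biceps and triceps fire simultaneously, the arm stiffens even though it doesn't move. A minimal sketch of one such proxy, assuming normalized EMG levels (this is an illustration, not the prototype's actual estimator):

```python
# Illustrative sketch (an assumption, not the prototype's code): estimate
# how stiffly the user is holding an object from biceps/triceps
# co-contraction. Both muscles must be active at once for the arm to be
# stiff, so the smaller of the two normalized levels is a simple proxy.

def stiffness_from_cocontraction(biceps_level, triceps_level):
    """Return a stiffness estimate in [0, 1] from normalized EMG levels."""
    return min(biceps_level, triceps_level)


# One muscle active alone moves the arm but keeps it compliant;
# tensing both at once signals a stiff, rigid hold.
print(stiffness_from_cocontraction(0.9, 0.1))  # low co-contraction: compliant
print(stiffness_from_cocontraction(0.8, 0.7))  # high co-contraction: stiff
```

A robot tracking this value could switch between the compliant dragging and taut pulling behaviors the team describes.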