Robotic limbs aren’t a complete solution for amputees. Technicians typically spend hours manually adjusting them until they mesh with a wearer’s gait, and hours more teaching wearers how to walk with them independently, without stumbling. The costs add up quickly, since each tuning session requires a clinic visit. But the good news is, thanks to artificial intelligence (AI), a better way might be on the horizon.

In a recent paper published in the journal IEEE Transactions on Cybernetics, researchers at North Carolina State University and the University of North Carolina describe a system that applies reinforcement learning, an AI training technique that uses rewards to drive agents toward certain goals, to the task of tuning a robotic knee. In one test, the system they developed took just 10 minutes to help a prosthesis wearer walk naturally on level ground.

“Our body does weird things when we have a foreign object on our body,” Jennie Si, professor of electrical, computer, and energy engineering at Arizona State University and coauthor of the paper, told IEEE Spectrum. “In some sense, our computer reinforcement learning algorithm learns to cooperate with the human body.”

So how does it work? As the robotic limb is exercised, the AI model takes into account various parameters that define the relationship between force and motion in the limb, such as the stiffness of a robotic joint or the range of vertical motion allowed in a foreleg. The baseline settings let wearers walk relatively comfortably, but not entirely smoothly.
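The article doesn’t enumerate the parameters, but a common way to govern a robotic knee is impedance control, where the commanded joint torque depends on stiffness, damping, and an equilibrium angle. The sketch below is a hypothetical illustration of that idea; the names and values are assumptions, not the researchers’ actual controller.

```python
from dataclasses import dataclass

# Hypothetical illustration of impedance control for one gait phase:
# torque = -k * (theta - theta_eq) - b * theta_dot
@dataclass
class ImpedanceParams:
    stiffness: float    # k, N·m/rad: resistance to deviation from equilibrium
    damping: float      # b, N·m·s/rad: resistance to joint velocity
    equilibrium: float  # theta_eq, rad: target joint angle for this phase

def joint_torque(p: ImpedanceParams, theta: float, theta_dot: float) -> float:
    """Torque the controller would command for the current joint state."""
    return -p.stiffness * (theta - p.equilibrium) - p.damping * theta_dot

# One such triple per gait phase (e.g. stance flexion, stance extension,
# swing flexion, swing extension) would give 3 x 4 = 12 tunable values.
```

With a setup like this, “tuning” means searching for the stiffness, damping, and equilibrium values that produce the smoothest gait for a particular wearer.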

In the researchers’ experiments, a dozen parameters required adjustment. Training data was recorded from amputees walking in brief sessions (15 to 20 minutes) and supplied to the algorithm, which over time learned to recognize patterns in the signals from sensors embedded in the prosthetics. In the interest of safety, the researchers imposed constraints to avoid adjustments that could cause the wearer to fall. But the system arrived at parameter settings for stable, smooth walking patterns on its own.

It’s by no means perfect: the AI system can’t “know” whether its adjustments are improving or worsening a particular walking pattern, coauthor Helen Huang, a professor of biomedical engineering at both North Carolina State University and the University of North Carolina, told IEEE Spectrum.

“If you wanted to make this clinically relevant, there are many, many steps that we have to go through before this can happen,” she said. “So far it’s really just to show it’s possible — by itself that’s very, very exciting.”

But the team’s already plotting out future work. They intend to train the algorithm to handle vertical movement, such as climbing steps, and they hope to create a wireless version of the prosthetic that could collect training data outside of lab sessions.