Facebook AI researchers today revealed details about their robot training efforts, including an initiative to teach a six-legged robot, or hexapod, to walk.

“Our goal is to reduce the number of interactions the robot needs to learn to walk, so it takes only hours instead of days or weeks,” researchers wrote in a blog post. “The techniques we are researching, which include Bayesian optimization, as well as model-based reinforcement learning, are designed to be generalized to work with a variety of different robots and environments.”
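The sample-efficiency idea behind Bayesian optimization can be sketched in a few lines of Python. This toy illustration is not Facebook's code: the gait parameter, the speed function, and the quadratic surrogate are all invented for the example. Rather than trying every setting on the robot, the optimizer fits a cheap model to the rollouts it has already run and uses that model, plus an exploration bonus, to choose the next trial.

```python
import numpy as np

def walking_speed(freq):
    # Hypothetical stand-in for a real robot rollout: forward speed
    # peaks at a stride frequency of 2.0 Hz. In reality each call
    # would be an expensive physical trial.
    return -(freq - 2.0) ** 2 + 4.0

def optimize(n_trials=10):
    # Seed the surrogate with three initial rollouts.
    xs = [0.5, 1.0, 3.5]
    ys = [walking_speed(x) for x in xs]
    candidates = np.linspace(0.5, 3.5, 301)
    for _ in range(n_trials):
        # Fit a quadratic surrogate to the rollouts seen so far.
        coeffs = np.polyfit(xs, ys, deg=2)
        pred = np.polyval(coeffs, candidates)
        # Exploration bonus: prefer settings far from anything tried yet.
        dist = np.min(np.abs(candidates[:, None] - np.asarray(xs)[None, :]), axis=1)
        acq = pred + 0.5 * dist
        x_next = float(candidates[int(np.argmax(acq))])
        xs.append(x_next)
        ys.append(walking_speed(x_next))
    # Return the best setting actually observed on the "robot."
    return xs[int(np.argmax(ys))]

print(optimize())  # converges near the optimum at 2.0 Hz
```

A real Bayesian optimizer would use a Gaussian-process surrogate with a principled acquisition function rather than a quadratic fit and a distance bonus, but the loop structure is the same: model, acquire, try, refit.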

“They could also help improve sample efficiency of reinforcement learning for other applications beyond robotics, such as A/B testing or task scheduling,” the post said.

Teaching robotic systems how to move is a long-standing challenge, one that Boston Dynamics’ SpotMini has tackled and that Google’s DeepMind took on in 2017 when it trained a bipedal agent to move, in pursuit of more flexible systems.

Facebook’s method of teaching a robot to walk involves placing sensors in the joints of each leg of the hexapod and using self-supervised reinforcement learning, a way to train AI through repeated trials that does not require task-specific training data.
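In the self-supervised setting, the learning signal comes from the robot’s own sensor stream rather than from human labels. A minimal sketch of that idea, with a made-up one-joint model that is not Facebook’s code: the robot applies a torque, reads its joint sensor, and uses that reading as the training target for its own forward model.

```python
import random

def read_sensor(angle, torque):
    # True (unknown to the learner) dynamics of one joint:
    # the next angle is the current angle plus 0.1 * torque.
    return angle + 0.1 * torque

def train_forward_model(steps=2000, lr=0.01, seed=0):
    rng = random.Random(seed)
    k = 0.0                 # learned torque coefficient (starts wrong)
    angle = 0.0
    for _ in range(steps):
        torque = rng.uniform(-1, 1)
        target = read_sensor(angle, torque)   # label comes from the robot itself
        pred = angle + k * torque             # model's prediction
        k -= lr * (pred - target) * torque    # SGD step on squared error
        angle = target
    return k

print(round(train_forward_model(), 2))  # prints 0.1: the true coefficient,
                                        # recovered with no human-labeled data
```

The point is only that no one ever annotates the data: every (state, action, next state) triple the robot experiences is its own training example.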

In April, Facebook AI researchers working with New York University published research that detailed efforts to teach a robotic arm how to be curious about the physical world. In this work, the AI system is rewarded for trying new things while optimizing its action sequences to reduce model uncertainty.
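The curiosity idea can be sketched with a count-based novelty bonus, a crude stand-in for the model-uncertainty reward the paper describes. This toy example is ours, not the researchers’ code: the agent earns intrinsic reward for moving toward states it knows least about.

```python
import random

# Toy 1-D world: states 0..9, actions step left (-1) or right (+1).
N_STATES, ACTIONS = 10, (-1, 1)

def curious_rollout(steps=500, seed=0):
    rng = random.Random(seed)
    visits = [0] * N_STATES      # visit counts as a proxy for model uncertainty
    state = 0
    for _ in range(steps):
        def novelty(a):
            # Intrinsic reward: less-visited successor states score higher.
            nxt = min(max(state + a, 0), N_STATES - 1)
            return 1.0 / (1 + visits[nxt])
        if novelty(ACTIONS[0]) == novelty(ACTIONS[1]):
            action = rng.choice(ACTIONS)        # break ties randomly
        else:
            action = max(ACTIONS, key=novelty)  # otherwise be curious
        state = min(max(state + action, 0), N_STATES - 1)
        visits[state] += 1
    return visits

visits = curious_rollout()
# A curiosity-driven agent spreads its experience across the state space
# instead of staying where it started.
print(min(visits) > 0)
```

Real curiosity-driven agents replace the visit count with the prediction error or uncertainty of a learned dynamics model, but the effect is the same: exploration is rewarded precisely where the model is weakest.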

A month earlier, Facebook AI researchers, in tandem with the University of California, Berkeley, developed a way for robots to mimic the sense of touch in order to extend their ability to move and manipulate objects beyond computer vision. The approach applies a self-supervised reinforcement learning model first developed for video to let robots roll a ball, move a joystick, and roll a 20-sided die.

Both research papers were accepted for publication at the International Conference on Robotics and Automation (ICRA), which takes place this week in Montreal.

Facebook expanded its robotics research efforts last year at labs in Pittsburgh, New York, and its company headquarters in Menlo Park, California.

The company has accelerated its robotics efforts in part because of researchers’ growing willingness to tackle challenging tasks, like teaching logical reasoning.

“There are problems that pop up in robotics that don’t really pop up in other application domains that force people to really confront the real problems that we’re facing with AI,” recent Turing Award winner and Facebook chief AI researcher Yann LeCun told VentureBeat last year. “That’s [researchers’] primary interest in application, so if we don’t work on robotics, we are basically shutting ourselves off from access to talented researchers who want to work on this topic.”

The robotics research follows a number of similar efforts in the space, including the release last month of Grasp2Vec, an AI system that teaches robots to grasp and throw things.

Microsoft recently introduced in limited preview the first part of a robotics and AI platform that is based in part on tech from AI startup Bonsai and focuses on the transfer of intelligence from human professionals to robotic hardware. Microsoft also brought the Robot Operating System (ROS) to Windows 10 last year.

Nvidia opened a robotics lab in Seattle earlier this year, and Amazon will host its re:MARS space and robotics conference in Las Vegas in June.

The past year has also been marked by the high-profile failures of a number of robotics startups, including Mayfield Robotics and Anki, which shut down in April after burning through nearly $200 million, and the death of home robot Jibo.

Rethink Robotics, maker of Baxter, also shut down last year after raising nearly $150 million.