Cutting-edge drones from companies like DJI and Parrot have no trouble navigating obstacle-strewn environments, but when faced with a never-before-seen landscape like dense woods or a maze, they have a tougher time reaching goal destinations autonomously. That’s why scientists at Intel Labs and Mexico’s Center for Research and Advanced Studies of the National Polytechnic Institute recently investigated a framework for self-guided drone navigation in “cluttered” unknown environments.
They describe their work in a paper (“Autonomous Navigation of MAVs in Unknown Cluttered Environments”) published on the preprint server arXiv.org. In both qualitative and quantitative tests involving Intel’s Ready to Fly drone kit, they say that their real-time, on-device family of algorithms achieves state-of-the-art performance.
“Autonomous navigation in unknown cluttered environments is one of the fundamental problems in robotics with applications in search and rescue, information gathering and inspection of industrial and civil structures, among others,” wrote the coauthors. “Although mapping, planning, and trajectory generation can be considered mature fields considering certain combinations of robotic platforms and environments, a framework combining elements from all these fields for [drone] navigation in general environments is still missing.”
The team’s algorithmic framework — which is designed for drones equipped with 3D sensors and odometry modules — comprises three components: (1) an algorithm that builds maps from disparity measurements obtained from the drone’s depth sensor, (2) a path generation model that takes into account field-of-view constraints on the space that’s assumed to be safe for navigation, and (3) a model that generates robust motion plans. At the mapping stage, algorithms compute a point cloud from the disparity depth images and the odometry and add it to a map representation of the space the drone has observed as occupied. An exploration action is generated during the aforementioned path planning, and in a subsequent phase, the framework creates a trajectory that drives the robot from its current state to the next planned action. All the while, the models attempt to ensure that the drone’s yaw orientation — the way it twists or oscillates around a vertical axis — is aligned with the direction of motion, chiefly by employing a velocity-tracking yaw approach.
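To make the three-stage loop concrete, here is a minimal sketch in Python. Everything below — the function names, the 2D occupancy set, the fixed step size — is an illustrative assumption, not the authors' actual implementation; the point is only to show how mapping, path planning, and motion generation (with velocity-tracking yaw) chain together each cycle.

```python
import math

# Hypothetical sketch of the mapping -> planning -> motion pipeline.
# A real system would work in 3D with disparity images and a proper
# occupancy map; this toy version uses 2D points and a set of cells.

def update_map(occupancy, disparity_points, pose):
    """Stage 1 (mapping): transform sensed points by the odometry pose
    and add them to an occupancy set."""
    px, py = pose
    for (x, y) in disparity_points:
        occupancy.add((round(x + px), round(y + py)))
    return occupancy

def plan_exploration_step(pose, goal, occupancy, step=1.0):
    """Stage 2 (path planning): pick the next waypoint toward the goal,
    rejecting cells already mapped as occupied."""
    px, py = pose
    gx, gy = goal
    dist = math.hypot(gx - px, gy - py)
    if dist < step:
        return goal
    nx = px + step * (gx - px) / dist
    ny = py + step * (gy - py) / dist
    if (round(nx), round(ny)) in occupancy:
        return pose  # blocked: hold position (a real planner would search around it)
    return (nx, ny)

def motion_command(pose, waypoint):
    """Stage 3 (motion plan): a velocity command whose yaw tracks the
    direction of motion, i.e. heading stays aligned with velocity."""
    vx = waypoint[0] - pose[0]
    vy = waypoint[1] - pose[1]
    yaw = math.atan2(vy, vx)  # velocity-tracking yaw
    return (vx, vy, yaw)

# One cycle of the loop: sense, map, plan, command.
occupancy = update_map(set(), [(2.0, 1.0)], (0.0, 0.0))
waypoint = plan_exploration_step((0.0, 0.0), (5.0, 0.0), occupancy)
command = motion_command((0.0, 0.0), waypoint)
```

In this toy run the drone at the origin maps one obstacle cell, steps one unit toward a goal at (5, 0), and issues a command whose yaw points along the velocity vector — the same ordering of stages the framework repeats in real time.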
To test their framework’s robustness, the researchers performed experiments both in four real-world environments (a 3D maze, an industrial warehouse, a cluttered lab, and a forest environment) and in virtual environments using Robot Operating System (ROS) Kinetic, a popular open source robotics middleware. They report that in one of the tests, it achieved a motion time of 3.37 milliseconds compared with the benchmark algorithms’ 103.2 and 35.5 milliseconds, and that its average mapping time was 0.256 milliseconds against 700.7 and 2.035 milliseconds.
It wasn’t all smooth sailing, of course. The team notes that their algorithm tended to generate slightly larger paths than the benchmarks against which it was tested, and that it wasn’t able to reach goal destinations in a maze simulation with very tight spaces (which they chalk up to a failure to account for yaw dynamical constraints in the planning stage). But they say their work could lead to systems that integrate trajectory tracking and prediction of dynamic obstacles, which might allow future drones to navigate more effectively in crowded environments.