Is it really possible to distinguish among cars, trucks, and pedestrians with radar data alone? Absolutely, and it’s all thanks to AI. In a newly published paper on the preprint server Arxiv.org (“Radar-based Road User Classification and Novelty Detection with Recurrent Neural Network Ensembles”), scientists at Daimler and the University of Kassel in Germany describe a novel machine learning framework that can categorize individual “traffic participants,” including hidden object classes that weren’t previously known to it, from radar data alone. They claim it could be particularly useful to the driverless car industry, where object detection remains an acute area of interest.
“The overall classification performance can be improved when compared to previous methods and, additionally, novel classes can be identified much more accurately,” wrote the coauthors. They further explain that radar is one of the few sensors that can directly obtain a velocity measurement from multiple objects within view, and they note that it’s more robust to adverse weather conditions such as fog, snow, or heavy rain compared with other sensors. They also point out, though, that it’s not perfect: Radar has a relatively low angular resolution compared with other sensors, leading to sparse data representations.
The team’s solution is an ensemble of classifiers built from 80 long short-term memory (LSTM) cells, a type of recurrent neural network (layered mathematical functions loosely modeled on biological neurons) capable of learning long-term dependencies. Uniquely, the classifiers use only dynamic subsets of 98 total features, specifically statistical derivations in range, angle, amplitude, and Doppler; geometric features; and features describing the distribution of Doppler values, to identify key differences among objects. This confers the advantage of low computational overhead during model training and inference.
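To make the ensemble idea concrete, here is a minimal sketch of the bookkeeping such a scheme involves: each of 80 members is assigned its own subset of the 98 features, and the members’ per-object predictions are combined by majority vote. The subset size, the voting rule, and the helper names are illustrative assumptions, not the paper’s actual method (which trains LSTM classifiers on the radar time series).

```python
import random
from collections import Counter

NUM_FEATURES = 98   # total feature count reported in the paper
ENSEMBLE_SIZE = 80  # number of ensemble members (LSTM cells in the paper)

def make_feature_subsets(num_features, ensemble_size, subset_size, seed=0):
    """Assign each ensemble member a random subset of feature indices.

    subset_size is a hypothetical parameter; the paper selects dynamic
    subsets rather than fixed-size random ones.
    """
    rng = random.Random(seed)
    return [sorted(rng.sample(range(num_features), subset_size))
            for _ in range(ensemble_size)]

def majority_vote(predictions):
    """Combine the members' per-object class predictions into one label."""
    return Counter(predictions).most_common(1)[0][0]
```

Because every member only ever sees its own small slice of the 98 features, each model stays small, which is one plausible source of the low training and inference cost the authors describe.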
To train the classifiers, the team sourced a data set containing more than 3 million data points covering 3,800 instances of moving road users. Samples were acquired with four radar sensors mounted on the front half of a test vehicle (each with a range of roughly 100 meters), and the trained classifiers slotted detected objects into one of six buckets: pedestrian, pedestrian group, bike, car, truck, and garbage. The “pedestrian group” label was applied to multiple pedestrians who couldn’t be clearly separated in the data, while the “garbage” class covered wrongly detected artifacts and road users that didn’t fit into any of the other groups (such as motorcyclists, scooter riders, wheelchair users, cable cars, and dogs).
So how did the ensemble of classifiers fare? According to the researchers, it was 91.46% accurate on average at categorizing objects, and even more accurate when the individual classifiers shared the same feature set. Most of the classification errors reportedly occurred between the pedestrian and pedestrian group classes, confusion the researchers attribute to the number of wheelchair users and scooter riders present in the corpus.
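For readers wondering what “accurate on average” means for a multi-class problem, one common convention is the macro average: per-class accuracy (the diagonal of a confusion matrix divided by each row’s total) averaged across classes, so that rare classes like trucks count as much as common ones like cars. Whether the paper uses exactly this average is an assumption here; the confusion-matrix counts below are purely illustrative, not the study’s results.

```python
def macro_accuracy(confusion):
    """Mean of per-class accuracy: diagonal count / row total, averaged.

    `confusion` is a square list of lists; rows are true classes,
    columns are predicted classes.
    """
    per_class = [row[i] / sum(row) for i, row in enumerate(confusion)]
    return sum(per_class) / len(per_class)

# Hypothetical 3-class example (pedestrian, pedestrian group, car);
# the counts are made up for illustration only.
cm = [
    [80, 15, 5],   # true pedestrian
    [20, 75, 5],   # true pedestrian group
    [2, 3, 95],    # true car
]
print(round(macro_accuracy(cm), 4))  # → 0.8333
```

In this toy example, most errors sit in the off-diagonal cells linking the two pedestrian classes, mirroring the confusion pattern the researchers report.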
“[T]he proposed structure allows to give new insights in the importance of features for the recognition of individual classes which is crucial for the development of new algorithms and sensor requirements,” wrote the coauthors. “[T]he ability to recognize objects from classes other than the ones seen in the training data is a vital part towards autonomous driving.”
The researchers leave to future work the application of high-resolution signal processing techniques that could increase the radar’s resolution in range, angle, and Doppler, which they expect would further improve the current results.