History has taught us that you can learn a lot about people from the gadgets they carry, including how they move around. In a paper published on the preprint server Arxiv.org (“A Semi-Supervised Deep Residual Network for Mode Detection in Wi-Fi Signals”), researchers at Ryerson University in Toronto, Canada, describe a neural network (i.e., layers of mathematical functions modeled after biological neurons) that can infer from Wi-Fi data whether smartphone owners choose to walk, bike, or drive around a few city blocks.

Wi-Fi has a number of advantages over commonly used mode detection methods, the researchers point out. It’s ubiquitous, for one, and it works reliably indoors, even in “challenging” environments like urban high-rises. “Due to their … pervasive nature, Wi-Fi networks have the potential to collect large-scale, low-cost, and disaggregate data on multimodal transportation,” the paper’s authors explained. “In this study, we develop a … framework to utilize Wi-Fi communications obtained from smartphones for the purpose of transportation mode detection.”

The team’s neural network architecture of choice was a deep residual network, a design originally introduced for image recognition that incorporates shortcuts, or skip connections, which let signals jump over some layers of functions in the network. (It’s inspired by the cerebral cortex’s pyramidal cells.) In this case, the model was trained in a semi-supervised fashion, meaning it learned from a mix of labeled examples (trips tagged with their mode of transportation) and unlabeled data, rather than from labels alone, to suss out patterns.
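For readers unfamiliar with residual networks, here is a minimal sketch of a residual block with a skip connection, written in PyTorch. The layer sizes, block count, and class names are assumptions for illustration only, not the architecture described in the paper.

```python
# Minimal sketch of a residual block with a skip connection (PyTorch).
# Layer sizes and structure are illustrative assumptions, not the paper's model.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # Two fully connected layers whose output is added back to the input.
        self.fc1 = nn.Linear(dim, dim)
        self.fc2 = nn.Linear(dim, dim)
        self.relu = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.fc1(x))
        out = self.fc2(out)
        # Skip connection: the block learns a residual on top of x,
        # letting gradients bypass the two layers above during training.
        return self.relu(out + x)


# Example: a small stack of residual blocks over a 15-dimensional feature
# vector (the paper extracts 15 features; the rest is arbitrary) ending in a
# 3-way classifier for walk / bike / drive.
model = nn.Sequential(ResidualBlock(15), ResidualBlock(15), nn.Linear(15, 3))
```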

To compile a dataset, the researchers used a system called UrbanFlux, consisting of Wi-Fi detectors with a 50-meter radius deployed in “a congested urban area” of downtown Toronto. (The locations were chosen for their mix of bike lanes, sidewalks, two- and one-lane streets, and streetcars, they wrote.) Over the course of several days in June 2017 and August 2018, the detectors recorded the MAC addresses, signal strengths, and connection times of individual smartphones belonging to four volunteers, who traveled a designated loop for 10 rounds, splitting their time between walking, biking, and driving. In the end, they completed 2,838 trips.
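To make the raw data concrete, the sketch below shows one hypothetical shape for a single Wi-Fi detection record, based only on the fields mentioned above (MAC address, signal strength, connection time, and the detector that observed it). The field names and types are assumptions, not the UrbanFlux schema.

```python
# Hypothetical shape of one Wi-Fi detection record, based on the fields
# described in the article. Names and types are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime


@dataclass
class WifiRecord:
    mac_address: str            # in practice, typically hashed for privacy
    detector_id: str            # which 50-meter-radius detector saw the device
    signal_strength_dbm: float  # received signal strength
    timestamp: datetime         # time of the connection event


record = WifiRecord(
    mac_address="aa:bb:cc:dd:ee:ff",
    detector_id="node_03",
    signal_strength_dbm=-67.0,
    timestamp=datetime(2017, 6, 12, 14, 30, 5),
)
```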

After training the AI system on a portion of the data, from which they extracted 15 features (based on time and speed, signal strength, and number of connections), the researchers validated it on a separate test set. It predicted all three modes of transportation with an accuracy above 80 percent, they report: 81.8 percent for walking, 82.5 percent for biking, and 86.0 percent for driving. Driving had the highest recall and precision, while biking had the lowest, potentially because biking and driving share a number of features that the system picked up on, they posit.
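As a point of reference, per-class precision and recall of the kind reported above can be computed with scikit-learn’s classification_report. The predictions in this sketch are placeholders, not the paper’s data; only the three mode labels come from the article.

```python
# Sketch of a per-class evaluation (precision/recall for each travel mode)
# using scikit-learn. The arrays below are placeholder values for illustration.
from sklearn.metrics import classification_report

labels = ["walk", "bike", "drive"]
y_true = ["walk", "bike", "drive", "bike", "drive", "walk", "drive"]
y_pred = ["walk", "drive", "drive", "bike", "drive", "walk", "bike"]

# Precision and recall are computed separately for each mode, which is how
# one would observe driving scoring highest and biking lowest.
print(classification_report(y_true, y_pred, labels=labels))
```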

“[T]his method can be used … by city decision makers, operators, and planners to have a better understanding of users’ travel habits and their trends over time,” the paper’s authors wrote. “Transportation mode detection can also be useful in urban ubiquitous sensing, as it gives insight into energy consumption, pollution tracking and prediction and burned calorie estimation.”

The researchers leave for future work extending the model to additional modes of transportation, such as subways, streetcars, and buses, and incorporating real-time data from transit schedules.