Activity-detecting wearables aren’t exactly novel — the Apple Watch, Fitbit’s lineup of fitness trackers, and countless smartwatches running Google’s Wear OS interpret movements to determine whether you’re, say, jogging rather than walking. But many of the algorithmic models underlying their features need lots of human-generated training data, and typically they can’t make use of that data if it isn’t labeled by hand.
Fortunately, researchers at the University of Massachusetts Amherst have developed a labor-saving solution they say could save valuable labeling time. In a paper published on the preprint server Arxiv.org (“Few-Shot Learning-Based Human Activity Recognition”), they describe a few-shot learning technique — a way to teach an AI model with a small amount of labeled training data by transferring knowledge from related tasks — optimized for wearable sensor-based activity recognition.
“Due to the high costs to obtain … activity data and the ubiquitous similarities between activity modes, it can be more efficient to borrow information from existing activity recognition models than to collect more data to train a new model from scratch when only a few data are available for model training,” the paper’s authors wrote. “The proposed few-shot human activity recognition method leverages a deep learning model for feature extraction and classification while knowledge transfer is performed in the manner of model parameter transfer.”
Concretely, the team devised a framework — few-shot human activity recognition (FSHAR) — comprising three steps. First, a deep learning model — specifically a long short-term memory (LSTM) network, a type of recurrent neural network that can capture long-term dependencies — that transforms low-level sensor input into high-level semantic information is trained on source samples. Next, source data that’s relevant or helpful to learning the target task (or tasks) is mathematically discerned and separated from data that isn’t. Lastly, the parameters of the network — i.e., the variables machine-learned from historical training data — are transferred to a target network and fine-tuned.
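The transfer-and-fine-tune step can be sketched in miniature. The example below is a hedged illustration, not the paper’s implementation: a plain logistic classifier stands in for the LSTM feature extractor, and all data, names, and hyperparameters are hypothetical. It trains on an abundant “source” task, copies the learned parameters into a “target” model, and fine-tunes on just a few labeled target samples:

```python
import math
import random

random.seed(0)

def predict(w, b, x):
    """Logistic-regression probability for class 1 (stand-in for a full network)."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    z = max(-60.0, min(60.0, z))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, w, b, lr, epochs):
    """Plain stochastic gradient descent on the logistic loss."""
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            g = predict(w, b, x) - y  # gradient of the logistic loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

# Source task: plenty of labeled data (think: an existing activity recognizer).
src_x = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(200)]
src_y = [1 if x[0] + x[1] > 0 else 0 for x in src_x]
src_w, src_b = train(src_x, src_y, [0.0, 0.0], 0.0, lr=0.5, epochs=200)

# Target task: only a handful of labeled samples, with a similar (shifted) boundary.
tgt_x = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(6)]
tgt_y = [1 if x[0] + x[1] > 0.2 else 0 for x in tgt_x]

# Parameter transfer: initialize the target model with the source parameters,
# then fine-tune gently on the few target samples.
ft_w, ft_b = train(tgt_x, tgt_y, list(src_w), src_b, lr=0.1, epochs=50)

# Held-out target samples to check the fine-tuned model.
test_x = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(100)]
test_y = [1 if x[0] + x[1] > 0.2 else 0 for x in test_x]
accuracy = sum(
    (predict(ft_w, ft_b, x) > 0.5) == (y == 1) for x, y in zip(test_x, test_y)
) / len(test_x)
```

Starting from the transferred parameters rather than from scratch is the sense in which the framework “borrows information from existing activity recognition models” instead of collecting more data.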
To validate their approach, the researchers performed experiments with 331 samples from two benchmark data sets: the Opportunity activity recognition data set (OPP), which consists of common kitchen activities recorded from four participants wearing sensors over five different runs, and the physical activity monitoring data set (PAMAP2), which comprises 12 household and exercise activities from nine participants with wearables.
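The “very few training samples per class” regime the authors evaluate can be made concrete with a small sketch of a k-shot split. Everything below is a hypothetical illustration — the activity labels merely echo PAMAP2-style categories, and this is not the paper’s exact sampling protocol:

```python
import random

random.seed(7)

# Toy stand-in for a labeled activity data set: (sample_id, activity_label) pairs.
labels = ["walking", "cycling", "ironing", "ascending_stairs"]
dataset = [(i, random.choice(labels)) for i in range(100)]

def k_shot_split(data, k):
    """Keep k labeled samples per class for training; the rest form the test set."""
    by_class = {}
    for item in data:
        by_class.setdefault(item[1], []).append(item)
    train, test = [], []
    for cls, items in by_class.items():
        random.shuffle(items)
        train.extend(items[:k])  # only k labeled examples of this activity
        test.extend(items[k:])
    return train, test

# A 2-shot split: the model sees just two labeled examples of each activity.
train_set, test_set = k_shot_split(dataset, k=2)
```

With only two examples of each activity available for training, a model trained from scratch has little to go on — which is exactly the setting where transferring parameters from an existing recognizer is meant to help.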
Compared with the baselines, they claim that FSHAR methods “almost always” achieved the best performance.
“With the proposed framework, satisfying human activity recognition results can be achieved even when only very few training samples are available for each class,” they wrote. “Experimental results show the advantages of the framework over methods with no knowledge transfer or that only transfer knowledge of feature extractor.”