Sleep apnea, a disorder that occurs when a person’s breathing is interrupted during sleep, affects an estimated 22 million Americans. The trouble is, the bulk of cases — 80% — go undiagnosed, and if left untreated, sleep apnea can increase the risk of coronary artery disease, heart attack, heart failure, and stroke.

One field of study, automatic snore sound classification (ASSC), aims to develop a method for sleep apnea diagnosis based on snore sounds (sleep apnea is characterized by repetitive episodes of decreased or completely halted airflow). But despite recent progress, there remains a shortage of labeled data on which ASSC systems can be trained.

That’s why researchers from Imperial College London, the University of Augsburg, and the Technical University of Munich sought in a new paper (“Snore-GANs: Improving Automatic Snore Sound Classification with Synthesized Data”) to develop generative adversarial networks (GANs) that create synthesized data to fill gaps in the real data. (For the uninitiated, GANs are two-part neural networks consisting of generators that produce samples and discriminators that attempt to distinguish between the generated samples and real-world samples.) The augmented data set was then used to train an ASSC model.
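To make the generator-versus-discriminator dynamic concrete, here is a minimal GAN training loop in PyTorch. This is a generic sketch, not the paper’s architecture: the toy feature vectors, layer sizes, and hyperparameters are all assumptions chosen for illustration.

```python
# Minimal generic GAN sketch (NOT the Snore-GANs architecture): a generator
# learns to map random noise to synthetic samples, while a discriminator
# learns to tell those samples apart from real ones.
import torch
import torch.nn as nn

LATENT_DIM = 16   # noise dimension fed to the generator (assumed)
FEATURE_DIM = 64  # size of each toy sample, e.g. an acoustic feature vector

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
    nn.Linear(128, FEATURE_DIM),
)
discriminator = nn.Sequential(
    nn.Linear(FEATURE_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),  # probability that the input is real
)

bce = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(32, FEATURE_DIM)   # stand-in for a batch of real data
    fake = generator(torch.randn(32, LATENT_DIM))

    # Discriminator step: push real samples toward 1, generated toward 0.
    d_opt.zero_grad()
    d_loss = bce(discriminator(real), torch.ones(32, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    d_opt.step()

    # Generator step: produce samples the discriminator scores as real.
    g_opt.zero_grad()
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    g_opt.step()
```

Training alternates the two steps until generated samples become hard to distinguish from real ones, which is what makes GAN output plausible as extra training data.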

“When conducting data augmentation, we aggregate the data from all [GANs], and randomly select data from the pool which are further merged into the original training set. By doing this, it is expected to expand the diversity of the augmented data that come from separate [GANs],” the paper’s authors explained.
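In code, that pooling step might look like the sketch below. The function name, data layout, and sampling parameter are assumptions for illustration, not details taken from the paper.

```python
# Hedged sketch of the described augmentation: aggregate samples from all
# trained GANs into one pool, draw a random subset, and merge it into the
# original training set. All names and parameters here are hypothetical.
import random

def augment_training_set(original_set, gan_outputs, n_synthetic, seed=0):
    """original_set: list of (sample, label) pairs.
    gan_outputs: one list of (sample, label) pairs per trained GAN.
    n_synthetic: how many synthesized samples to add (a free parameter)."""
    pool = [item for per_gan in gan_outputs for item in per_gan]  # aggregate
    rng = random.Random(seed)
    selected = rng.sample(pool, min(n_synthetic, len(pool)))      # random draw
    return original_set + selected                                # merge
```

Drawing from a shared pool rather than from any single GAN is what, per the authors, expands the diversity of the augmented data.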

To validate their method, they used a publicly available data set, the Munich-Passau Snore Sound Corpus (MPSSC), to classify the location of vibration within the upper airways during snoring. The corpus draws on recordings taken during clinical examinations at three medical centers in Germany between 2006 and 2015. Selected snore events were classified by medical ear, nose, and throat experts based on findings from accompanying video recordings, and the annotated samples were separated into subject-independent training, development, and test sets.
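Subject-independent here means that all snore events from a given patient land in exactly one partition, so a model is never evaluated on patients it was trained on. The sketch below illustrates the idea with scikit-learn’s GroupShuffleSplit; the MPSSC’s actual partitions are fixed by the corpus authors, and the features, labels, and patient IDs here are placeholders.

```python
# Generic illustration of a subject-independent train/dev/test split:
# grouping by patient ID keeps each patient's events in a single partition.
# All data below is random placeholder data, not the MPSSC.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

X = np.random.randn(100, 64)                    # stand-in acoustic features
y = np.random.randint(0, 4, size=100)           # stand-in class labels
subjects = np.random.randint(0, 20, size=100)   # stand-in patient IDs

# Carve off a test set by subject, then split the rest into train and dev.
outer = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
trainval_idx, test_idx = next(outer.split(X, y, groups=subjects))

inner = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
tr, dv = next(inner.split(X[trainval_idx], y[trainval_idx],
                          groups=subjects[trainval_idx]))
train_idx, dev_idx = trainval_idx[tr], trainval_idx[dv]  # original indices
```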

So how’d the approach fare? The paper’s authors report that they successfully generated data that shares a distribution with the original data, increasing the quantity of training data without the need for human annotation. Furthermore, they say, the combination of synthesized and original data improved the classifier’s performance.

“In future, we will keep collecting more snore sound data from different hospitals and patients to increase the data size and diversity, on which we will re-evaluate the proposed methods,” the researchers said. “Besides, more advanced … systems will be further proposed and evaluated in our following work to improve the acoustic sequence generation models.”