For decades, amateur two-way radio operators have communicated across entire continents by choosing the right radio frequency at the right time of day, a luxury made possible by relatively few users and devices sharing the airwaves. But as cellular radios multiply in phones and Internet of Things devices alike, finding interference-free frequencies is becoming more difficult. Researchers are therefore planning to use deep learning to create cognitive radios that instantly adjust their operating frequencies for optimal performance.

As researchers at Northeastern University’s Institute for the Wireless Internet of Things explain, the increasing variety and density of cellular IoT devices are creating new challenges for wireless network optimization. A given swath of radio frequencies may be shared by a hundred small radios designed to operate in the same general area, each with its own signaling characteristics and its own way of adapting to changing conditions. The sheer number of devices reduces the efficacy of fixed mathematical models for predicting which spectrum fragments will be free at any given instant.
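To make that limitation concrete, consider what a “fixed mathematical model” often looks like in practice. The sketch below is a generic textbook baseline, not anything from the Northeastern work: an energy detector that declares a frequency bin occupied whenever its power crosses a hard-coded threshold, a single rule that cannot account for hundreds of devices with different behaviors.

```python
# A generic "fixed mathematical model" baseline (a textbook energy detector,
# not anything from the Northeastern research): a frequency bin is declared
# occupied whenever its power exceeds a hard-coded threshold.
import numpy as np

def occupied_bins(iq_samples: np.ndarray, threshold_db: float = 30.0) -> np.ndarray:
    """Flag FFT bins whose power exceeds a fixed threshold (units uncalibrated)."""
    spectrum = np.fft.fft(iq_samples)
    power_db = 10 * np.log10(np.abs(spectrum) ** 2 + 1e-12)
    return power_db > threshold_db  # one fixed rule, blind to device behavior

# Fake I/Q capture standing in for a real software-defined radio window
samples = np.random.randn(1024) + 1j * np.random.randn(1024)
print(occupied_bins(samples).sum(), "of 1024 bins flagged as occupied")
```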

That’s where deep learning comes in. The researchers hope to embed machine learning techniques within the wireless devices’ hardware to improve frequency utilization, such that the devices can develop AI-optimized spectrum usage strategies by themselves. Early studies suggest that deep learning models average 20% higher classification accuracy than traditional systems on noisy radio channels, and that they will scale to hundreds of simultaneous devices rather than dozens. Moreover, the deep learning architecture developed for this purpose will be reusable for other tasks as well.
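The source does not publish the architecture, but a typical spectrum classifier of this kind is a small convolutional network fed raw I/Q (in-phase and quadrature) radio samples. The sketch below, assuming PyTorch and a hypothetical window size and class count, shows the general shape of such a model, not the researchers’ actual design.

```python
# A minimal sketch of a raw-I/Q spectrum classifier, assuming PyTorch.
# The architecture, window size, and class count are hypothetical, NOT the
# researchers' published design.
import torch
import torch.nn as nn

class IQClassifier(nn.Module):
    def __init__(self, num_classes: int = 10, window: int = 1024):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, padding=3),  # 2 input channels: I and Q
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        # Two 4x poolings shrink the window by 16x before the linear head
        self.classifier = nn.Linear(64 * (window // 16), num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 2, window) raw in-phase/quadrature samples
        return self.classifier(self.features(x).flatten(1))

model = IQClassifier()
fake_batch = torch.randn(8, 2, 1024)  # eight fake 1,024-sample I/Q windows
logits = model(fake_batch)            # (8, 10) per-class scores
print(logits.shape)
```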

One key challenge in implementing deep learning for this application is the massive amount of data that must be processed rapidly for continuous analysis. Deep learning models can rely on tens of millions of parameters, and here they might need to ingest more than a hundred megabytes of measurements per second while making decisions at millisecond timescales. This is beyond the capability of “even the most powerful embedded devices currently available,” the researchers note, and the latency requirements rule out offloading the processing to the cloud.
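The arithmetic behind that constraint is easy to check. The numbers below are our own back-of-the-envelope calculation from the figures above, with a hypothetical 16-bit I/Q sample format:

```python
# Our own back-of-the-envelope check of the data budget described above,
# assuming (hypothetically) 16-bit I and Q samples.
BYTES_PER_SECOND = 100 * 1_000_000  # "more than a hundred megabytes per second"
WINDOW_SECONDS = 1e-3               # decisions at millisecond timescales

bytes_per_window = BYTES_PER_SECOND * WINDOW_SECONDS  # data to digest per decision
samples_per_window = bytes_per_window / (2 * 2)       # 2 channels x 2 bytes each

print(f"{bytes_per_window:,.0f} bytes per 1 ms window")    # 100,000
print(f"{samples_per_window:,.0f} I/Q samples per window") # 25,000
```

In other words, every millisecond the device must consume roughly 100 KB of fresh samples and produce a decision before the next window arrives, which is why on-device inference, rather than a round trip to the cloud, is the only viable option.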

So the goal will be to shrink deep learning models to the point where they can run on small devices, and to use complex testing facilities the researchers call “wireless data factories” to improve the software as hardware improves, including raising its resilience against adversarial attacks. The researchers expect to apply the technique to both 5G millimeter wave and future 6G terahertz hardware, which are expected to become even more ubiquitous than 4G devices over the next two decades despite their ultra-high-frequency signals’ susceptibility to physical interference.
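The article does not describe the compression pipeline, but one widely used way to shrink a trained network for embedded deployment is post-training quantization, which stores weights as 8-bit integers instead of 32-bit floats. Below is a minimal sketch using PyTorch’s dynamic quantization, purely as an illustration of the general idea:

```python
# Illustration only: post-training dynamic quantization in PyTorch, which
# repacks Linear-layer weights as int8 (the researchers' own compression
# pipeline is not described in the source).
import torch
import torch.nn as nn

float_model = nn.Sequential(
    nn.Linear(4096, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

quantized_model = torch.ao.quantization.quantize_dynamic(
    float_model, {nn.Linear}, dtype=torch.qint8
)

def param_bytes(model: nn.Module) -> int:
    """Total bytes occupied by a model's floating-point parameters."""
    return sum(p.numel() * p.element_size() for p in model.parameters())

print(f"float32 parameters: ~{param_bytes(float_model):,} bytes")
# The quantized model packs its int8 weights internally rather than exposing
# them as parameters; its Linear layers take roughly a quarter of the space.
```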
