MIT scientists have developed a deep learning system, Air-Guardian, designed to work in tandem with airplane pilots to enhance flight safety. This artificial intelligence (AI) copilot can detect when a human pilot overlooks a critical situation and intervene to prevent potential incidents.
The backbone of Air-Guardian is a novel deep learning system known as Liquid Neural Networks (LNN), developed by the MIT Computer Science and Artificial Intelligence Lab (CSAIL). LNNs have already demonstrated their effectiveness in various fields. Their potential impact is significant, particularly in areas that require compute-efficient and explainable AI systems, where they might be a viable alternative to current popular deep learning models.
Air-Guardian employs a unique method to enhance flight safety. It monitors both the human pilot’s attention and the AI’s focus, identifying instances where the two do not align. If the human pilot overlooks a critical aspect, the AI system steps in and takes control of that particular flight element.
This human-in-the-loop system is designed to maintain the pilot’s control while allowing the AI to fill in gaps. “The idea is to design systems that can collaborate with humans. In cases when humans face challenges in order to take control of something, the AI can help. And for things that humans are good at, the humans can keep doing it,” said Ramin Hasani, AI scientist at MIT CSAIL and co-author of the Air-Guardian paper.
For instance, when an airplane is flying close to the ground, the g-forces acting on the pilot can change unpredictably, potentially causing the pilot to lose consciousness. In such scenarios, Air-Guardian can take over to prevent incidents. In other situations, the human pilot might be overwhelmed by the volume of information displayed on the screens. Here, the AI can sift through the data and highlight critical information the pilot might have missed.
Air-Guardian uses eye-tracking technology to monitor human attention, while heatmaps are used to indicate where the AI system’s attention is directed. When a divergence between the two is detected, Air-Guardian evaluates whether the AI has identified an issue that requires immediate attention.
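The paper's actual intervention logic isn't reproduced here, but the core comparison can be sketched in a few lines. In this illustrative Python snippet, the function names, the symmetric KL-style score, and the threshold are all assumptions for the sake of the example, not the authors' implementation: pilot gaze and model saliency are treated as normalized heatmaps, and a divergence above a threshold raises a flag.

```python
import numpy as np

def attention_divergence(pilot_heatmap, model_heatmap, eps=1e-9):
    """Symmetric KL-style score between two attention heatmaps
    (e.g. pilot gaze vs. model saliency); higher means more disagreement."""
    p = pilot_heatmap.astype(float) / (pilot_heatmap.sum() + eps)
    q = model_heatmap.astype(float) / (model_heatmap.sum() + eps)
    kl_pq = np.sum(p * np.log((p + eps) / (q + eps)))
    kl_qp = np.sum(q * np.log((q + eps) / (p + eps)))
    return 0.5 * (kl_pq + kl_qp)

def guardian_should_intervene(pilot_heatmap, model_heatmap, threshold=1.0):
    """Flag a possible intervention when the two attention maps diverge."""
    return attention_divergence(pilot_heatmap, model_heatmap) > threshold
```

Identical maps score near zero; maps concentrated on disjoint regions score high and trip the flag.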
AI for safety-critical systems
Air-Guardian, like many other control systems, is built upon a deep reinforcement learning model. This model involves an AI agent, powered by a neural network, that takes actions based on environmental observations. The agent is rewarded for each correct action, enabling the neural network to gradually learn a policy that guides it to make the right decisions in given situations.
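As a rough illustration of that reward-driven loop (a toy tabular Q-learning example, not Air-Guardian's actual model, environment, or reward design), an agent can learn a policy for a five-state corridor purely from rewards:

```python
import random

# Toy corridor: states 0..4; reaching state 4 yields a reward of 1.
N_STATES = 5
ACTIONS = (-1, +1)  # move left / move right

def step(state, action):
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    random.seed(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            if random.random() < epsilon:   # occasionally explore
                action = random.choice(ACTIONS)
            else:                           # exploit, breaking ties randomly
                action = max(ACTIONS, key=lambda a: (q[(state, a)], random.random()))
            nxt, reward, done = step(state, action)
            # temporal-difference update toward reward + discounted future value
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt
    return q
```

After training, the learned values favor moving right in every state: the reward signal alone has shaped a policy, which is the principle the paragraph describes.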
What sets Air-Guardian apart, however, is the LNN at its core. LNNs are known for their explainability, a feature that allows engineers to delve into the model’s decision-making process. This stands in stark contrast to traditional deep learning systems, often referred to as “black boxes” due to their inscrutable nature.
“For safety-critical applications, you can’t use normal black boxes because you need to understand the system before you can use it. You want to have a degree of explainability for your system,” Hasani said.
Hasani was part of a team that began research on LNNs in 2020. In 2022, their work on an efficient drone control system, based on LNNs, was featured on the cover of Science Robotics. Now, they are taking strides to bring this technology into practical applications.
Another significant attribute of LNNs is their ability to learn causal relationships within their data. Traditional neural networks often learn incorrect or superficial correlations in their data, leading to unexpected errors when deployed in real-world settings. LNNs, on the other hand, can interact with their data to test counterfactual scenarios and learn cause-and-effect relationships, making them more robust in real-world settings.
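A toy experiment (purely illustrative, unrelated to the paper's data or models) shows why such shortcut correlations are dangerous: a feature that merely co-occurs with the label during training looks predictive there, then collapses to chance once the correlation breaks at deployment.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# The causal feature fully determines the label.
causal = rng.integers(0, 2, n)
label = causal

# A spurious feature happens to agree with the label 95% of the time
# in the training data...
spurious_train = np.where(rng.random(n) < 0.95, label, 1 - label)
# ...but is independent of the label once conditions shift at deployment.
spurious_test = rng.integers(0, 2, n)

# A shortcut model that simply echoes the spurious feature looks strong
# in training and collapses to roughly 50% accuracy in the field.
train_acc = (spurious_train == label).mean()
test_acc = (spurious_test == label).mean()
```

A model that had latched onto the causal feature instead would keep its accuracy under the shift, which is the robustness property claimed for LNNs.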
“If you want to learn the true objective of the task, you cannot just learn the statistical features from the vision input that you’re getting. You have to learn cause and effect,” Hasani said.
AI for the edge
Liquid Neural Networks offer another significant advantage: their compactness. Unlike traditional deep learning networks, LNNs can learn complex tasks using far fewer computational units or “neurons.” This compactness allows them to operate on computers with limited processing power and memory.
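For readers curious what "liquid" means mechanically: in the team's published liquid time-constant (LTC) formulation, each neuron follows an ODE in which a bounded gate modulates the effective time constant. The single-neuron Euler-integration sketch below is a heavy simplification (parameter names and defaults are assumptions, and the real gate also depends on the hidden state; actual LNNs use many neurons, learned weights, and fused ODE solvers):

```python
import math

def liquid_neuron_step(x, inp, dt=0.01, tau=1.0, w=1.0, b=0.0, A=1.0):
    """One Euler step of a simplified liquid time-constant (LTC) neuron:

        dx/dt = -(1/tau + f(inp)) * x + f(inp) * A

    The bounded gate f scales the decay term, so the neuron's effective
    time constant shifts with its input -- the 'liquid' behavior."""
    f = 1.0 / (1.0 + math.exp(-(w * inp + b)))  # sigmoid gate in (0, 1)
    dxdt = -(1.0 / tau + f) * x + f * A
    return x + dt * dxdt
```

With a constant zero input the gate sits at 0.5 and the state settles at f·A / (1/τ + f) = 1/3; change the input and both the equilibrium and the speed of approach change, giving each neuron richer dynamics than a static activation.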
“Today, in AI systems, we see that as we scale them up, they become more and more powerful and can do like many more tasks. But one of the problems is that you cannot deploy them on an edge device,” Hasani said.
In a previous study, the MIT CSAIL team demonstrated that an LNN with just 19 neurons could learn a task that would typically require 100,000 neurons in a classic deep neural network. This compactness is particularly crucial for edge computing applications, such as self-driving cars, drones, robots and aviation. In these scenarios, the AI system must make real-time decisions and cannot rely on cloud-based models.
“The compactness of liquid neural networks is definitely helpful because you don’t have an infinite amount of compute on these cars or airplanes and edge devices,” Hasani said.
Broader applications of Air-Guardian and LNNs
Hasani believes the insights gained from developing Air-Guardian can be applied to a multitude of scenarios where AI assistants must collaborate with humans. These range from simple tasks, such as accomplishing work across several applications, to complex ones like automated surgery and autonomous driving, where human-AI interaction is constant.
“You can generalize these applications across many disciplines,” Hasani said.
LNNs could also contribute to the burgeoning trend of autonomous agents, a field that has seen significant growth with the rise of large language models. LNNs could power AI agents such as virtual CEOs, capable of making and explaining decisions to their human counterparts, aligning their values and agendas with those of humans.
“Liquid neural networks are universal signal processing systems. It doesn’t matter what kind of input data you’re serving, whether it’s video, audio, text, financial time series, medical time series, user behavior,” Hasani said. “Anything that has some notion of sequentiality can go inside the liquid neural network and the universal signal processing system can create different models. The applications can range from predictive modeling to time series to autonomy to generative AI applications.”
Hasani likens the current state of LNNs to the year 2016, just before the influential “transformer” paper was published. Transformers, built on years of prior research, eventually became the backbone of large language models like ChatGPT. Today, we are at the dawn of what can be achieved with LNNs, which could potentially bring powerful AI systems to edge devices such as smartphones and personal computers.
“This is a new foundation model,” Hasani asserts. “A new wave of AI systems can be built on top of it.”
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.