One of the biggest limiting factors of artificial intelligence (AI) systems is that they can’t think or conceptualize the world the way humans can.
Rather than intuitively discerning patterns in chaos — the way you instantly identify a cat in a photograph — traditional AI models require in-depth descriptions of what constitutes a “cat” object and how to identify one by evaluating individual groups of pixels within the image.
Deep learning systems are starting to bypass the necessity for brute force computations, as evidenced by the landmark victory of AI program AlphaGo against an international champion of Go, a game once thought to be too intuitive and conceptual for AI to master. But a new, yet intuitively simple, leap forward in AI learning may be able to accelerate the pace of AI development even further.
Google researcher and AI expert Ian Goodfellow is working on AI that belongs to a group of “generative models,” which are designed to create images and sounds comparable to those you’d find in the real world. This is a deceptively difficult task, as AI programs must first conceptually understand what it is they’re trying to replicate, a leap forward in intuitive thinking that has historically been reserved for human beings.
Goodfellow is attempting to accomplish this using something called generative adversarial networks, or GANs, which are pairs of dueling AI algorithms designed to continuously one-up each other. For example, one AI may be programmed to generate imagery that looks realistic, while the other AI will be programmed to distinguish real images from machine-generated ones. Over time, the image generator will get better at producing realistic images, and the “judge” will get better at telling them apart.
Both AI programs utilize artificial neural networks, which are designed to mimic the process human brains use to store and recall information. Rather than following strict input-output rules, both machines build complicated networks of interconnected representations, using them to “think” more conceptually and to learn from their past mistakes.
Neural networks have existed for decades, but what makes GANs especially powerful is their ability to train without labeled data or constant human supervision. Rather than relying on a human supervisor to guide and educate them, the twin algorithms perfect themselves against each other, in a practically endless feedback loop.
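The adversarial loop described above can be sketched with a toy one-dimensional example. This is an illustrative sketch, not code from Goodfellow's work: the "generator" is a simple linear function of random noise, the "discriminator" is a logistic classifier, and both are updated with hand-derived gradient steps so the example runs with nothing but the Python standard library.

```python
import math
import random

random.seed(0)

# Toy 1-D GAN sketch (hypothetical example, not the article's system):
# real data ~ Normal(4, 1); generator G(z) = a*z + b with z ~ Normal(0, 1);
# discriminator D(x) = sigmoid(w*x + c) estimates the probability x is real.

def sigmoid(t):
    t = max(-60.0, min(60.0, t))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-t))

def train_gan(steps=8000, lr=0.005):
    a, b = 1.0, 0.0   # generator parameters
    w, c = 0.1, 0.0   # discriminator parameters
    for _ in range(steps):
        x = random.gauss(4.0, 1.0)  # real sample
        z = random.gauss(0.0, 1.0)  # latent noise
        g = a * z + b               # fake sample

        # Discriminator step: ascend log D(x) + log(1 - D(g)),
        # i.e. get better at telling real from fake.
        d_real, d_fake = sigmoid(w * x + c), sigmoid(w * g + c)
        w += lr * ((1 - d_real) * x - d_fake * g)
        c += lr * ((1 - d_real) - d_fake)

        # Generator step: ascend log D(g) (non-saturating loss),
        # i.e. get better at fooling the discriminator.
        d_fake = sigmoid(w * a * z + w * b + c)
        a += lr * (1 - d_fake) * w * z
        b += lr * (1 - d_fake) * w
    return a, b, w, c

a, b, w, c = train_gan()
gen_mean = sum(a * random.gauss(0, 1) + b for _ in range(2000)) / 2000
print(round(gen_mean, 2))  # the generated mean drifts toward the real mean of 4
```

The two updates pull in opposite directions: the discriminator's gradient rewards separating the distributions, while the generator's gradient rewards closing the gap, which is the "continuous one-upping" dynamic in miniature.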
It sounds cool in theory, but what practical uses could these dueling robots possibly have?
- Astronomical simulations. Astronomers are already starting to take advantage of GANs to simulate galaxies and star systems whose appearance is warped by gravitational lensing. Gravitational lensing is a distortion effect produced when massive objects — including the unseen mass attributed to “dark matter” — bend light on its way to us, but because we can’t always separate lensing from other environmental influences, it’s hard to learn more about dark matter from real-world images alone.
- Medical diagnoses and understanding. GANs could also be used to generate and better understand medical information, such as medical histories, current symptoms, and complicating factors in a diagnosis. And because GANs can produce realistic synthetic records rather than exposing real ones, they could generate useful training data without infringing on patient privacy.
- Interpreting large data sets. GANs can be used to interpret almost any conceivable set of large data as well, modeling and interpreting it across millions of potential iterations. Eventually, they’d become better data analysts than their human counterparts — or at least, that’s the idea.
The goal: Understanding the world
Of course, the potential applications for GAN technology extend far beyond these currently available uses. The main goal here is to create AI that’s capable of understanding the world, rather than just reacting to it. We already have AI and computing power capable of automating simple and tedious tasks, but it’s the high-level thinking, creativity, and even curiosity of the human mind that we need to mimic if we’re going to start solving bigger problems.
Is it dangerous?
There are a handful of fears related to GAN technology. The first, and more threatening, is the technology’s capacity to create realistic but artificial depictions of the world; imagine a cybercriminal being able to forge images of real people in compromising situations. Keeping the technology out of the wrong hands would be exceptionally important, but how could we judge which are the wrong hands and which are the right ones?
It’s also possible that this recursive, self-learning process could end up teaching each bot the wrong lessons about reality. Because they aren’t supervised, they could start down an erroneous path and waste time following it until the entire system becomes useless.
GANs aren’t a perfect system, but their potential applications and their ability to speed up AI development make them more than worth exploring, at least according to Google engineers. In any case, we’ve taken another step toward AI that can continuously refine and improve itself without any input from humans.
Larry Alton is a freelance writer covering artificial intelligence.