Back and forth he went: Is this car real or that car real?

Near the start of his keynote address at the Nvidia GPU Tech Conference (GTC), CEO Jensen Huang asked the audience to guess which scenes from a BMW car commercial were generated by a machine and which were recorded with a camera.

During the demonstration of AI-powered real-time ray tracing, it was unclear how many people in the Event Center at San Jose State University were fooled and how many guessed correctly. But it was a telling moment, one that demonstrated how the distortion of reality is central to Nvidia’s business strategy and is shaping the future of artificial intelligence.

Nvidia is a company like no other, manipulating the human mind’s ability to recognize reality in movies, video game environments, graphics, and even human faces. It also speeds the training of today’s AI systems, supplied part of the compute power behind the modern re-emergence of machine learning, and powers some of the fastest supercomputers on the planet.

Subjecting audiences to A/B tests and asking them what’s real may seem familiar to anyone who has followed developments since Nvidia open-sourced StyleGAN and people began to create fake cats, human faces, and even Airbnb listing photos.

This blurring of reality, hastened by progress toward more lifelike graphics, is what Bryan Catanzaro, Nvidia VP of applied deep learning research, called his dream, though he acknowledged it can be misused. Catanzaro spoke to reporters Monday to share GauGAN, a new AI system that turns simple sketches into lifelike landscape imagery.
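StyleGAN and GauGAN are far more elaborate than anything that fits in a newsletter, but the adversarial training idea underneath them can be sketched in a few lines. Below is a toy illustration, not Nvidia’s code: a one-line “generator” learns to mimic a 1-D Gaussian while a logistic “discriminator” learns to tell real samples from fakes (the distributions, learning rate, and step count are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator g(z) = a*z + b tries to mimic real data drawn from N(4, 0.5).
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c) estimates P(x is real).
w, c = 0.1, 0.0

lr = 0.01
for step in range(5000):
    z = rng.standard_normal(64)
    fake = a * z + b
    real = 4.0 + 0.5 * rng.standard_normal(64)

    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: ascend log D(fake), pushing fakes toward "real".
    d_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

samples = a * rng.standard_normal(1000) + b
print(samples.mean())  # trained to drift toward the real mean, 4.0
```

Real systems swap these scalar functions for deep convolutional networks, but the push-pull objective — and the reason outputs get steadily harder to distinguish from reality — is the same.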

Of course, AI trained to mimic an Airbnb listing or a human being can have negative outcomes, especially when you consider that the fake Airbnb listing photos were made in a few hours by Christopher Schmidt, a person with no formal training in building machine learning models.

But style transfer has positive applications beyond Prisma photo filters or deepfake GIF startup Morphin.

Take a close look at strategies adopted by two elite enterprise AI companies: Yoshua Bengio’s Element AI and Andrew Ng’s Landing AI. Both are focused on few-shot learning and transfer learning, techniques that, together with synthetic data, let models learn from far fewer real examples.

It’s a subject Element AI CEO Jean-François Gagné discussed with VentureBeat ahead of the release of the company’s first products this week.

“We hear a lot about fake news, which is like the downside, but there’s a humongous value in fake data. The ability to create high fidelity events and simulate them with lots of context is strengthening the ability to use advanced systems in a very small data environment,” Gagné said.

Generative adversarial networks, transfer learning, and techniques for training AI systems with synthetic data are being used in Nvidia’s Safety Force Field, which helps autonomous vehicles avoid crashes, as well as in Nvidia research to improve human-robot interaction.
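Nvidia’s and Element AI’s actual pipelines are far larger, but the transfer-learning idea Gagné is pointing at — reuse what a model learned on a big dataset, then train only a small new piece on a handful of examples — can be sketched in a few lines. In this toy illustration, a frozen random projection stands in for a pretrained feature extractor, and a 20-example task stands in for a small-data environment (all sizes and values here are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a pretrained feature extractor: in real transfer learning
# these weights come from a network trained on a large source dataset.
# Here they are a fixed random projection, and they stay frozen below.
W_frozen = rng.standard_normal((8, 16))

def features(x):
    return np.tanh(x @ W_frozen)  # frozen: never updated

# Tiny target task: only 20 labeled examples (a "small data environment").
X = rng.standard_normal((20, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Train only a new logistic-regression head on top of the frozen features.
head, bias = np.zeros(16), 0.0
for _ in range(2000):
    f = features(X)
    p = 1.0 / (1.0 + np.exp(-(f @ head + bias)))
    head -= 0.5 * (f.T @ (p - y)) / len(y)   # cross-entropy gradient
    bias -= 0.5 * np.mean(p - y)

preds = features(X) @ head + bias > 0
accuracy = (preds == (y > 0.5)).mean()
print(accuracy)
```

The point mirrors Gagné’s: because most of the representation is inherited, the new task needs only a sliver of labeled data — and that inherited knowledge can itself come from simulated or synthetic events.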

We don’t yet know the consequences of AI systems made to make us question the reality of Airbnb listings or online content, but there’s more to the story than malicious manipulation.

The drive to create realistic digital renderings and simulations has led Nvidia to create not only GauGAN and StyleGAN but also GPUs that power modern AI, both in datacenters and on the edge with devices like the new Jetson Nano.

As Huang’s onstage A/B test earlier this week demonstrated, whether a system that generates fake faces can convince everyone is almost beside the point. If you’re paying close attention, you can still find imperfections, but the technology will keep evolving, as a slide from Google AI’s Ian Goodfellow demonstrates.

What matters is that AI is increasingly capable of making humans question reality, and doing so may have clear positive outcomes for business and less clear, not-so-positive outcomes for society.

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI Channel.

Thanks for reading,

Khari Johnson

AI Staff Writer