

OpenAI conducts an enormous amount of research in AI subfields from computer vision to natural language processing (NLP). The San Francisco-based firm — which was cofounded by CTO Greg Brockman, chief scientist Ilya Sutskever, and others with $1 billion in backing from luminaries like LinkedIn cofounder Reid Hoffman and Sam Altman — last year detailed an AI robotics system capable of human-like dexterity. The capped-profit company’s Dota 2 bot recently defeated 99.4% of players in public matches and a team of professional players twice, and its most sophisticated NLP model can generate convincingly humanlike short stories and Amazon reviews from whole cloth.

Unsurprisingly, OpenAI has learned a great deal in the roughly three and a half years since its inception. At VentureBeat’s Transform 2019 conference, Brockman and Sutskever touched on advances in hardware, transparency in AI, and the question of responsible disclosure.

Brockman said the uptick in raw compute has been the single most important driver of AI advances in the past seven years. “The amount of compute that’s been going into these models has increased by a factor of 10 each year since 2012,” he said. “It’s a little like if your cell phone battery, which today lasts for a day, five years later lasts for 800 years and another five years later lasts for 100 million years.”
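
Taken at face value, that rate compounds quickly: tenfold growth per year works out to roughly a ten-million-fold increase between 2012 and 2019. A minimal back-of-the-envelope sketch in Python, assuming the quoted yearly factor holds exactly, illustrates the scale:

```python
# Back-of-the-envelope sketch of the trend Brockman describes: compute for
# the largest training runs growing roughly tenfold per year since 2012.
# The yearly factor is the quoted figure; the rest is simple arithmetic.
BASE_YEAR = 2012
FACTOR_PER_YEAR = 10

for year in range(BASE_YEAR, 2020):
    growth = FACTOR_PER_YEAR ** (year - BASE_YEAR)
    print(f"{year}: ~{growth:,}x the {BASE_YEAR} compute")
```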

That’s obviously exciting, added Brockman, but fraught with peril. He’s not the only one who thinks so. Last September, members of Congress sent a letter to Director of National Intelligence Dan Coats requesting a report from intelligence agencies on the potential impact on democracy and national security of deepfakes, AI-generated videos that digitally graft one person’s face onto another’s body. And during a congressional hearing in late 2018, lawmakers questioning Facebook COO Sheryl Sandberg and Twitter CEO Jack Dorsey expressed similar concerns about manipulative deepfake videos.


“You have to ask, what are the risks? How can this go wrong?” said Brockman. “How can we make sure that we’re applying the right ethics and building this technology in the right way?”

That’s why OpenAI decided last year not to release the corpus used to train the aforementioned NLP model, three of the four trained language models, or the training code. At the time, the company said it believed that making the full toolset available might open the door to abusive behavior by bad actors.

According to Brockman, one goal of the decision was to stimulate discussion in the AI community about responsible publication. He believes the worst-case scenario is the release of a model whose catastrophic effects force a reactionary response.

“We created an AI system with capabilities that really surprised us, and it was hard for us to assess what it’d be used for and what the limits of it should be,” said Brockman. “The argument that really tipped it for us … is that as this technology progresses, we’re going to have models with dual-use applications. They can have amazing [uses] and do great things, but only if they’re used in the right way.”

He added: “You really need to have a dry run. That’s why it was really important to us that we [developed] a norm for how you can not share.”

Explainability is perhaps the key. AI systems that can justify their predictions — systems which OpenAI is actively developing — could help pull back the curtain on particularly opaque architectures, said Brockman. In March, OpenAI and Google open-sourced a technique that lays bare the component interactions within image-classifying neural networks. They call the visualization an activation atlas, and they say it’s intended to illustrate how those interactions shape the model’s decision-making.
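
Conceptually, an activation atlas summarizes a model’s activation space rather than a single input: collect hidden-layer activations for many images, project them to 2D, and summarize each grid cell. The sketch below is illustrative only, not the released code; it uses random stand-in activations and PCA in place of the UMAP projection described in the published work, and omits the feature-visualization rendering step.

```python
# Illustrative sketch of the activation-atlas pipeline (stand-in data only):
# gather per-image activation vectors, lay them out in 2D, bin them into a
# grid, and average the activations in each cell. In the real technique,
# each cell's mean vector is then rendered with feature visualization.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Stand-in for activation vectors from one hidden layer of a vision model.
activations = rng.normal(size=(5000, 256))

# 2D layout of activation space (the published work uses UMAP; PCA keeps
# this sketch dependency-light).
coords = PCA(n_components=2).fit_transform(activations)

# Assign each point to a cell in a coarse grid over the 2D layout.
grid = 16

def to_index(axis):
    span = np.ptp(axis) + 1e-9
    return np.clip(((axis - axis.min()) / span * grid).astype(int), 0, grid - 1)

x_idx, y_idx = to_index(coords[:, 0]), to_index(coords[:, 1])

# Average the activation vectors that land in each cell; each mean vector
# would correspond to one tile of the atlas.
atlas = np.zeros((grid, grid, activations.shape[1]))
counts = np.zeros((grid, grid))
for vec, xi, yi in zip(activations, x_idx, y_idx):
    atlas[xi, yi] += vec
    counts[xi, yi] += 1
atlas /= np.maximum(counts, 1)[..., None]

print("non-empty atlas cells:", int((counts > 0).sum()))
```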

“Explainability in neural networks is an incredibly important question, because as neural networks become smarter, it would become preferable to understand why they make particular predictions,” said Sutskever. “What I would expect to see longer-term, as this work advances, is that we apply [explainability] tools to language models and models in other domains, and that we can use the model’s language abilities to explain its decisions to us. That will be very useful.”

Commonsense reasoning might be another piece of the puzzle. Brockman and Sutskever are leading a new team at OpenAI aptly dubbed the Reasoning Team, with the aim of imbuing machine learning models with the ability to reason through and solve tasks they can’t solve today.

“There’s this myth that neural nets are a black box,” said Brockman. “The hope is that we can understand why it’s making the decisions that it is, and ensure that it’s actually going to do what we think it’s going to do, and we can then ensure that it’s actually used to benefit people.”

Ultimately, it’ll require industry collaboration. To this end, OpenAI today published a blog post and accompanying paper outlining strategies — like increasing transparency about investments and committing to higher standards — that can be used to improve the likelihood of long-term industry cooperation on safety norms in AI. The coauthors posit that this sort of collaboration will be “instrumental” in ensuring AI systems are both safe and beneficial, particularly in competitive environments that could cause companies to under-invest in safety.

“I view what [we’re doing] as a first step toward informing community norms, and my hope is that they’re now starting to fall into place,” said Brockman.
