Greg Brockman, cofounder of nonprofit AI research organization OpenAI, had an interest in artificial intelligence from a young age, but he didn’t come to work in the field right away. Brockman studied computer science at Harvard before transferring to MIT, where he dropped out to launch online payments platform Stripe. As a founding engineer, Brockman helped scale the business from four people to 250. But he had his heart set on another field: artificial general intelligence, or systems that can perform any intellectual task that a human can.
Brockman left Stripe to pursue a career in AI, building a knowledge base from the ground up. He reconnected with researchers he’d become friends with in college, read books about the fundamentals of machine learning, and reached out to Y Combinator president Sam Altman. Fortuitously, Altman had been thinking about starting an AI lab.
That lab grew into OpenAI, which counts Altman, Elon Musk, LinkedIn executive chair Reid Hoffman, Peter Thiel, and other titans of industry as its backers. Its stated goal is to “build safe human-level AI” and to advance the field of AI with groundbreaking research in robotics, games, and dataset generation.
Ahead of a Capitol Hill hearing on artificial general intelligence this week (which will be streamed at 10:30 a.m. Eastern on June 26), Brockman spoke with VentureBeat about recent advances in deep learning, the need for discussion and debate about AI, and ways researchers and policymakers might solve the “AI bias problem.”
Here’s an edited transcript of our interview.
VentureBeat: There was an interesting article in the New York Times recently quoting some researchers who believe we’ll have to move past deep learning if we want to achieve human-level intelligence. How far away do you think we are from it, and do you think it can be done with existing methods?
Greg Brockman: I think that the framing of that question isn’t quite right.
There are three things that power all AI systems: data, compute, and algorithms. We live in a world where it’s all about these very rare, precious labeled datasets, and now we’re starting to see models like the one [OpenAI] released for language, which are able to consume unlabeled data. They read thousands of books without requiring you to go and very carefully apply labels, and in the end they give you a system that can read the internet and these massive corpora.
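The key property of models like this is that raw text supplies its own training signal: the task is to predict what comes next, so no hand-applied labels are needed. A minimal sketch of that idea in Python (a toy count-based model for illustration only, not anything OpenAI has released):

```python
from collections import Counter, defaultdict

# Toy illustration of learning from unlabeled text: the "label" for each word
# is simply the word that follows it, so the raw text supervises itself.
# (A count-based bigram model; real language models are neural networks
# trained on vastly more data.)
corpus = "the model reads raw text and the text itself provides the training targets".split()

# Count how often each word is followed by each other word.
following = defaultdict(Counter)
for word, next_word in zip(corpus, corpus[1:]):
    following[word][next_word] += 1

def predict_next(word):
    """Return the most frequently observed continuation of `word`."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # learned from the text alone, with no manual labels
```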
The second piece we’re doing on the data front is something like [the Dota 2-playing AI] OpenAI Five. It plays 180 years’ worth of games against itself every single day, which is an example of using compute to generate data.
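The mechanism is straightforward to sketch: run two copies of the same agent against each other and treat every finished game as fresh training data, so a larger compute budget translates directly into a larger dataset. A hypothetical sketch using a toy game rather than Dota 2:

```python
import random

# Schematic of "using compute to generate data": two copies of the same policy
# play a toy game against each other, and every finished game becomes a new
# training example. This sketches the idea only; the Dota 2 system is far larger.
MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def policy():
    # Placeholder policy: uniform random. A real agent would use a learned
    # model here and keep improving from the self-play data collected below.
    return random.choice(MOVES)

def play_one_game():
    move_a, move_b = policy(), policy()
    if move_a == move_b:
        result = 0
    elif BEATS[move_a] == move_b:
        result = 1   # player A wins
    else:
        result = -1  # player B wins
    return {"moves": (move_a, move_b), "result": result}

# More compute means more games, which means more training data.
dataset = [play_one_game() for _ in range(10_000)]
print(len(dataset), "self-play games generated")
```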
On the compute front, in the past couple of months, we published a post looking at how much computing power had been thrown at deep learning models over the past six years. You can see that the compute used in the largest AI training runs has been doubling every 3.5 months, amounting to a roughly 300,000-times increase. To put that into perspective, it’s kind of like if in six years, smartphone batteries went from lasting one day to lasting 800 years.
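Those two numbers fit together with simple arithmetic; here is a rough back-of-the-envelope sanity check (not taken from the OpenAI post itself):

```python
import math

# How many doublings does a 300,000x increase imply, and how long does that
# take at one doubling every 3.5 months? (Rough sanity check only.)
increase = 300_000
doublings = math.log2(increase)   # about 18.2 doublings
months = doublings * 3.5          # about 64 months
print(f"{doublings:.1f} doublings ~ {months / 12:.1f} years")
```

About 18 doublings at 3.5 months each comes to a bit over five years, broadly consistent with the roughly six-year window Brockman describes.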
That brings us to the algorithms. Everything we’ve made so far has involved simple ideas, but are these the right simple ideas? Are they going to go further? The truth is that we haven’t hit their limits yet, so we don’t know — we don’t know whether they’ll break down … We’re in a bit of a fog, and we’re going to be in that fog until the landscape stops changing so rapidly.
VentureBeat: The idea of artificial general intelligence — so-called superintelligent AI — is concerning to some people. Part of what you’ll be testifying in front of Congress about are the potential dangers around it and how it might be used, and the importance of developing policies before we get to that point. So what’s necessary to establish, exactly, before we have intelligent systems that are unpredictable? What framework do we need?
Brockman: I think that’s the crux of it, right? This technology that we’re talking about has a potential to be the most positive thing we’ve ever created, and it’s something that [OpenAI] is thinking about very seriously.
I think it all boils down to one core idea: Artificial general intelligence has the potential to cause extremely rapid change. And when you have rapid change, it’s hard for the policy machinery and social norms — how people relate and fit into the system — to keep up.
The most important thing for governments to be doing at the policy level is developing ways to measure it [progress in AI]. The future is hard to predict partly because the present is so poorly understood. We’ve actually been working on a number of different initiatives at OpenAI — such as the AI Index — that are key to making decisions, but there are so many questions left to answer.
VentureBeat: What about specific examples? I want to get your thoughts on the ways in which AI should be restricted. I’m talking specifically about AI used in warfare and other controversial fields. Do you think we should be concerned about going down that road?
Brockman: With any technology this transformative, it’s incredibly important to be having these conversations. There isn’t an easy answer, and I don’t believe that I alone, or OpenAI alone, is the right party to answer it.
We spent quite a lot of effort on OpenAI’s charter. It’s two years’ worth of policy research aimed at figuring out what we think safe and responsible AI development is, and what it means. These are all issues that we’re engaging with, but the most important thing to me is that there are a lot of stakeholders in this conversation. It’s really important for this to be a public debate that people are having and talking about.
So you know, I think if you look at specific applications, we have regulatory bodies that are well-positioned to help ensure these technologies are useful and positive. In some cases, we’re well set up, and in other cases, not so much.
VentureBeat: There have been a lot of questions about AI ethics in the news lately. The obvious examples are [the controversial drone program] Project Maven and Amazon’s [facial recognition platform] Rekognition. Do you think it’s incumbent on companies to develop policies regarding the use of AI, or is it something they have to do jointly with the government?
Brockman: Some of them should be case by case … With a technology [like artificial general intelligence], it’s clear to me that it’s going to affect everyone, right? And if it affects everyone, you need to have a public conversation. One of the core things is that AI is going to create so much benefit — it shouldn’t be locked up in any one entity. It should be something we share in and widely distribute.
That’s one of the reasons that I think it’s so important for us to be having these Congressional hearings and for us to be having the public conversation, because right now, it’s hard to separate out the signal from the noise — even for people inside the field.
We need to home in on what’s happening, what’s driving progress, and what’s going to happen in the next few years. It’s an extremely hard problem — we have some real challenges ahead of us — but I think that by having real conversations informed by the right data, we’ll have a shot at getting it right.
VentureBeat: One last question for you. Bias in AI is an often-discussed problem, and artificial general intelligence could make the “black box” problem even worse. Are there ways we should be thinking about combatting this before we end up with systems that are completely impenetrable?
Brockman: I think that there’s a continuum of problems here, from the ones that we see with today’s systems to the ones you can start to see on the horizon with tomorrow’s systems … Even if the data is good, if these systems are set up in the wrong way — if they’re given the wrong goals, for example — they can end up behaving in surprising ways. So how do you make sure that AI systems do what you intend and operate without bias? These are all open questions that are extremely important.
Building this field of AI safety is incredibly important. One thing that’s nice is that companies have extreme incentives to get that right … But more powerful systems that take actions on our behalf are less well-explored because we can’t build those systems yet. We’re getting close — we’re starting to see it with things like OpenAI Five — but we haven’t done it, and I think it’s really important for us to get out ahead of the problem and not play catch-up.