To describe Max Tegmark’s career as “storied” is to do the Swedish-American physicist a disservice. He’s a professor at the Massachusetts Institute of Technology, scientific director of the Foundational Questions Institute, and cofounder of the Future of Life Institute (FLI). He’s authored more than 200 publications and developed data analysis tools for microwave background experiments. And he’s been elected a Fellow of the American Physical Society for his contributions to cosmology.
He’s also one of the foremost thinkers in artificial intelligence. In 2015, Elon Musk donated $10 million to FLI to advance research into the ethical, legal, and economic effects of AI systems. Tegmark’s latest book, Life 3.0: Being Human in the Age of Artificial Intelligence, postulates that neural networks of the future may be able to redesign their own hardware and internal structure. Later chapters explore the potential implications of “superintelligent AI,” some of which include the integration of machines and humans, altered social structures, and algorithms that watch over their creators like a benevolent king.
Thanks @elonmusk for daring to envision an awesome future of life and working tirelessly to build it! :-) https://t.co/p6J7GTA6K0
— Max Tegmark (@tegmark) August 29, 2017
Tegmark recently spoke about AI’s potential — and its dangers — at IPsoft’s Digital Workforce Summit in New York City. After the keynote address, we spoke via phone about the challenges around AI, especially as they relate to autonomous weapons and defense systems like the Pentagon’s controversial Project Maven program.
Here’s an edited transcript of the interview.
VentureBeat: You’ve written pretty extensively about where you think AI is going, and in Life 3.0, you lay out 12 different ways superintelligent AI might eventually end up fitting into our lives. I remember distinctly that one of them was a “1984-style” society and another was “AI as a conqueror.” There seems to be a lot of negativity around AI, especially from thought leaders like Peter Thiel and Elon Musk — folks who believe strongly that AI is a threat and that we should be thinking very carefully about how we implement it. So my questions for you are: What’s looking increasingly likely as far as superintelligence is concerned? What kind of form do you think it’ll take? And do you think some of the fears about it are overblown?
Max Tegmark: People often ask me if I’m for or against AI, and I ask them if they think fire is a threat and if they’re for fire or against fire. Then they see how silly the question is; of course you’re in favor of fire to keep your home warm, and against arson, right? Fire and AI are both technologies — the difference is just that AI, and especially superintelligence, is a far more powerful technology.
Technology isn’t bad and technology isn’t good; technology is an amplifier of our ability to do stuff. And the more powerful it is, the more good we can do and the more bad we can do. I’m optimistic that we can create this truly inspiring, high-tech future as long as we win the race between the growing power of the technology and the growing wisdom with which we manage it. Therein lies the rub, I think, because our old strategy for winning this race has been learning from mistakes. We discovered fire, screwed up a lot, and then invented the fire extinguisher. But with more powerful technology like nuclear weapons or superhuman AI, we don’t want to have to learn from mistakes — it’s much better to get things right the first time.
I personally feel that people like Elon Musk and Stephen Hawking have been unfairly painted as pessimists when they’re both quite optimistic people. The reason Elon Musk talks so much about [the danger of AI] is that he thinks much more about the long-term future than your average politician, who is just thinking about the next election cycle.
You begin to realize how amazing the opportunities are with AI if you do it right, and how much of a bummer it would be if we screw it up. People sometimes tell me to shut up and not talk about AI risk because they say it’s scaremongering, but it’s not scaremongering — it’s what we at MIT call “safety engineering.”
Before NASA launched the Apollo 11 moon mission, it systematically thought through everything that could go wrong when putting people on top of explosive fuel tanks and launching them to a place where no one could help them. Was that scaremongering? No! That was precisely the safety engineering that was required to ensure the success of the mission.
That’s the strategy I’m advocating for very powerful AI: thinking about what could go wrong to make sure it goes right. To me, the real threat is complacency, where we refuse to do the safety engineering and refuse to do the planning.
VentureBeat: Yeah, I think the fire extinguisher analogy you just mentioned is apt. I’m not sure that we really have one for AI, right?
Tegmark: Well, for superintelligence, we certainly don’t have one. You wouldn’t want to have an accidental nuclear war between the U.S. and Russia tomorrow and then, thousands of mushroom clouds later, say, “Let’s learn from this mistake.” It’s much better to plan ahead and avoid it going wrong in the first place. As technology evolves from rocks and sticks to nuclear technology, biotech, and ultimately extremely powerful AI, you reach a threshold where it’s not OK to make mistakes.
VentureBeat: Yeah, certainly. And not to belabor the point, but you’re a cofounder of the Future of Life Institute, and last summer you published an open letter, signed by a lot of AI and robotics researchers, coming out against autonomous weapons. But unfortunately, it seems like we’re hearing more and more about autonomous weapons. There’s the obvious example, Project Maven, the Pentagon drone project that has caused a lot of controversy inside and outside of Google, and there’s evidence that autonomous weapons are being developed in Russia, China, and India. It’s hard not to be concerned about that, right?
Tegmark: Here is an example where I’m actually more optimistic. I have a lot of colleagues who are super gloomy about this and say, “Oh, we’re screwed. Any technology that can be used militarily will be used militarily, so give up and let the arms race begin.” But that’s actually not true. We have a fantastic counterexample with bioweapons. Look at biology today. What do you mainly associate it with: new ways of curing people or new ways of killing people? The former, of course. But it didn’t have to be this way. In the late 1960s, the U.S. and Soviet Union were stockpiling bioweapons and we could’ve ended up with a horrible future — but we didn’t.
There’s a very clear red line, beyond which people think uses of biology are disgusting and they don’t want to fund them and they don’t want to work on them. My optimistic vision is that you can do exactly the same thing with AI so that 10 years from now, there’s going to be an international agreement banning certain kinds of offensive lethal weapons, and people will think of AI as a force for good in the world, not mainly as a sort of horrible, destabilizing source of assassination tools every terrorist group uses to make life miserable.
But, as with biology, it’s not a foregone conclusion. It’s going to take a lot of hard work, and that’s exactly the controversy you’re seeing now. You’re seeing a lot of AI researchers, including the ones who signed the letter at Google, say that they want to draw lines, and there’s a very vigorous debate right now. It’s not clear yet which way it’s going to go, but I’m optimistic that we can make it go in the same good direction that biology went. The pessimistic claim that we’re always screwed and can’t prevent destabilizing military abuse just isn’t true.
VentureBeat: You mentioned in this letter that even if international treaties are signed and countries like the U.S. and China can come to an understanding about appropriate and inappropriate uses of AI, there’s a chance that these tools will fall into the hands of less scrupulous leaders. Look at chemical weapons in Syria under the Assad regime, for example. Would you say that it’s simply a risk you have to take, or is there something we can do to prevent it?
Tegmark: I think there’s a great deal we can do to prevent it. If you look at biological weapons, they’re actually pretty cheap to build, but the fact of the matter is we haven’t really had any spectacular bioweapons attacks, right? It’s not because it’s impossible, but what’s really key is the stigma by itself. There’s a huge difference between people pretending to build bioweapons and not even pretending. People think it’s disgusting, they don’t want to work on it, and good luck getting venture capital funding for bioweapons. Assad in Syria got a hold of chemical weapons, true, but the stigma was so strong, the disgust was so strong that he even voluntarily gave up a bunch of them. That’s the situation you really want to be in.
And as a result, yes, there have been people killed by chemical weapons, but far fewer people have been killed by them than have been killed by bad human drivers or medical errors in hospitals — you know, the kinds of things that AI can solve. So all in all, we’re in a much, much better situation with these chemical weapons than we would have been if we had just said, “OK, it’s a human right, everybody can have chemical weapons like they can have their own gun in America.”
Lethal autonomous weapons will be cheap, but there’s a huge difference between a once-in-a-blue-moon Unabomber type who builds their own homebrew hack and sells it to people, versus a superpower that decides to mass-produce these in the millions and billions. Once these things get mass-produced just like regular firearms, it’s just a matter of time until North Korea has them, ISIS has them, and Boko Haram has them, and they’re flooding the black markets all over the world. At that point, we’re screwed, and they become like Kalashnikovs. There’s absolutely no way you can prevent terrorists from getting hold of them, no matter how much you would want to.
But if it never gets to that mass-production stage, then it will remain just a nuisance in the grand scheme of things, so that AI as a whole will be viewed as a positive thing. I really do think this is very, very feasible, and really something that can happen. But it has to happen soon, before these things become super cheap, because once arms races start, they sort of gain momentum on their own.
What AI experts are basically saying is there should be a line drawn somewhere. But on the other hand, there are plenty of uses of AI in the military which some people think are very good. Take Israel’s Iron Dome missile defense system, for example. It’s not clear exactly where to draw the line yet, but that’s the purpose of negotiations. Step one is to get all the experts in the room, with the diplomats and the politicians, and figure out where to draw the line. A lot of companies and governments have started supporting the idea. Even China said they support some kind of ban on lethal autonomous weapons, and the Pentagon has a policy which is actually quite nuanced.
I think the superpowers actually realize that they are currently top dog, and that it’s not in their interest for a new technology to come along that is so cheap that all of the local terrorist groups and rogue-state enemies can get it. America doesn’t want ISIS to get cheap autonomous weapons any more than Putin wants the Chechen separatists to have them. They’d much prefer the status quo.
VentureBeat: So switching gears for a moment: There’s been a lot of talk about bias in AI lately. Do you think it’s as big a problem as it’s being made out to be, or do you think it’s not so much an AI problem, but a data problem? Some researchers argue that we just don’t have the right corpus — that we’re training algorithms on limited data sets and that it’s skewing the results.
Tegmark: The way I see it, it’s not enough to have a technology. You have to use it wisely.
For example, if the technology is AI that’s deciding who gets probation and who does not, the wisdom is understanding what kind of data you have and knowing enough about the inner workings of your system. Look at the Northpointe system, which was widely deployed in the U.S. It was a black box whose users generally had no idea how it worked. They were horrified to learn that the system was racially biased. This is another example of why we should be proactive and think ahead of time about what could go wrong.
It’s not like that came as a surprise to anyone who had thought hard about these issues, but it was a surprise to a lot of people in the courtroom. That was a very preventable problem.
VentureBeat: I’m gonna end with one last big-picture question here. We’ve talked about all of the different countries investing a lot of money in AI and expressing an interest in fostering “AI innovation centers” — that’s certainly a phrase that’s been bandied about a lot by China. And the Modi administration in India has been talking a lot about not just defense systems and weaponry, but how it wants to create the next Silicon Valley of AI, in a sense. As someone who follows this very closely, do you think there’s a particular country that is making inroads? And if you could project a little bit, what do you think the landscape will look like 20 years or even 50 years down the line?
Tegmark: Well, to me, one of the most interesting questions isn’t what it will look like, but what we should do to make it look good. It’s clear that the U.S. and China are the two AI superpowers, although the U.K. has some interesting stuff going on with Google DeepMind …
VentureBeat: And Canada, right?
Tegmark: Canada has some really awesome stuff. But if you compare China, Canada, and others, the U.S. is the country that is investing the least per capita in basic computer science education, whereas China, for example, is investing very heavily. You get a lot of publicity for investments in the military, but it’s really important to invest in basic education. You know why the U.S. is a leader in tech today? Why is Silicon Valley in California and not in Belgium? It’s because in the ’60s, during the space race, the government invested very heavily in basic STEM education, which created a whole generation of very motivated young Americans who created technology companies.
The whole business with AlphaGo in China was that country’s “Sputnik moment,” when it realized it had to invest heavily in basic AI education and research. Here in America, I think we’re very complacent and we’re resting on our laurels. If you look at what the U.S. government gives to the National Science Foundation, it’s very lackadaisical compared to what the Chinese are investing. We’re a little bit asleep at the wheel, and if we don’t invest in education, we’re basically forfeiting the race.
You can’t just give a bunch of contracts to Boeing or something. You need to create this workforce of talented people. We live in a country right now where we still don’t even have a national science advisor appointed over a year into the presidency. The kind of leadership we can provide in the future with U.S. technology depends on how much we can invest in our students today. There have been a lot of economic studies showing that the best investment you can make in your economic future is education and basic research.
There’s an amazing opportunity to help humanity flourish like never before if we get it right with AI, and that’s why it’s really worth fighting to get it right. All of today’s greatest problems can ultimately be solved with better technology, and AI is a crucial part of that.