I usually cringe whenever Stephen Hawking’s name comes up in a conversation about AI. If the world were divided into critics and believers, Hawking would certainly fall into the AI critic column, but slotting him there ignores a great deal of nuance in his position on artificial intelligence. With his death earlier this week, I fear people will remember only which side he was on and miss his thoughtful perspectives on what specific dangers could lie ahead.
To be clear, Hawking was no great fan of general artificial intelligence. He repeatedly said that a superintelligent AI could spell the end of humanity. His argument was fairly straightforward: A superintelligence would be able to pursue its goals incredibly competently, and if those goals weren’t aligned with humanity’s, we’d get run over.
“You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants,” he wrote in a 2015 question-and-answer session on Reddit. “Let’s not place humanity in the position of those ants.”
In his public remarks, Hawking also warned about AI being used as a tool of oppression — empowering the few against the many and deepening already existing inequality. And yes, he did warn about AI-based weapons systems.
His arguments didn’t seem to stem from a belief in malicious AI systems, but rather from a concern about radically indifferent ones that wouldn’t wield their power beneficially.
But at the same time, he was optimistic that AI could be the best thing ever to happen to humanity, if it were built correctly to benefit us. His advocacy came from a belief that it was possible to develop a set of best practices that would lead to the creation of beneficial AI.
There’s one other important component to Hawking’s AI criticism: a healthy skepticism toward those who predict the arrival of superintelligence within a particular time frame.
“There’s no consensus among AI researchers about how long it will take to build human-level AI and beyond, so please don’t trust anyone who claims to know for sure that it will happen in your lifetime or that it won’t happen in your lifetime,” he wrote.
When superintelligent AI does arrive, here’s hoping we actually remember what he had to say.
For AI coverage, send news tips to Blair Hanley Frank and Khari Johnson, and guest post submissions to Cosette Jarrett — and be sure to bookmark our AI Channel.
Thanks for reading,
Blair Hanley Frank
AI Staff Writer
P.S. Please enjoy this video of Stephen Hawking’s appearance at Web Summit last year:
https://vimeo.com/244197491
From the AI Channel
Investors share their predictions for AI and machine learning in 2018
Over the past three years, building intelligent apps — apps powered by machine learning that continuously improve as they ingest new data — has become easier and easier. Given the continued rise of machine learning, where are venture capitalists looking for the next set of investment opportunities? Generally, we see the core machine learning tools and […]
Stephen Hawking’s TV show is now free to watch online
Stephen Hawking’s Favorite Places, a three-part TV series that debuted in January, has been made available to watch online for free in honor of Hawking’s life, according to tech and science video streaming site CuriosityStream. The renowned astrophysicist passed away today in Cambridge, England. He was 76. The final episode, which had still not been released, was also published […]
Apple’s Siri reportedly hurt by flawed code, scalability, and management
A new report today from The Information (paywalled, via 9to5Mac) discusses the troubled seven-year history of Apple’s digital assistant Siri, describing it as “a problem” for Apple, and blaming it for the underwhelming performance of Apple’s smart speaker HomePod. Contrasting somewhat with a Siri cofounder’s recent claim that Apple expected Siri to be too versatile, […]
Google Cloud chief scientist: ‘AI doesn’t belong to just a few tech giants in Silicon Valley’
Silicon Valley may be behind much of the development of AI in the modern world, but it’s vital that everyone feel included in the technology, said Fei-Fei Li, Google Cloud chief scientist for AI. “It’s time to bring AI together with social science, with humanities, to really study the profound impact of AI to our […]
Fetch Robotics CEO: ‘Douchebaggery’ is ruining engineering
Debate surrounding the lack of diversity in tech has led people to point fingers in a lot of directions. Some say the educational pipeline is the culprit. Others cite racism, sexism, or deeply embedded negative cultural practices. Melonee Wise, CEO of warehouse robot company Fetch Robotics, says a big part of the problem is that […]
Google Empathy Lab founder: AI will upend storytelling and human-machine interaction
In an echo of studies that say more than half of U.S. households will have a smart speaker in the coming years, the 2018 Tech Trends report released today at SXSW predicts that by 2021 more than half of all computing in developed nations will be performed with voice. To deal with this massive shift […]
Beyond VB
AI has a hallucination problem that’s proving tough to fix
Tech companies are rushing to infuse everything with artificial intelligence, driven by big leaps in the power of machine learning software. But the deep-neural-network software fueling the excitement has a troubling weakness: Making subtle changes to images, text, or audio can fool these systems into perceiving things that aren’t there. (via Wired)
YouTube, the great radicalizer
At one point during the 2016 presidential election campaign, I watched a bunch of videos of Donald Trump rallies on YouTube. I was writing an article about his appeal to his voter base and wanted to confirm a few quotations. (via The New York Times)
When an AI finally kills someone, who will be responsible?
Here’s a curious question: Imagine it is the year 2023 and self-driving cars are finally navigating our city streets. For the first time one of them has hit and killed a pedestrian, with huge media coverage. A high-profile lawsuit is likely, but what laws should apply? (via MIT Tech Review)
Burger-flipping robot taken offline after one day
The robot was installed at a CaliBurger outlet in Pasadena and replaced human cooks. But after just one day at work the robot was taken offline so it can be upgraded to work faster. Its human helpers are also getting extra training to help the robot keep up with demand. (via BBC News)