Who really has control over the bots? In a recent interview, Bill Gates (in a bit of a confrontational mood) and Microsoft chief executive Satya Nadella (who seemed more even-keeled than ever) both argued that humans will maintain control over the technology, that we will build bots and use AI in ways that benefit humanity, and that the imminent threat of AI is overplayed: it won’t happen anytime soon, if at all.
In the interview, Gates specifically mentions Musk:
“The so-called control problem that Elon [Musk] is worried about isn’t something that people should feel is imminent. This is a case where Elon and I disagree. We shouldn’t panic about it. Nor should we blithely ignore the fact that eventually that problem could emerge,” said Gates.
Nadella took the bait as well.
In his estimation, AI will certainly expand, but only as far as humans choose to expand it while keeping it under control.
“The core AI principle that guides us at this stage is: How do we bet on humans and enhance their capability? There are still a lot of design decisions that get made, even in a self-learning system, that humans can be accountable for. So we can make sure there’s no bias or bad data in that system. There’s a lot I think we can do to shape our own future instead of thinking, ‘This is just going to happen to us.’ Control is a choice. We should try to keep that control,” he said.
Musk has become known for Twitter posts calling AI a greater danger than a North Korean nuclear strike, among many other moments of angst. He has come close to saying that the machines will subdue us, although his real goal seems to be establishing guidelines.
As you may already know, I happen to agree with the camp that says AI will do mostly what we program it to do.
This is what many AI engineers and robotics experts say: you program a bot to handle an order for flowers or candy, and it doesn’t jump over that coding wall and start messing with your home heating system or send your car into a lake. The code behaves because it can’t do anything except what a human asked it to do. The constraint is imposed by humans, not chosen by the machine. The Terminator is a Hollywood invention.
I’m also in the camp that says we need to keep an eye on things, as Gates also mentioned. Bots in our cars and in our homes can carry out commands that are highly complex and interconnected. I’m about to test a voice-controlled garage door opener in my house that can open the door and then trigger the lights, the heating system, and maybe even the oven in the kitchen as well. As humans, we’re “programmed” to think about one thing at a time, and I’m thankful for that. We have what’s called sustained attention, which lets us block out distractions.
Bots can think about 100 different things at once. It’s not a big programming challenge to ask that garage door to activate other connected home appliances and other gear in my house. The “let’s keep an eye on this” problem comes into play when we have, say, a bot that triggers a thousand different things all at once — and we forget which devices are even connected and what they do.
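That fan-out problem can be sketched in a few lines of code. The snippet below is a toy illustration, not any real smart-home API: the device names and the trigger table are entirely hypothetical. The point is simply that once triggers chain into other triggers, one command touches devices the owner may have forgotten were connected.

```python
# Toy sketch of a smart-home trigger chain (all devices hypothetical).
# One command fans out through a trigger table; tracing what actually
# fires is the "let's keep an eye on this" problem described above.

# Each action triggers the listed follow-on actions (hypothetical wiring).
TRIGGERS = {
    "garage_door.open": ["hall_lights.on", "thermostat.heat"],
    "thermostat.heat": ["oven.preheat"],  # a chain the owner may forget
}

def fire(action, log=None):
    """Run an action and, recursively, everything wired to it."""
    if log is None:
        log = []
    log.append(action)  # in a real system: send the device command
    for follow_on in TRIGGERS.get(action, []):
        fire(follow_on, log)
    return log

# One voice command ends up touching four devices:
print(fire("garage_door.open"))
# → ['garage_door.open', 'hall_lights.on', 'thermostat.heat', 'oven.preheat']
```

Scale that table to a thousand entries and the owner no longer has a mental model of what a single command does — which is exactly the vigilance problem.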
So AI guidelines would help. Staying vigilant is a good idea. Musk is somewhat correct. The Microsoft luminaries are also somewhat correct.
It’s “both, and” here. AI will not kill you … yet.