
“I’m sorry, Dave. I’m afraid I can’t do that.”

This famous quote from “2001: A Space Odyssey” helped make the idea of a sentient computer a real thing in the minds of people other than computer scientists and science fiction writers. Of course, we’d probably all prefer a more benevolent version along the lines of Star Trek’s LCARS computer. Though this concept has always been the domain of movies and books, we actually aren’t that far away from truly smart computers. Or at least we shouldn’t be.

Building a system that’s reactive to requests and the environment around it — or intelligent agents, as they’re called in the artificial-intelligence community — isn’t that far off. Consumers and businesses already have crude versions of this with Apple’s Siri, Microsoft’s Cortana, and IBM’s Watson. But as cool as those things are, they’re pretty much just databases and search tools attached to voice recognition software. Truly intelligent agents — intelligent personal assistants — need to be able to handle deduction, reasoning, and conversing. Imagine trying to have the following conversation with Siri:

“I’m thirsty.”

“OK, do you want a hot or cold drink?”

“Cold.”
“OK, sweet or savory?”

“Sweet.”
“OK, carbonated or not?”

“Not carbonated.”
“How about Gatorade?”

Ask Siri that (I just did) and you get the location of a local coffee shop. Useful, but not what you asked for: you can’t really trust that the software truly understood the meaning of your need. Building a real intelligent agent that could understand multiple, complex queries — even for the simple task of being a personal assistant — is an enormous undertaking. It requires mapping speech patterns and vocabulary for every applicable language, then writing algorithms that account for hundreds of thousands (or millions) of possible actions or requests. Perhaps there are general-purpose solutions covering large swaths of these actions, but those have eluded researchers for decades. Either way, it’s a huge job.
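At its simplest, the narrowing conversation above is a search over a space of possibilities, with each answer pruning candidates until one suggestion remains. A minimal sketch of that idea (purely hypothetical: the drink table and the `suggest` function are illustrative assumptions, not any real assistant’s architecture or API):

```python
# Hypothetical knowledge base: each drink described by a few attributes.
DRINKS = {
    "coffee":   {"temp": "hot",  "taste": "savory", "carbonated": False},
    "cocoa":    {"temp": "hot",  "taste": "sweet",  "carbonated": False},
    "cola":     {"temp": "cold", "taste": "sweet",  "carbonated": True},
    "gatorade": {"temp": "cold", "taste": "sweet",  "carbonated": False},
}

def suggest(temp, taste, carbonated):
    """Return the drinks matching every preference stated so far."""
    return [name for name, traits in DRINKS.items()
            if traits["temp"] == temp
            and traits["taste"] == taste
            and traits["carbonated"] == carbonated]

# The dialogue in the article: cold, sweet, not carbonated.
print(suggest("cold", "sweet", False))
```

The gap between this toy and a real assistant is the whole problem: here the attributes, vocabulary, and candidate actions are hand-enumerated, whereas a true intelligent agent would have to infer them from open-ended speech across millions of possible requests.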

Naturally, this seems like a task for large companies like Google, IBM, Microsoft, and Apple as they’re seemingly the only organizations with the resources and the collective talent to tackle a problem that big. After all, Google previously took on mapping the earth, Microsoft popularized the operating system, and Apple completely revolutionized human-computer interaction. All really big problems, all really big products. And that’s the rub.

Computing’s really hairy, complex problems of tomorrow won’t be solved in Mountain View, Redmond, or Cupertino — at least not at those three companies. Big companies have a responsibility to make products, not to solve problems, especially incredibly complex ones. This isn’t a bad thing, and it doesn’t mean those companies don’t or can’t innovate, because they undoubtedly do. A focus on building products, however, means there isn’t the time or the will to tackle things that take years or decades to build. And that’s fine. You’d also be hard-pressed to find many startups eschewing viable products in favor of tackling big problems.

While some companies invest time and money in these long-term “moonshot” problems, it’s not clear that a single organization can scale the human complexity required to build truly useful intelligent agents (an achievement I casually define as beyond today’s brittle, weak artificial intelligence, but well short of strong artificial intelligence). These organizations have too many other commitments beyond solving problems, and too many other interested parties influencing how they run their businesses: shareholders, their bosses, the board, the media, and even the public at large.

Our huge advantage today in tackling these massively complex problems, one we didn’t have even five years ago, is the sheer volume of resources available to everyone on the planet, and the interconnectivity of the information created with those resources. Almost anyone who wants it can cheaply and quickly access incredible computing power through things like open-source databases and applications, Amazon Web Services, and cheap storage. They can also openly and freely publish their work online for anyone to use, on blogs, social media, or GitHub.

In their 2014 book “The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies,” authors Erik Brynjolfsson and Andrew McAfee argued that these resources and ability to quickly share knowledge will drive a new period of innovation: “The second machine age will be characterized by countless instances of machine intelligence and billions of interconnected brains working together to better understand and improve our world.”

This type of collective effort will begin to define innovation and allow humankind to take its next massive technological leaps forward. Those leaps won’t come from an innovation lab inside a company whose energy and focus are on building products and satisfying shareholders, but rather from the collective work of many companies and individuals. A global brain, rather than a Google brain.

Phillip Nelson is the director of product management for Quixey, where he’s working on figuring out deep search. He’s also a co-founder of ShopTap.
