Hadoop founder Doug Cutting spoke yesterday at Cloud Factory in Banff, Alberta, sharing his perspective on what the future holds for big data.
“I’ve developed a trick for doing this,” he said, referring to his prognostication capabilities.
“Within facts about the present, there are clues about the future.”
- “In the future we’ll be able to store and process more data than we can now.”
- “The enterprises that will do best are those that will best leverage their technology.”
- “Not only can you afford to store more data in the future, but in many ways, you can’t afford not to.”
- “Hadoop will get better.”
- “More things will get integrated … and that trend will continue.”
- “More and more data will move out of silo systems and into central systems that provide a variety of tools running on a variety of datasets … essentially an ‘enterprise data hub.’”
In other words, the future looks very much like Hadoop, a framework for data storage and processing on clusters of commodity hardware. In a way, that’s not shocking: The founder of Hadoop thinks the future will be one that uses and extends his baby more and more.
In another way, I expected more: perhaps some discussion of, or at least an acknowledgement of, newer frameworks like Spark. Or of in-memory processing of the kind SAP HANA uses. Or of how Google seems to be moving past MapReduce, which in some sense inspired Hadoop, to Percolator and other technologies.
Prediction is actually quite simple, Cutting said.
Sure, when you’re not really stretching toward potentially startling new insights. Or when you’re only looking at your own present for your hoped-for future.