Google’s Spanner is a single database that runs across hundreds of data centers throughout the world. It’s so smart that it rapidly shifts resources during outages without human intervention and keeps everything perfectly in sync using GPS and atomic clocks.
CloudBeat 2012 assembles the biggest names in the cloud’s evolving story to uncover real cases of revolutionary adoption. Unlike other cloud events, the customers themselves are front and center. Their discussions with vendors and other experts give you rare insights into what really works, who’s buying what, and where the industry is going. CloudBeat takes place Nov. 28-29 in Redwood City, Calif. Register today!
While Spanner was introduced a few months back, a detailed profile from Wired today explores more about how Spanner works and why it’s revolutionary. Google has spent four and a half years on Spanner, and it’s hard to imagine where the project will be in a few more years.
Here are four big points that show what makes Spanner so cool:
1. Spanner replicates Google’s data across multiple data centers, and Google services can pull from these replicas as needed.
2. Because Spanner can replicate data so easily, it makes Google infrastructure more resistant to “network delays, data-center outages, and other software and hardware snafus.”
3. To get the best timing accuracy possible, Google installed GPS antennas on top of its many data centers and connected them to the millions of machines inside those centers. In case GPS fails to keep accurate time (during an extreme weather event, for example), atomic clocks in those data centers serve as a backup to keep the clocks accurate.
4. Google’s ad network (which generates the vast majority of the company’s cash) benefits greatly from Spanner. Its ad auctions need incredibly precise timestamps, especially with some auctions decided by milliseconds.
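The key idea behind points 3 and 4 is that Spanner doesn’t pretend clocks are perfect — it exposes clock *uncertainty* directly through an API (Google calls it TrueTime) and waits out that uncertainty before committing, so timestamps can order transactions globally. Here’s a minimal illustrative sketch of that idea; the class names, the `commit_wait` helper, and the 7 ms uncertainty figure are my own assumptions for illustration, not Google’s actual code or numbers:

```python
import time
from dataclasses import dataclass

@dataclass
class TTInterval:
    """An interval [earliest, latest] guaranteed to contain the true time."""
    earliest: float
    latest: float

class TrueTimeSketch:
    """Hypothetical stand-in for a TrueTime-style clock service.

    In Spanner, the uncertainty bound comes from GPS and atomic-clock
    time masters in each data center; here it's just a fixed constant.
    """
    def __init__(self, epsilon_ms: float = 7.0):
        self.epsilon = epsilon_ms / 1000.0  # illustrative bound, in seconds

    def now(self) -> TTInterval:
        t = time.time()
        return TTInterval(t - self.epsilon, t + self.epsilon)

def commit_wait(tt: TrueTimeSketch) -> float:
    """Pick a commit timestamp, then wait until it is definitely in the
    past on every clock ('commit wait'). After this returns, any later
    transaction anywhere will get a strictly larger timestamp — which is
    what lets millisecond-scale auctions be ordered unambiguously."""
    ts = tt.now().latest            # a timestamp no correct clock can dispute
    while tt.now().earliest <= ts:  # wait out the uncertainty window
        time.sleep(0.001)
    return ts
```

The design point: rather than trying to make clocks perfectly synchronized (impossible), the system bounds how *wrong* they can be and pays a short wait — roughly twice the uncertainty — to buy global ordering without cross-data-center coordination.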
Spanner isn’t quite Skynet — the self-aware artificial intelligence system in the Terminator movies — but it does show how far we’ve come at building connected systems and databases.
“When there are outages, things just sort of flip — client machines access other servers in the system,” Google software engineer Andrew Fikes told Wired. “It’s a much easier service story. … The system responds — and not a human.”
While Google Spanner is impressive, TransLattice claimed a few months back that its globally available, geographically distributed multinode database had already been available for two years. However, TransLattice’s system appears to be much smaller in scope.
And then we have Facebook building its Prism software platform, which will reimagine how “big data” scales. It’s also a little scary that Facebook knows so much about its more than 1 billion users.
Thinking about all these powerful global databases, I’ll have to consider moving into an underground bunker. I’ll climb out of the bunker briefly for CloudBeat this week in Redwood City if you want to talk more about what’s happening on the ground with cloud services and big data. (Speaking of which, CloudBeat is a great place to come anyway: We’ve got some great conversations around database technology and big data, from what NoSQL companies MongoDB and AgilOne are doing for clients like Bosch and the Church of Latter Day Saints, to how VMware is helping equip Oxford University with a database as a service.)
Terminator photo: Terminator 3: Rise of the Machines/Warner Brothers