Google has just amped up the storage component of its burgeoning public cloud by offering to store companies’ data on fast solid-state drives (SSDs), instead of more conventional (and cheaper) hard disks.
At the same time, Google is being careful to roll out the feature — persistent disks that store data for the virtual machines people rent by the minute — at a reasonable price. It will cost 32.5 cents per GB per month.
“… While other providers count each and every IOPS [input/output operations per second] and charge extra for them, SSD persistent disk includes IOPS in the base price with no extra charges or fees, making cost completely predictable and easy to model,” Google product management lead Tom Kershaw wrote in a blog post today on the announcement.
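That predictability is easy to see in a quick calculation. The sketch below assumes only the flat rate quoted above (32.5 cents per GB per month, IOPS included); the per-IOPS comparison rates are hypothetical, for illustration only, and do not reflect any specific provider's pricing.

```python
# Flat rate from Google's announcement: $0.325 per GB per month, IOPS included.
FLAT_RATE_PER_GB = 0.325


def flat_monthly_cost(gb: float) -> float:
    """Monthly cost under flat pricing: depends only on capacity."""
    return gb * FLAT_RATE_PER_GB


def metered_monthly_cost(gb: float, iops: float,
                         gb_rate: float, iops_rate: float) -> float:
    """Monthly cost under a hypothetical provider that bills IOPS separately."""
    return gb * gb_rate + iops * iops_rate


# A 500 GB volume: the flat price is fixed regardless of workload.
print(flat_monthly_cost(500))  # 162.5

# Under metered pricing (hypothetical rates), cost swings with provisioned IOPS.
print(metered_monthly_cost(500, 4000, gb_rate=0.125, iops_rate=0.10))  # 462.5
```

The point of Kershaw's claim is visible in the second function: a capacity-only bill stays constant, while an IOPS-metered bill changes whenever the workload does.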
The move exemplifies Google’s pattern of innovation in the public cloud market. Market leader Amazon Web Services has not brought solid-state drives to its block storage service. To be fair, though, Amazon has been adding solid-state storage options elsewhere in its cloud in recent months.
But Google has the luxury of coming into the market seven years after Amazon. This feature — which we’d heard was on the way — is the kind of addition that will help companies feel more confident about moving to Google’s still nascent cloud. Regular price cuts and a growing customer base should also help Google make inroads, and this improved storage tier is another step forward.
The cloud business has been growing in recent years. Gartner forecast that the public cloud services market would grow 18.5 percent year over year from 2012 to 2013, reaching $131 billion. Infrastructure as a service — the segment Google and Amazon compete in — would account for $9 billion of that, according to Gartner’s forecast.
In addition to the storage news, Google also announced today new load-balancing features — for use across multiple Google data centers — that can ensure an application won’t buckle under a ton of web requests.
“This creates a truly global service offering and lets our customers optimize their compute resources and reduce latency on a global scale,” Kershaw wrote.