
Database management is a thorn in the side of engineers everywhere, but it doesn’t have to be. That’s Google’s pitch — today during its annual Cloud Next conference in San Francisco, it announced new products and services aimed at simplifying the orchestration of transactional, operational, and analytics data stores and data warehouses.

“Moving to the cloud doesn’t have to mean starting over,” Google Cloud director of product management Dominic Preuss and product management lead Tobias Ternstrom wrote in a blog post. “At Google Cloud, we’re committed to giving our customers choices in how you run your enterprise workloads.”

First on the list is Cloud SQL for Microsoft SQL Server, which will launch later this year. When it does, it’ll allow customers to bring existing, on-premises SQL Server workloads to Google Cloud Platform (GCP) and run them in a fully managed database service that autonomously handles backups, replication, patches, updates, and more. It will preserve existing apps and data, and moreover, it’ll afford those apps access to GCP services like BigQuery for analytics and AI.

Also making its debut today is Cloud SQL for PostgreSQL version 11, which includes nifty enhancements like improved partitioning, stored procedures, and more parallelism. (PostgreSQL in Cloud SQL became generally available last year.) And starting this week, Google is rolling out Cloud Bigtable multi-region replication broadly. Every cluster in a replicated instance accepts both reads and writes, and replication can be set up automatically with the addition of one or more clusters.
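In practice, enabling replication amounts to adding a cluster to an existing Bigtable instance. A sketch with the gcloud CLI, where the instance name, cluster name, and zone are all placeholders invented for illustration:

```shell
# Adding a second cluster to an existing Bigtable instance;
# Bigtable begins replicating between clusters automatically.
# Instance, cluster, and zone names below are hypothetical.
gcloud bigtable clusters create my-cluster-europe \
    --instance=my-instance \
    --zone=europe-west1-b \
    --num-nodes=3
```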


For the uninitiated, Cloud Bigtable is Google’s NoSQL key-value and wide-column database service designed for petabyte-size workloads. At its core is Bigtable, which the Mountain View company detailed in an academic paper in 2006. (Bigtable plays a part in Google consumer-facing services like Gmail and Google Search.) It’s by and large comparable to Amazon’s DynamoDB, Microsoft’s Azure Cosmos DB (formerly DocumentDB), IBM’s Cloudant, and others, which represent a large and growing chunk of the overall database management system (DBMS) market. According to a report published by Allied Market Research, NoSQL services will generate a collective $4.2 billion in revenue by 2020, and analysts at Forrester peg the segment’s annual growth at 25 percent from 2015 to 2021.

“No matter where your users are and how much data you have, it should be easy and straightforward to manage, move and access that data when you need it,” Preuss and Ternstrom wrote. “When you’re running your workloads with managed database services, you can focus your attention on what that data can do for your business, not the underlying infrastructure.”

Lastly, Google announced Cloud Dataflow SQL (coming soon in public alpha), which lets developers build their own Dataflow pipelines using SQL and automatically determines whether a job calls for batch or stream data processing. Alongside it comes Dataflow Flexible Resource Scheduling (in beta), which offers discounted, preemptible resource pricing for batch processing jobs in exchange for flexibility in when they run.
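To give a flavor of the idea, a streaming aggregation over a Pub/Sub topic could be expressed as a plain SQL query; the fragment below is a hypothetical sketch (project name, topic name, and field names are all invented, and the alpha-stage syntax may differ):

```sql
-- Hypothetical project, topic, and column names; Dataflow SQL
-- reads streaming sources such as Pub/Sub topics directly,
-- with windowing expressed in the GROUP BY clause.
SELECT
  sales.region,
  SUM(sales.amount) AS total_sales
FROM pubsub.topic.`my-project`.`sales-events` AS sales
GROUP BY
  sales.region,
  TUMBLE(sales.event_timestamp, "INTERVAL 1 MINUTE")
```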
