Google today unleashed a series of updates to its data and AI platforms to help companies more efficiently harness the power of data and drive innovation.
The announcements, made at the virtual Google Cloud Data and AI Summit, included a new approach to running BigQuery, Google’s serverless data warehouse. The company said that BigQuery Editions would give customers more flexibility to operate and scale their data workloads. Google also unveiled data clean rooms, a service to keep data separate and anonymous.
In addition, Google launched AlloyDB Omni, a database service that handles transactions and analytics. First announced in May 2022, AlloyDB is a managed cloud database that is based on the open source PostgreSQL relational database.
Rather than just being focused on transactional workloads — which is what PostgreSQL supports by default — AlloyDB also has capabilities to support analytics workloads. To date, AlloyDB has only been available as a service running in the Google Cloud. That will change with AlloyDB Omni, which will provide organizations the ability to run the database wherever they want.
New tools driven by customer demand
Rounding out Google’s new product announcements is the Looker Modeler service. Looker is a business intelligence (BI) technology that Google acquired for $2.6 billion in 2020. The Modeler service gives organizations a new way to define and access business metrics.
In a press briefing, Gerrit Kazmaier, Google Cloud GM and VP of data analytics, noted that the new updates are driven by customer requests.
“One was an increased need for flexibility specifically now in the current year with all of its challenges,” said Kazmaier. “They’re asking for help to optimize for both their predictable and unpredictable data needs.”
BigQuery gets smart about scaling
Paying only for what you use is an original promise of the cloud. It’s a promise that Google is helping to deliver on with the BigQuery Editions update.
Kazmaier said that BigQuery Editions offers multiple service tiers, each with a different set of capabilities, for customers to choose from. Organizations can also mix and match tiers for individual workloads.
The new flexibility that BigQuery Editions provides is enabled by a few underlying infrastructure enhancements from Google for storage and auto-scaling. Kazmaier explained that BigQuery compressed storage provides access to data in a highly compressed format using a proprietary multistage compression process. The end result is that organizations will be able to store more data at lower cost.
New auto-scaling capability
The flexibility provided by BigQuery Editions is also enabled by a new auto-scaling capability for workloads. Kazmaier noted that Google built a new resource scheduler as part of the BigQuery Editions infrastructure for query planning and execution. He explained that a query can acquire compute resources on the fly as it processes operations.
Kazmaier also provided an update on the BigQuery ML service, which first became available in 2019. BigQuery ML integrates the data warehouse with machine learning (ML), so that organizations can use their own data for AI model development.
Over the last year, Kazmaier said, Google has increased its focus on making ML accessible at scale and on helping organizations connect it with their own data. A day ahead of the summit, on March 28, Google announced an incremental update to BigQuery ML that allows inference to be run against remotely hosted models, not just models that are directly integrated with the BigQuery service.
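BigQuery ML is driven from SQL inside the warehouse. As a rough illustration of the remote-inference pattern (every identifier below — project, dataset, connection, and endpoint — is a hypothetical placeholder, and the exact statement options may differ from what Google ships):

```python
# Sketch: building BigQuery ML statements for a remotely hosted model.
# All names here (project, dataset, connection, endpoint) are hypothetical;
# the strings would be submitted through a standard BigQuery client.

def create_remote_model_sql(model: str, connection: str, endpoint: str) -> str:
    """Build a CREATE MODEL statement that points at a remotely hosted model."""
    return (
        f"CREATE OR REPLACE MODEL `{model}` "
        f"REMOTE WITH CONNECTION `{connection}` "
        f"OPTIONS (ENDPOINT = '{endpoint}')"
    )

def predict_sql(model: str, table: str) -> str:
    """Build an ML.PREDICT query that runs inference without moving the data."""
    return f"SELECT * FROM ML.PREDICT(MODEL `{model}`, TABLE `{table}`)"

if __name__ == "__main__":
    print(create_remote_model_sql(
        "my_project.my_dataset.remote_model",   # hypothetical model path
        "us.my_vertex_connection",              # hypothetical connection
        "https://example-endpoint",             # hypothetical endpoint
    ))
    print(predict_sql("my_project.my_dataset.remote_model",
                      "my_project.my_dataset.new_rows"))
```

In practice these statements would be executed with a client such as the `google-cloud-bigquery` library’s `Client.query()`; the point of the update is that the model behind `ENDPOINT` no longer has to live inside BigQuery itself.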
Google breaks AlloyDB out of its cloud
A cloud database like AlloyDB, by definition, will typically only reside in the cloud, but that’s not always what organizations want or need.
During the press briefing, Andi Gutmans, VP and GM of Databases at Google, commented that many organizations want to run databases in different clouds, and some still need to run on-premises. Some users also fear that a technology available only on a single cloud provider creates lock-in risk. The AlloyDB Omni database is an effort to answer that challenge by letting users run the database wherever they want.
This isn’t the first time that Google has unshackled one of its data technologies from its own cloud platform. In 2021, Google launched BigQuery Omni, which enables data queries to be run across multiple cloud providers. While BigQuery Omni enables multicloud support, AlloyDB Omni goes a step further by allowing users to download a full container image of the database. The container can run in any environment that supports containers, whether on-premises or in another cloud.
The idea of removing the fear of lock-in also extends to Google’s views on the open source foundation of AlloyDB Omni, which is the PostgreSQL database.
“We want customers to be able to run on any PostgreSQL, whether that is AlloyDB or without us,” said Gutmans. “With any work that we do, including differentiated work, our goal is to really make sure that there is compatibility out there.”
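One practical consequence of that compatibility pledge: because AlloyDB speaks the PostgreSQL wire protocol, application code written against a standard PostgreSQL driver should work unchanged whether it targets the managed service or a self-hosted Omni container — only the connection details differ. A minimal sketch (hostnames, database names, and users below are hypothetical; a real driver such as psycopg2 would consume this DSN):

```python
# Sketch: PostgreSQL compatibility means one connection recipe serves
# both managed AlloyDB and a self-hosted AlloyDB Omni container.
# All hosts, ports, and credentials are hypothetical placeholders.

def libpq_dsn(host: str, dbname: str, user: str, port: int = 5432) -> str:
    """Build a standard libpq-style keyword/value DSN usable by any
    PostgreSQL driver (psycopg2, JDBC via URL translation, etc.)."""
    return f"host={host} port={port} dbname={dbname} user={user}"

# Managed AlloyDB instance in Google Cloud (hypothetical private IP).
managed = libpq_dsn("10.0.0.5", "appdb", "app_user")

# AlloyDB Omni container running on-premises or in another cloud.
omni = libpq_dsn("localhost", "appdb", "app_user")

# The application code is identical in both cases; only the host changes.
```

This is the portability Gutmans is describing: switching between AlloyDB, AlloyDB Omni, or vanilla PostgreSQL becomes a configuration change rather than a code change.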
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.