At today’s Google Cloud Applied ML Summit, the search and enterprise IT giant revealed a new group of product features and technology partnerships designed to help users create, deploy, manage and maintain machine learning (ML) models in production faster and more efficiently. 

The company’s AI development environment, Vertex AI, launched a year ago at the Google I/O 2021 conference, is the home base for all of the updates. It is a managed ML platform designed to help developers speed up the deployment and maintenance of their AI models.

Google’s Prediction Service 

A central new addition to Vertex AI is its Prediction Service. According to Google Vertex AI Product Manager Surbhi Jain, its features include the following:

  • Prediction Service, a new integrated component of Vertex AI: “When users have a trained machine learning model and they are ready to start serving requests from it, that is where it comes into use. The idea is to make it absolutely seamless to enable safety and scalability. We want to make it cost-effective to deploy an ML model in production, irrespective of where the model was trained,” Jain said. 
  • A fully managed service: “The overall cost of service is low because Vertex AI is a fully managed service. That means we alleviate the ops burden on you. Seamless auto-scaling reduces the need to over-provision hardware,” Jain said. 
  • A variety of VM and GPU types: Prediction Service lets developers pick the most cost-effective hardware for a given model. “In addition, we have many proprietary optimizations in our backend that further reduce cost as opposed to open source. We also have deep integrations that are built with other parts of the platform,” Jain said.
  • Out-of-the-box logging in Stackdriver: along with built-in request-response logging in BigQuery, prebuilt components deploy models from pipelines on a regular basis, Jain said. “What comprises a prediction service is also intelligence and assertiveness, which means we offer capabilities to track how the model is doing once it is deployed into production, but also understand why it is making certain predictions,” Jain said. (For context: Google Stackdriver is a freemium, credit-card-required cloud systems management service that provides performance and diagnostics data to public cloud users.)
  • Built-in security and compliance: “You can deploy your models in your own secure perimeter. Our PCSA integration controls access to your endpoints, and your data is protected at all times. Lastly, with private endpoints, Prediction Service introduces less than two milliseconds of overhead latency,” Jain said.

More capabilities and tools

Other new capabilities recently added to Vertex AI include the following: 

  • An optimized TensorFlow runtime was released in public preview; it serves TensorFlow models at lower cost and lower latency than open-source prebuilt TensorFlow Serving containers. The optimized runtime lets users take advantage of some of the proprietary technologies and model-optimization techniques used internally at Google, Jain said.
  • Google also launched custom prediction routines in private preview, making pre-processing the model input and post-processing the model output as easy as writing a Python function, Jain said. “We’ve also integrated it with Vertex SDK, which allows users to build their custom containers with their own custom predictors, without having to write a model server or having significant knowledge of Docker. It also lets users test the built images locally very easily. Along with this, we also launched support for co-hosting TensorFlow models on the same virtual machine. This is also in private preview at the moment,” Jain said. 
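The appeal of custom prediction routines is that pre- and post-processing become plain Python methods rather than a hand-written model server. The sketch below illustrates the general pattern only; the class and method names are hypothetical stand-ins, not Google’s exact Vertex SDK interface, and the toy model replaces a real trained model.

```python
# Illustrative sketch of a custom prediction routine: pre-processing the
# request and post-processing the output as plain Python methods.
# All names here are hypothetical, not the actual Vertex SDK interface.

class ToyModel:
    """Stand-in for a trained model; simply doubles each input."""
    def predict(self, instances):
        return [2 * x for x in instances]

class CustomPredictor:
    def __init__(self, model):
        self._model = model

    def preprocess(self, request):
        # Convert the raw request payload into model-ready instances.
        return [float(v) for v in request["instances"]]

    def postprocess(self, raw_predictions):
        # Wrap the raw model output in a response payload.
        return {"predictions": raw_predictions}

    def predict(self, request):
        instances = self.preprocess(request)
        outputs = self._model.predict(instances)
        return self.postprocess(outputs)

predictor = CustomPredictor(ToyModel())
response = predictor.predict({"instances": ["1", "2.5"]})
print(response)  # {'predictions': [2.0, 5.0]}
```

Because the predictor is ordinary Python, it can be exercised locally, which mirrors the local image-testing workflow Jain describes.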

Other news notes:

  • Google released Vertex AI Training Reduction Server, which supports both TensorFlow and PyTorch. Training Reduction Server is built to optimize the bandwidth and latency of multinode distributed training on Nvidia GPUs. Google claims this significantly reduces the training time required for large language workloads, like BERT, and further enables cost parity across different approaches. In many mission-critical business scenarios, a shortened training cycle allows data scientists to train a model with higher predictive performance within the constraints of a deployment window. 
  • The company rolled out a preview of Vertex AI Tabular Workflows, which includes a glass-box, managed AutoML pipeline that lets users see and interpret each step in the model-building and deployment process. Users can ostensibly train on datasets of more than a terabyte without sacrificing accuracy, by picking and choosing which parts of the process they want AutoML to handle versus which parts they want to engineer themselves.
  • Google announced a preview of Serverless Spark on Vertex AI Workbench. This allows data scientists to launch a serverless Spark session within their notebooks and interactively develop code.

Google’s graph data

In the graph data space, Google introduced a data partnership with Neo4j that connects graph data to machine learning models. This enables data scientists to explore, analyze and engineer features from connected data in Neo4j and then deploy models with Vertex AI, all within a single unified platform. With Neo4j Graph Data Science and Vertex AI, data scientists can extract more predictive power from models using graph-based inputs and get to production faster across use cases such as fraud and anomaly detection, recommendation engines, customer 360, logistics and more.
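The “graph-based inputs” described above can be as simple as per-node connectivity features fed into a downstream model. A minimal, dependency-free sketch in plain Python (purely conceptual, not the Neo4j Graph Data Science API, which offers far richer algorithms such as PageRank, embeddings and community detection):

```python
# Toy illustration of a graph-derived feature: node degree computed
# from an edge list. Conceptual only; not Neo4j's API.
from collections import defaultdict

def degree_features(edges):
    """Return {node: degree} for an undirected edge list."""
    degree = defaultdict(int)
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return dict(degree)

# Hypothetical example: accounts linked by shared devices, a common
# signal in fraud and anomaly detection.
edges = [("acct1", "acct2"), ("acct2", "acct3"), ("acct2", "acct4")]
features = degree_features(edges)
print(features)  # {'acct1': 1, 'acct2': 3, 'acct3': 1, 'acct4': 1}
```

A highly connected node such as `acct2` would stand out to a fraud model in a way that row-by-row tabular features cannot capture, which is the core argument for graph-based feature engineering.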

Google Vertex AI has also been integrated with graph database maker TigerGraph for several months; the integration is a key part of TigerGraph’s Machine Learning (ML) Workbench offering.

Lastly, Google highlighted its partnership with Labelbox, which focuses on helping data scientists use unstructured data to build more effective machine learning models on Vertex AI. 

Google claims that Vertex AI requires about 80% fewer lines of code to train a model versus competitive platforms, enabling data scientists and ML engineers at all levels of expertise to implement machine learning operations (MLOps) and efficiently build and manage ML projects throughout the entire development lifecycle.

Vertex competes in the same market as Matlab, Alteryx Designer, IBM SPSS Statistics, RapidMiner Studio, Dataiku and DataRobot Studio, according to Gartner Research.

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.