Google’s AI Platform, a cloud-hosted service facilitating machine learning and data science workflows, today gained a new serving backend that lets deployed models tap powerful Nvidia graphics chips. In related news, Google debuted a refreshed model training experience that lets users run a training script on a range of hardware.
For the uninitiated, AI Platform enables developers to prep, build, run, and share machine learning models quickly and easily in the cloud. Using built-in data labeling services, they’re able to annotate model training images, videos, audio, and text corpora by applying classification, object detection, and entity extraction. A managed Jupyter Notebook service provides support for a slew of machine learning frameworks, including Google’s TensorFlow, while a dashboard within the Google Cloud Platform console exposes controls for managing, experimenting with, and deploying models in the cloud or on-premises.
Now, AI Platform Prediction — the component of AI Platform that enables model serving for online predictions in a serverless environment — lets developers choose from a set of machine types in Google’s Compute Engine service to run a model. Thanks to a new backend built on Google Kubernetes Engine, they’re able to add graphics chips like Nvidia’s T4 and have AI Platform Prediction handle provisioning, scaling, and serving. (Online Prediction previously limited developers to machine types with one or four vCPUs.)
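The new backend is configured when a model version is deployed. Below is a minimal sketch of what such a deployment request body could look like, assuming the `machineType` and `acceleratorConfig` field names of the AI Platform v1 REST API; the model path and version name are placeholders, and the actual deployment call (e.g. via a Google API client) is omitted:

```python
# Sketch: request body for deploying a model version on a GPU-backed
# Compute Engine machine type via AI Platform Prediction.
# Field names follow the AI Platform v1 REST API; the deployment call
# itself is omitted here.

def make_version_body(name, deployment_uri):
    """Build a Version resource for a T4-accelerated deployment."""
    return {
        "name": name,
        "deploymentUri": deployment_uri,  # GCS path to the exported model
        "runtimeVersion": "1.15",
        "framework": "TENSORFLOW",
        # A Compute Engine (N1) machine type instead of the legacy
        # one-vCPU / four-vCPU serving machines.
        "machineType": "n1-standard-4",
        # Attach one Nvidia T4; AI Platform Prediction handles
        # provisioning, scaling, and serving.
        "acceleratorConfig": {"count": 1, "type": "NVIDIA_TESLA_T4"},
    }

body = make_version_body("v1_gpu", "gs://my-bucket/model/")
print(body["machineType"], body["acceleratorConfig"]["type"])
```

The key difference from the earlier backend is that machine type and accelerator are chosen independently, rather than picked from a small fixed menu.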
Additionally, prediction requests and responses can now be logged to Google’s BigQuery, where they can be analyzed to detect skew and outliers.
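Request-response logging is likewise set per model version. A hedged sketch, assuming the `requestLoggingConfig` field of the AI Platform v1 Version resource (the project, dataset, and table names are placeholders, and the target BigQuery table must already exist):

```python
# Sketch: enabling request-response logging to BigQuery for a model
# version. The requestLoggingConfig field name follows the AI Platform
# v1 REST API; table and sampling values are illustrative.

def with_bigquery_logging(version_body, table, sampling=1.0):
    """Return a copy of a Version body with prediction I/O logged to BigQuery."""
    logged = dict(version_body)
    logged["requestLoggingConfig"] = {
        # Fully qualified table: "project_id.dataset.table"
        "bigqueryTableName": table,
        # Fraction of requests to log (1.0 = all); lower this under
        # heavy traffic to keep BigQuery storage costs down.
        "samplingPercentage": sampling,
    }
    return logged

version = {"name": "v1_gpu", "machineType": "n1-standard-4"}
version = with_bigquery_logging(
    version, "my-project.serving_logs.predictions", sampling=0.1
)
print(version["requestLoggingConfig"]["bigqueryTableName"])
```

Once the rows land in BigQuery, standard SQL over the logged inputs and outputs can surface skew and outliers.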
As for AI Platform Training — which allows data scientists to run a training script on a variety of hardware, without having to manage the underlying machines — it now supports custom containers, letting researchers launch any Docker container so that they can train a model with any language, framework, or dependencies. Furthermore, AI Platform Training gained support for Compute Engine machine types, which allow the piecemeal selection of any combination of CPUs, RAM, and accelerators.
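The two training additions can be combined in a single job: a custom container image plus an à-la-carte machine choice. A sketch of such a job spec, assuming the `trainingInput`, `masterConfig`, and `imageUri` field names of the AI Platform Training v1 REST API; the image URI, job ID, and region are placeholders:

```python
# Sketch: a training job spec combining a custom Docker container with
# a Compute Engine machine type and accelerator. Field names follow the
# AI Platform Training v1 REST API (Job / trainingInput); values are
# illustrative, and job submission itself is omitted.

def make_training_job(job_id, image_uri):
    """Build a Job resource for custom-container training on a chosen machine."""
    return {
        "jobId": job_id,
        "trainingInput": {
            # CUSTOM scale tier unlocks per-replica machine selection.
            "scaleTier": "CUSTOM",
            # Pick the CPU/RAM combination via a Compute Engine machine type.
            "masterType": "n1-highmem-8",
            "masterConfig": {
                # Any language, framework, or dependencies baked into
                # the container image.
                "imageUri": image_uri,
                "acceleratorConfig": {"count": 1, "type": "NVIDIA_TESLA_T4"},
            },
            "region": "us-central1",
        },
    }

job = make_training_job("train_custom_1", "gcr.io/my-project/trainer:latest")
print(job["trainingInput"]["masterType"])
```

Because the training code is packaged in the image rather than uploaded as a Python package, the service stays framework-agnostic: the container just needs to run to completion.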
“Cloud AI Platform simplifies training and deploying models, letting you focus on using AI to solve your most challenging issues … From optimizing mobile games to detecting diseases to 3D modeling houses, businesses are constantly finding new, creative uses for machine learning,” wrote Cloud AI Platform product manager Henry Tappen in a blog post. “With more inference hardware and training software choices, we look forward to seeing what challenges you use AI to tackle in the future.”