Today at Google’s Cloud Next conference in San Francisco, the Mountain View company announced that it’s expanding Cloud AutoML — the machine learning platform it introduced in January — into new domains.

Starting this week, AutoML Vision, a graphical drag-and-drop tool that lets users leverage Google’s cloud computing backend to train custom object recognition and image classification models, is exiting alpha and entering public beta.
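
To give a sense of how a model trained this way is consumed, here is a minimal sketch of a prediction call using the Python client library for Cloud AutoML as it existed around the beta; the project, region, model ID, and file name are placeholders, and the exact client surface may differ between releases:

```python
# Minimal sketch: querying a trained AutoML Vision model (placeholder IDs).
from google.cloud import automl_v1beta1 as automl

client = automl.PredictionServiceClient()

# "my-project", "us-central1", and "my-model-id" are hypothetical placeholders.
model_name = client.model_path("my-project", "us-central1", "my-model-id")

with open("sample.jpg", "rb") as image_file:
    payload = {"image": {"image_bytes": image_file.read()}}

response = client.predict(model_name, payload)
for annotation in response.payload:
    # Each result carries the label the model assigned and its confidence.
    print(annotation.display_name, annotation.classification.score)
```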

Google revealed that since January, around 18,000 customers have expressed interest in AutoML Vision.

The idea behind it and Cloud AutoML, its umbrella service, is to provide organizations, researchers, and businesses that require custom machine learning models with a simple, no-frills way to train them, Google said. To that end, it’s expanding AutoML to natural language processing (with AutoML Natural Language) and translation (with AutoML Translate).
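
The new services follow the same train-then-predict pattern. As a rough illustration — not any customer’s actual setup — translating a snippet with a custom AutoML Translate model through the same prediction client might look like the sketch below, with all identifiers being placeholders:

```python
# Rough sketch: translating text with a custom AutoML Translate model.
from google.cloud import automl_v1beta1 as automl

client = automl.PredictionServiceClient()

# Placeholder project, region, and model identifiers.
model_name = client.model_path("my-project", "us-central1", "my-translation-model")

payload = {"text_snippet": {"content": "Hello, world", "mime_type": "text/plain"}}
response = client.predict(model_name, payload)

# The response carries the translated text produced by the custom model.
print(response.payload[0].translation.translated_content.content)
```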

Already, Hearst is using AutoML Natural Language to help organize content across its domestic and international magazines, and Japanese publisher Nikkei Group is leveraging AutoML Translate to publish articles in different languages.

“AI is empowerment, and we want to democratize that power for everyone and every business — from retail to agriculture, education to healthcare,” Fei-Fei Li, chief scientist of Google AI, said in a statement. “AI is no longer a niche in the tech world — it’s the differentiator for businesses in every industry. And we’re committed to delivering the tools that will revolutionize them.”

New Cloud AutoML services aren’t all Google announced this morning. It’s also updating existing APIs, including the Cloud Vision API, which will soon recognize handwriting, support PDF and TIFF files, and detect where an object is located within an image. And on the hardware front, the third generation of Google’s Cloud TPUs is now available in alpha.
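
For reference, object localization and dense text (including handwriting) detection are exposed through the standard Vision client library. A sketch with the Python client of that era follows; the image file name is a placeholder, and field names may vary between client versions:

```python
# Sketch of the Cloud Vision API features mentioned above (placeholder file name).
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("receipt.jpg", "rb") as f:
    image = vision.types.Image(content=f.read())

# Object localization: which objects appear and where they sit in the frame.
objects = client.object_localization(image=image).localized_object_annotations
for obj in objects:
    box = [(v.x, v.y) for v in obj.bounding_poly.normalized_vertices]
    print(obj.name, obj.score, box)

# Document text detection also covers handwriting in supported languages.
text = client.document_text_detection(image=image).full_text_annotation
print(text.text)
```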

Google also unveiled Contact Center AI, a machine learning-powered virtual customer service representative built with Google’s Dialogflow package that interacts with callers over the phone. The company is marketing it as a toolkit for conversational agents.

Contact Center AI, when deployed, fields incoming calls and uses sophisticated natural language processing to suggest solutions to common problems. If the virtual agent can’t solve the caller’s issue, it hands the caller off to a human agent — a feature Google is calling “agent assist” — and presents that agent with information relevant to the call at hand.
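
Contact Center AI itself is a managed offering, but the Dialogflow layer underneath it can be exercised directly. Below is a hedged sketch of the intent-detection call an agent-assist-style integration would make, using the Dialogflow V2 Python client with placeholder project and session IDs:

```python
# Illustrative only: detecting a caller's intent with the Dialogflow V2 client.
import dialogflow_v2 as dialogflow

session_client = dialogflow.SessionsClient()

# "my-project" and "caller-session-123" are hypothetical placeholders.
session = session_client.session_path("my-project", "caller-session-123")

text_input = dialogflow.types.TextInput(
    text="I need to reset my account password", language_code="en-US"
)
query_input = dialogflow.types.QueryInput(text=text_input)

response = session_client.detect_intent(session=session, query_input=query_input)
result = response.query_result

# The matched intent, its confidence, and the suggested reply for the agent.
print(result.intent.display_name)
print(result.intent_detection_confidence)
print(result.fulfillment_text)
```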

Google said it’s working with existing customers to “engage with us around the responsible use” of AI. “We want to make sure we’re using technology in ways employees and users will find fair, empowering, and worthy of their trust,” Li wrote.
