Google is adding a new automated closed captions feature to its Google Slides presentation program, one that generates real-time subtitles from the presenter's speech.

The feature is rolling out globally starting today, though it will initially be available in U.S. English only.

The new feature is designed chiefly to help people who are deaf or hard of hearing. The idea is that anyone presenting to a roomful of people can augment the written words already on their slides with closed captions of the accompanying verbal presentation.

How it works

Just before you start presenting, hit the little “CC” (closed captions) button in the navigation box (you can also use the keyboard shortcut Ctrl-Shift-C on Windows and Chrome OS, or ⌘-Shift-C on Mac). Google Slides will then listen through your computer’s built-in microphone and automatically convert your voice into text at the bottom of your presentation.

Above: Google Slides: Closed captions
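For a sense of how this kind of live captioning can work, any modern browser exposes similar functionality through the Web Speech API. The sketch below is purely illustrative (Google hasn't said what Slides uses under the hood): it runs continuous recognition with interim results and renders the running transcript into a caption element. The `captions` element id is a hypothetical placeholder, and the caption-building helper is kept separate from the browser wiring so it works anywhere.

```javascript
// Illustrative sketch of browser-based live captioning via the Web Speech API.
// This is NOT Google Slides' actual implementation.

// Pure helper: combine finalized text with the current interim hypothesis.
function buildCaption(finalized, interim) {
  return (finalized + " " + interim).trim();
}

// Browser-only wiring, guarded so the file also loads outside a browser.
if (typeof window !== "undefined") {
  const Recognition =
    window.SpeechRecognition || window.webkitSpeechRecognition;
  if (Recognition) {
    const rec = new Recognition();
    rec.lang = "en-US";        // U.S. English, matching the launch language
    rec.continuous = true;     // keep listening for the whole presentation
    rec.interimResults = true; // surface words as they are recognized

    let finalized = "";
    rec.onresult = (event) => {
      let interim = "";
      for (let i = event.resultIndex; i < event.results.length; i++) {
        const transcript = event.results[i][0].transcript;
        if (event.results[i].isFinal) {
          finalized += transcript; // locked-in text
        } else {
          interim += transcript;   // still-changing hypothesis
        }
      }
      // Render the caption bar (hypothetical "captions" element).
      document.getElementById("captions").textContent =
        buildCaption(finalized, interim);
    };
    rec.start();
  }
}
```

In practice, interim results are what make captions feel "live": words appear immediately and are quietly corrected once the recognizer finalizes them.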

Though the primary target audience for this new feature is people with some form of hearing loss, Google said it anticipates use cases extending far beyond that. An auditorium may be noisy, for example, or a presenter may not be projecting their voice well; automated closed captions should go some way toward helping everyone understand what a presenter is saying.

“The fact that the feature was built primarily for accessibility purposes but is also helpful to all users shows the overall value for everyone of incorporating accessible designs and features into products,” the company said in a blog post.

Speech recognition

Google already offers a bunch of speech recognition-powered features across its various products. Google Docs, for example, lets you edit and format text using your voice, while voice typing is also available through its mobile keyboard app Gboard. And Android TV users can search for content using natural language voice queries. With the rise of smart virtual assistants, technology giants are racing to get their voice-activated assistants into as many hands as possible, and in Google’s case, Google Assistant gains new intelligence features almost weekly.

Making products more accessible is another key trend among technology companies, with around 15 percent of people around the world experiencing some form of disability, according to World Bank data. That works out to around 1 billion people. Last month, for example, Google revealed it’s finally bringing native hearing aid support to Android, an oft-requested feature from the hard-of-hearing community.

So mixing speech recognition with accessibility considerations feels like an obvious step for Google, given its recent and current areas of focus.

It’s also worth noting here that nobody enjoys transcribing, which is why we’ve seen a ton of auto-transcription services roll out lately. Startup AISense recently updated its voice recording app with a new feature for automatically transcribing live events, while Zoom also now uses AI to automatically transcribe videoconferences. Microsoft, too, is investing heavily in speech-to-text services to improve its own suite of cloud-based tools.

The new Google Slides feature is available only on desktop and laptop computers for now, and plans are afoot to expand it to more languages in the future.