Google announced today that its Cloud Video Intelligence API is generally available, along with a new Content Classification feature for its Cloud Natural Language API. The updates are aimed at giving customers new capabilities for making their applications smarter using prebuilt machine learning systems.

As their names imply, the two services are designed to provide developers with tools that they can use to make applications understand the content of videos and text. That, in turn, is meant to help developers build more intelligent apps of the sort that would previously have been the domain of the tech titans.

In addition to becoming generally available, the Cloud Video Intelligence API can now transcribe the speech in videos fed into it. That joins its existing capabilities for detecting objects in footage, spotting shot changes within a single video, and flagging explicit content.
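For developers, the API is reachable through the standard Cloud client libraries. The following is a minimal sketch assuming the google-cloud-videointelligence Python client; the bucket path is invented, and transcription requires a speech configuration specifying the spoken language, as shown.

```python
from google.cloud import videointelligence

# Hypothetical input: a video already uploaded to Cloud Storage.
INPUT_URI = "gs://my-bucket/press-briefing.mp4"

client = videointelligence.VideoIntelligenceServiceClient()

# Request all four analyses in one call: labels (objects), shot changes,
# explicit content, and the new speech transcription feature.
features = [
    videointelligence.Feature.LABEL_DETECTION,
    videointelligence.Feature.SHOT_CHANGE_DETECTION,
    videointelligence.Feature.EXPLICIT_CONTENT_DETECTION,
    videointelligence.Feature.SPEECH_TRANSCRIPTION,
]

# Transcription needs to know which language is spoken in the video.
video_context = videointelligence.VideoContext(
    speech_transcription_config=videointelligence.SpeechTranscriptionConfig(
        language_code="en-US"
    )
)

# annotate_video runs asynchronously; result() blocks until the analysis finishes.
operation = client.annotate_video(
    request={
        "input_uri": INPUT_URI,
        "features": features,
        "video_context": video_context,
    }
)
result = operation.result(timeout=600)

# Print the transcript alternatives returned for the video.
annotation = result.annotation_results[0]
for transcription in annotation.speech_transcriptions:
    for alternative in transcription.alternatives:
        print(alternative.transcript, alternative.confidence)
```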

The Cloud Natural Language API’s Content Classification feature tells developers which category a document fits into, with support for classifying into buckets like Arts & Entertainment, Hobbies & Leisure, Law & Government, News, and Health. That’s useful for taking a large database of varied content and automatically ensuring each item is tagged with the right categories for filing and distribution.
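As a sketch of how that surfaces to developers, the google-cloud-language Python client exposes a classify_text method that returns category paths and confidence scores; the sample text below is invented for illustration.

```python
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

# Hypothetical document pulled from a content database.
text = (
    "The orchestra opened the season with a program of Beethoven and "
    "Brahms, followed by a premiere from a young resident composer."
)
document = language_v1.Document(
    content=text, type_=language_v1.Document.Type.PLAIN_TEXT
)

# classify_text returns one or more category paths, e.g.
# "/Arts & Entertainment/Music & Audio", each with a confidence score.
response = client.classify_text(request={"document": document})
for category in response.categories:
    print(f"{category.name}: {category.confidence:.2f}")
```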

While this is an incremental release, it comes at a time of heavy competition among cloud providers in prebuilt machine learning APIs. It arrives roughly a week after Amazon Web Services announced its own set of machine learning APIs, including services for video intelligence and natural language understanding.

One of Google Cloud’s key assets as it tries to catch up with other players in the cloud wars is the perception that the company leads in artificial intelligence and machine learning. Continuing to ship features for its intelligence services is key to maintaining that lead, especially while providers like AWS hold a larger share of the overall cloud market.