Microsoft is continuing to build out the capabilities of its Azure Cognitive Services platform, and the company unleashed a firehose of announcements large and small across its product stack at Ignite 2019. As sometimes happens with that approach, there’s no apparent overarching theme to the Cognitive Services updates. Instead, Microsoft is constantly iterating on its AI-powered search, vision, language, speech, and decision-making tools, as it has done since Azure Cognitive Services (under its current branding) emerged in 2016.

Azure Cognitive Search has a simple mission: “Use AI to solve business problems.”

It is designed to help companies more easily search their own structured and unstructured data and is powered by vision, language, and speech APIs. Today's news includes new data connectors for Azure Data Lake Store, MongoDB, and Cassandra; new built-in translation and power skills; and availability in more regions (although Microsoft did not say which regions those are).
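To make the connector idea concrete, here is a minimal sketch of the JSON body that Azure Cognitive Search's "Create Data Source" REST operation expects when registering an external store for indexing. The names (`contoso-docs`, the connection string placeholder) are hypothetical, and the `adlsgen2` type string is an assumption based on the Data Lake connector:

```python
import json

def build_data_source(name, source_type, connection_string, container):
    """Sketch of the JSON body for Azure Cognitive Search's
    'Create Data Source' operation, which registers an external
    store (e.g. a data lake or database) for indexing."""
    return {
        "name": name,
        "type": source_type,  # assumed "adlsgen2" for Azure Data Lake Store Gen2
        "credentials": {"connectionString": connection_string},
        "container": {"name": container},
    }

# Hypothetical values -- replace with your own service details.
ds = build_data_source("contoso-docs", "adlsgen2", "<connection-string>", "documents")
print(json.dumps(ds, indent=2))
```

The body would then be sent via `PUT` to the search service's `/datasources/{name}` endpoint, after which an indexer can pull documents from the registered store.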

Personalizer, which uses reinforcement learning to give your app’s users personalized experiences, had been in preview but is now generally available. It promises to give devs a user-friendly interface to manage the reinforcement learning loop, and you ostensibly don’t need machine learning expertise to run it. It’s available in Azure or on-premises.
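Personalizer's reinforcement learning loop has two halves: a Rank call that asks the service to choose an action for a given context, and a Reward call that reports how well that choice worked. The sketch below shapes those two request bodies; the event ID, features, and action names are all hypothetical illustrations, not values from Microsoft's materials:

```python
import json

def build_rank_request(event_id, context_features, actions):
    """Build the JSON body for a Personalizer Rank call, which asks the
    service to pick the best action for the given context features."""
    return {
        "eventId": event_id,
        "contextFeatures": context_features,
        "actions": actions,
    }

def build_reward_request(value):
    """Build the JSON body for the Reward call, closing the loop with
    a score (typically between 0 and 1) for the ranked event."""
    return {"value": value}

# Hypothetical context and actions for a content-recommendation app.
rank_body = build_rank_request(
    event_id="evt-001",
    context_features=[{"timeOfDay": "morning"}, {"device": "mobile"}],
    actions=[
        {"id": "article-sports", "features": [{"topic": "sports"}]},
        {"id": "article-tech", "features": [{"topic": "technology"}]},
    ],
)
print(json.dumps(rank_body, indent=2))
```

In practice the rank body goes to the resource's `/personalizer/v1.0/rank` endpoint, and the reward body to `/personalizer/v1.0/events/{eventId}/reward` once the user's reaction is known.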

There’s also now support for Hindi and Arabic in Language Understanding — a tool that lets you add natural language understanding to apps, bots, and IoT devices and operates on the edge, on premises, or in the cloud via containers.
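A Language Understanding (LUIS) prediction is just an HTTP GET with the utterance as a query parameter; the service returns the top-scoring intent and any extracted entities. This sketch composes such a URL for a Hindi utterance — the app ID and region are placeholders, and the v2.0 path reflects the prediction API current at the time:

```python
from urllib.parse import urlencode

# Hypothetical app ID and region -- replace with your own LUIS resource.
APP_ID = "<your-app-id>"
REGION = "westus"

def build_luis_query(utterance, verbose=True):
    """Compose the GET URL for a LUIS v2.0 prediction request."""
    base = f"https://{REGION}.api.cognitive.microsoft.com/luis/v2.0/apps/{APP_ID}"
    params = urlencode({"q": utterance, "verbose": str(verbose).lower()})
    return f"{base}?{params}"

# A Hindi utterance ("I need to book a taxi for tomorrow").
url = build_luis_query("मुझे कल के लिए टैक्सी बुक करनी है")
print(url)
```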

The rest of the Cognitive Services updates — in Speech, Text Analytics, Vision, and security — are all launching in preview.

The Speech area added Custom Neural Voice, which lets customers use deep neural networks (DNNs) and their own training audio to create personalized voices. It also added Custom Speech, which allows customers to create custom speech models from their own Office 365 data. "Additional new capabilities, such as Custom Commands, Custom Speech, and Custom Voice containers, Speech Translation with automatic language identification, and streamlined integration with Bot Framework, are making it easier to quickly embed advanced speech capabilities into your apps," read Microsoft's release materials.
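A deployed Custom Neural Voice is selected at synthesis time through SSML, the XML request body Azure's text-to-speech endpoint accepts. Below is a minimal sketch of such a body; `ContosoVoice` is a hypothetical custom voice deployment name, not a real one:

```python
def build_ssml(voice_name, text, lang="en-US"):
    """Minimal SSML request body for a text-to-speech call; a Custom
    Neural Voice is chosen by its deployed voice name."""
    return (
        f"<speak version='1.0' xml:lang='{lang}'>"
        f"<voice name='{voice_name}'>{text}</voice>"
        "</speak>"
    )

# 'ContosoVoice' is a hypothetical custom voice deployment name.
ssml = build_ssml("ContosoVoice", "Welcome back to the Contoso support line.")
print(ssml)
```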

Form Recognizer, which uses machine learning to extract data from documents, got a new feedback loop capability that lets you create custom tags for form extraction. Additional new features in Text Analytics enhance those capabilities by adding:

  • The ability to detect and extract personally identifiable information in documents.
  • Enhancements to its sentiment analysis capability, with significant improvements in text categorization and scoring.
  • Expanded entity type support for more than 100 named entity types in five languages and more than 20 named entity types in 16 other languages.
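The Text Analytics endpoints for sentiment analysis and PII/entity extraction all consume the same documents-style payload. The sketch below shapes raw strings into that structure; the example texts are invented for illustration:

```python
import json

def build_documents_payload(texts, language="en"):
    """Shape raw texts into the documents payload that Text Analytics v3
    endpoints (sentiment, PII/entity recognition) expect."""
    return {
        "documents": [
            {"id": str(i + 1), "language": language, "text": t}
            for i, t in enumerate(texts)
        ]
    }

payload = build_documents_payload([
    "The new dashboard is fantastic.",
    "Call me at 555-0100 about the invoice.",  # contains a phone number the PII skill should flag
])
print(json.dumps(payload, indent=2))
```

The same payload can be POSTed to the sentiment or entity-recognition routes of a Text Analytics resource, which return per-document results keyed by the `id` values.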

In an effort to provide greater security to Cognitive Services, Microsoft is also adding support for Azure Virtual Network (VNet), its private networking offering.
