In a bid for dominance in the multibillion-dollar AI software-as-a-service (SaaS) market, Microsoft’s beefing up its suite of cloud-hosted machine learning tools — Azure Cognitive Services — with two additions that were previously available only to select customers. It today announced the public preview of Anomaly Detector, which aims to intelligently suss out unusual activity in millions of data transactions, and the general availability of Custom Vision, which facilitates the training and deployment of object-detecting AI models.

“From using speech recognition, translation, and text-to-speech to image and object detection, Azure Cognitive Services makes it easy for developers to add intelligent capabilities to their applications in any scenario,” wrote Microsoft chief of staff Anand Raman in a blog post. He added that more than a million developers have tried Cognitive Services to date. “As companies increasingly look to transform their businesses with AI, we continue to add improvements to Azure AI to make it easy for developers and data scientists to deploy, manage, and secure AI functions directly into their applications.”

Anomaly Detector is designed to identify unusual, rare, or irregular data patterns that might signal problems — like credit card fraud, for instance, or a compromised network node. It’s available through a single API and runs in real time, and more than 200 teams across Azure and other “core” Microsoft products currently rely on it to “boost the reliability” of their systems, Raman says.

“Through a single API, developers can easily embed anomaly detection capabilities into their applications to ensure high data accuracy, and automatically surface incidents as soon as they happen,” he said. “Common use case scenarios include identifying business incidents and text errors, monitoring IoT device traffic, detecting fraud, responding to changing markets, and more.”
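To give a sense of what the “single API” integration looks like, here is a minimal sketch of building a request body for the Anomaly Detector service. The field names (`series`, `timestamp`, `value`, `granularity`) and the endpoint path shown in the comment reflect the request schema as documented around the preview launch; treat them as assumptions and check the current Azure API reference before relying on them.

```python
import json


def build_detect_request(points, granularity="daily"):
    """Build the JSON body for a whole-series anomaly detection call.

    `points` is a list of (ISO-8601 timestamp, numeric value) pairs.
    Field names follow the Anomaly Detector request schema as documented
    at preview time (an assumption -- verify against the live API docs).
    """
    return {
        "series": [{"timestamp": ts, "value": v} for ts, v in points],
        "granularity": granularity,
    }


points = [
    ("2019-03-01T00:00:00Z", 32.0),
    ("2019-03-02T00:00:00Z", 31.5),
    ("2019-03-03T00:00:00Z", 250.0),  # spike a detector would likely flag
]
body = json.dumps(build_detect_request(points))
# This body would then be POSTed to the service endpoint, e.g.
# https://<your-resource>.cognitiveservices.azure.com/anomalydetector/v1.0/timeseries/entire/detect
# with an Ocp-Apim-Subscription-Key header carrying your key.
```

The service responds with per-point anomaly flags and expected-value bounds, which is what lets applications “automatically surface incidents as soon as they happen.”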

Today’s other big announcement? Custom Vision, Azure’s AI-driven image classification product, has exited preview. It allows developers to train their own real-time object classifiers and export them to run offline on iOS (with Apple’s Core ML toolkit), Android (in Google’s TensorFlow machine learning framework), and other edge devices, and the latest release packs key improvements.

Among them is an “advanced training” feature that taps a high-performance backend optimized for “challenging datasets” and “fine-grained” classification. In addition, developers can now specify a compute time budget and experimentally identify the best training and augmentation settings, as well as take advantage of support for the ARM architecture — used by the Raspberry Pi 3 and the Vision AI Dev Kit — to deploy models to low-power devices.

Custom Vision pricing starts at $2 per 1,000 transactions, while training costs $20 per compute hour and image storage is $0.70 per 1,000 images. The free tier includes 5,000 training images per project (up to a maximum of two projects and an hour of training per month) and 10,000 predictions per month.
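For a rough feel of how those rates combine, here is a small estimator using the standard-tier prices quoted above. The function name and the example workload are illustrative, not part of any Azure SDK, and the rates are those published at general availability; check the current Azure pricing page before budgeting.

```python
def custom_vision_cost(predictions, training_hours, images_stored):
    """Estimate a monthly Custom Vision bill from the GA standard-tier rates.

    Rates per the announcement: $2 per 1,000 prediction transactions,
    $20 per training compute hour, $0.70 per 1,000 stored images.
    """
    PREDICTION_RATE = 2.00 / 1000   # dollars per prediction transaction
    TRAINING_RATE = 20.00           # dollars per compute hour
    STORAGE_RATE = 0.70 / 1000      # dollars per stored image
    return (predictions * PREDICTION_RATE
            + training_hours * TRAINING_RATE
            + images_stored * STORAGE_RATE)


# Example: 50,000 predictions, 3 training hours, 20,000 stored images
# -> 100 + 60 + 14 = 174.0 dollars for the month
estimate = custom_vision_cost(50_000, 3, 20_000)
```

A workload that stays under the free-tier limits — at most two projects, 5,000 training images per project, an hour of training, and 10,000 predictions per month — would of course cost nothing.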

“Today’s milestones illustrate our commitment to make the Azure AI platform suitable for every business scenario, with enterprise-grade tools that simplify application development, and industry-leading security and compliance for protecting customers’ data,” Raman wrote.

Microsoft also today announced the expansion of Azure Stack, its hybrid cloud computing software solution, to over 92 countries and introduced hyperconverged infrastructure (HCI) support, allowing customers to run virtualized apps on-premises with direct access to Azure services such as backup and recovery. Lastly, it made generally available Azure Data Box Edge, an Intel Arria 10 FPGA-powered appliance for edge containers that processes and accelerates machine learning workloads, and the Azure Data Box Gateway virtual appliance.