In May 2018 — during its annual Build developer conference in Seattle — Microsoft announced a partnership with Qualcomm to create what it described as a developer kit for computer vision applications. This effort resulted in the Vision AI Developer Kit, a hardware base built on Qualcomm’s Vision Intelligence Platform designed to run AI models locally and integrate with Microsoft’s Azure ML and Azure IoT Edge cloud services, which became available to select customers last October.
Today, Microsoft and Qualcomm announced that the Vision AI Developer Kit (manufactured by eInfochips) is now broadly available from distributor Arrow Electronics for $249. A software development kit containing Python modules for Visual Studio Code, prebuilt Azure IoT Edge deployment configurations, and a Vision AI Developer Kit extension for Visual Studio Code is available on GitHub, along with a default module that recognizes 183 different objects.
Microsoft principal program manager Anne Yang notes that the Vision AI Developer Kit can be used to create apps that check, for instance, that every person on a construction site is wearing a hardhat, or that detect whether products on store shelves are out of stock. “AI workloads include megabytes of data and potentially billions of calculations,” Yang wrote in a blog post. “In hardware, it is now possible to run time-sensitive AI workloads on the edge while also sending outputs to the cloud for downstream applications.”
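The hardhat scenario boils down to post-processing the detector's output. As a rough sketch only: the detection schema, labels, and box format below are hypothetical and do not reflect the kit's actual module API.

```python
# Hypothetical object-detection output; the real module's schema may differ.
detections = [
    {"label": "person", "confidence": 0.91, "box": (34, 50, 120, 220)},
    {"label": "hardhat", "confidence": 0.88, "box": (40, 42, 70, 68)},
    {"label": "person", "confidence": 0.84, "box": (200, 55, 290, 230)},
]

def boxes_overlap(a, b):
    """Axis-aligned overlap test on (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

def people_without_hardhats(dets, threshold=0.5):
    """Return detected people whose box overlaps no hardhat box."""
    people = [d for d in dets if d["label"] == "person" and d["confidence"] >= threshold]
    hats = [d for d in dets if d["label"] == "hardhat" and d["confidence"] >= threshold]
    return [p for p in people if not any(boxes_overlap(p["box"], h["box"]) for h in hats)]

alerts = people_without_hardhats(detections)
print(len(alerts))  # the second person has no overlapping hardhat box
```

In practice, logic like this would run inside an IoT Edge module on the device, with only the alerts forwarded to the cloud.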
Programmers tinkering with the Vision AI Developer Kit can tap Azure ML for AI model creation and monitoring, and Azure IoT Edge for model management and deployment. They can build a vision model either by uploading tagged pictures to Azure Blob Storage and letting the Azure Custom Vision Service do the rest, or by using Jupyter notebooks and Visual Studio Code to devise and train custom vision models with Azure Machine Learning (AML), converting the trained models to DLC format, and packaging them into an IoT Edge module.
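The last step, deployment, is driven by an IoT Edge deployment manifest that tells the device which container image to pull. The sketch below assembles a heavily simplified manifest in Python; the module name and registry are placeholders, and a real manifest carries additional required sections (edge runtime settings, routes, and so on).

```python
import json

def make_manifest(module_name, image):
    """Build a minimal (incomplete) IoT Edge deployment manifest skeleton.

    Only the module entry is sketched here; a production manifest also
    configures the $edgeAgent/$edgeHub runtime and message routes.
    """
    return {
        "modulesContent": {
            "$edgeAgent": {
                "properties.desired": {
                    "modules": {
                        module_name: {
                            "type": "docker",
                            "status": "running",
                            "restartPolicy": "always",
                            # Container image holding the packaged DLC model.
                            "settings": {"image": image},
                        }
                    }
                }
            }
        }
    }

# Placeholder module name and container registry for illustration.
manifest = make_manifest("VisionSampleModule", "myregistry.azurecr.io/visionsample:1.0")
print(json.dumps(manifest, indent=2))
```

Applying a manifest like this to a device (for example, via the Azure CLI or the portal) is what causes the edge runtime to pull and run the vision module.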
The Vision AI Developer Kit — which runs Yocto Linux — has a Qualcomm QCS603 system-on-chip at its core, paired with 4GB of LPDDR4X memory and 64GB of onboard storage. An 8-megapixel camera sensor capable of recording in 4K UHD handles footage capture duties, while a four-microphone array captures sounds and commands. The kit connects via Wi-Fi (802.11b/g/n 2.4GHz/5GHz), and it also offers an HDMI out port, audio in and out ports, a USB-C port for data transfer, and a microSD card slot for additional storage.
The Snapdragon Neural Processing Engine (SNPE) within Qualcomm’s Vision Intelligence 300 Platform powers on-device execution of the containerized Azure services, making the Vision AI Developer Kit the first “fully accelerated” platform supported end-to-end by Azure, according to Yang. “Using the Vision AI Developer Kit, you can deploy vision models at the intelligent edge in minutes, regardless of your current machine learning skill level,” she said.
The Vision AI Developer Kit has a rival in Amazon’s AWS DeepLens, which lets developers run deep learning models locally on a bespoke camera to analyze and take action on what it sees. For its part, Google recently made available the Coral Dev Board, a hardware kit for accelerated AI edge computing that ships alongside a USB camera accessory.