Deep North (formerly VMAXX), a Silicon Valley startup with offices in China and Sweden, hopes to leverage artificial intelligence (AI) to prevent violence and “other safety issues” facing schools. Today it announced a program that will offer a select number of institutions the opportunity to field-test its threat-detecting object recognition and computer vision technology.
It’s already working with school districts and universities in Texas, Florida, Massachusetts, and California, and it has the backing of U.S. Congressperson Pete Sessions (R-TX). “AI represents one of the few viable ways to make schools safe, and does it in a way that is more affordable than any other,” Sessions said in a statement.
Not unlike Amazon Web Services’ Rekognition, IBM’s Watson Visual Recognition, and Microsoft’s Azure Face API, Deep North’s platform applies an intelligent layer to conventional, off-the-shelf security cameras (with resolutions as low as 320p), analyzing footage as it comes in. It monitors, detects, and interprets people’s in-frame behavior and movements across settings, and it identifies objects that might pose a danger to students and staff, such as unattended bags or items that look like weapons.
School administrators receive alerts when a potential threat has been identified.
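Deep North hasn’t published how its detection-to-alert pipeline works, but the general pattern the company describes — run an object detector over incoming frames and notify staff when a flagged class appears — can be sketched as follows. Every name, class label, and threshold here is a hypothetical illustration, not Deep North’s implementation.

```python
# Hypothetical sketch of a detect-and-alert loop, not Deep North's code.
# `detections` stands in for the output of any object detector,
# e.g. a list of (label, confidence) pairs per frame.
FLAGGED = {"weapon", "unattended_bag"}  # illustrative threat classes
THRESHOLD = 0.8                          # illustrative confidence cutoff

def review_frame(detections, alert):
    """Call `alert` for each flagged, high-confidence detection."""
    for label, confidence in detections:
        if label in FLAGGED and confidence > THRESHOLD:
            alert(f"Potential threat: {label} ({confidence:.0%})")

alerts = []
review_frame([("backpack", 0.95), ("weapon", 0.91)], alerts.append)
print(alerts)  # ['Potential threat: weapon (91%)']
```

In a real deployment the `alert` callback would route to administrators’ phones or a monitoring console rather than a list.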
The patent-pending tech, which was originally engineered for brick-and-mortar retail, leverages cross-camera tracking that scans crowds and monitors “areas of special concern,” such as entrances, exits, and gathering areas. Deep North claims its technology doesn’t share any personally identifiable information of students or faculty (thanks to a numeric hashtag system based on physical characteristics) and said it can also be used to prevent abductions, “improve facilities layouts” and infrastructure, and manage foot traffic.
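Deep North hasn’t detailed its “numeric hashtag system,” but the general idea — tracking a person across cameras by a stable opaque number derived from physical characteristics, rather than by name or face image — can be illustrated. The feature vector, quantization step, and hash choice below are all assumptions for the sketch.

```python
import hashlib
import numpy as np

def pseudonymous_id(feature_vector, precision=1):
    """Map a physical-feature embedding to an opaque numeric tag.

    Hypothetical sketch: quantize the embedding so the same person
    hashes consistently across cameras, then hash it so no personally
    identifiable information is stored, only a number.
    """
    quantized = np.round(np.asarray(feature_vector, dtype=float), precision)
    digest = hashlib.sha256(quantized.tobytes()).hexdigest()
    return int(digest[:12], 16)  # short numeric tag

# The same appearance features seen by two cameras yield the same tag,
# enabling cross-camera tracking without storing identities.
tag_cam_a = pseudonymous_id([0.12, 0.87, 0.45])
tag_cam_b = pseudonymous_id([0.12, 0.87, 0.45])
print(tag_cam_a == tag_cam_b)  # True
```

A production system would need a far more robust matching scheme than exact quantization, but the privacy property is the same: the stored value is a number, not a face.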
“It was both unexpected and eye-opening to see the value our video AI and deep learning expertise could also bring to securing schools,” said Deep North president and CEO Michael Adair. “Utilizing our solution, schools are able to automate and amplify the concept of ‘see something, say something’ in a way human security simply can’t match … The ability for a school to improve its safety and security without taking on steep costs or having to adopt stress-inducing measures such as metal detectors, is no small feat.”
It’s not the first system of its kind. Earlier this year, a high school in eastern China began testing an “intelligent classroom behavior management system” that uses facial recognition to analyze students’ engagement in real time. And a Paris business school is using artificial intelligence and facial analysis supplied by LCA Learning’s Nestor to determine whether students are paying attention in class.
Companies like Shielded Students, meanwhile, hope to employ cameras and integrated microwave radar scanners and computer vision software to identify guns and other hidden weapons in schools.
Unsurprisingly, such systems have their detractors. Critics say there is little to no public data with which to assess whether AI-driven surveillance systems in schools actually work, and they point out that facial recognition AI is particularly susceptible to bias and false positives.
In July, the ACLU demonstrated that Amazon’s Rekognition could, when calibrated a certain way, misidentify 28 sitting members of Congress as criminals. A 2012 study showed that facial recognition algorithms from vendor Cognitec performed 5 to 10 percent worse on African Americans than on Caucasians. More recently, it was revealed that a system deployed by London’s Metropolitan Police produces as many as 49 false matches for every hit.
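Lopsided false-match ratios like the Met’s 49-to-1 follow from base rates: when the thing being detected is rare, even an accurate classifier produces mostly false alarms. A quick Bayes’ rule calculation with hypothetical numbers (not figures from any of the systems above) shows why.

```python
# Why rare-event detection yields many false alarms (Bayes' rule).
# All numbers below are hypothetical, for illustration only.
def precision(sensitivity, false_positive_rate, prevalence):
    """Fraction of alerts that are true hits."""
    true_pos = sensitivity * prevalence
    false_pos = false_positive_rate * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A 99%-sensitive detector with a 1% false-positive rate, scanning
# for something present in only 1 in 10,000 observations:
p = precision(0.99, 0.01, 0.0001)
print(round(p, 4))  # 0.0098: roughly 100 false alarms per true hit
```

The lesson critics draw is that headline accuracy figures say little about how a system behaves when threats are vanishingly rare, as they are in schools.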
Rick Smith, CEO of Axon, one of the largest suppliers of body cameras in the U.S., said this summer that facial recognition isn’t yet accurate enough for law enforcement applications.
“[They aren’t] where they need to be to be making operational decisions off the facial recognition,” he said. “This is one where we think you don’t want to be premature and end up either where you have technical failures with disastrous outcomes or … there’s some unintended use case where it ends up being unacceptable publicly in terms of long-term use of the technology.”
But Adair expressed confidence in the Deep North system’s accuracy — and its potential to do real good.
“We look forward to expanding our efforts with this program and helping more schools across the country enhance security, mitigate safety risks, and better protect their students and faculty for the long run,” he said. “We are proud to be leading the way in providing a behind-the-scenes, software-driven option that can truly make a difference in the near-term, as well as the long-term.”