Amazon supports the creation of a “legislative framework” covering facial recognition technology. That’s according to Michael Punke, vice president of global public policy at Amazon’s AWS division, who penned a blog post this week outlining proposed guidelines for the “responsible use” of face-classifying software by private, commercial, and government entities.
“Over the past several months, we’ve talked to customers, researchers, academics, policymakers, and others to understand how to best balance the benefits of facial recognition with the potential risks,” Punke wrote. “It’s critical that any legislation protect civil rights while also allowing for continued innovation and practical application of the technology … We encourage policymakers to consider these guidelines as potential legislation and rules are considered in the U.S. and other countries.”
Some of the guidelines — of which there are six — seem based on common sense. For instance, Amazon proposes that facial recognition technology comply with “all laws,” including laws that protect civil rights, and that law enforcement agencies employing facial recognition be “transparent” about their use and detail privacy “safeguards” in regular reports. It also recommends that when law enforcement uses AI to identify people of interest in an investigation, the confidence threshold — the minimum similarity score a prediction must reach before the system reports it as a match — be set to 99 percent.
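To make the recommendation concrete, here is a minimal sketch of how a confidence threshold works as a filter on candidate matches. This is an illustrative stand-in, not the actual Rekognition API: the `filter_matches` function, the `(person_id, similarity)` pair format, and the sample data are all hypothetical.

```python
# Hypothetical sketch of confidence-threshold filtering (not the real
# Rekognition API). Each candidate match carries a similarity score in
# the range [0, 100]; only matches at or above the threshold are kept.

def filter_matches(candidates, threshold=99.0):
    """Keep only candidate matches whose similarity meets the threshold.

    candidates: list of (person_id, similarity) pairs, similarity in [0, 100].
    """
    return [(pid, score) for pid, score in candidates if score >= threshold]

# Made-up candidate matches for illustration.
candidates = [("person_a", 99.4), ("person_b", 92.1), ("person_c", 85.0)]

# At the 99 percent threshold Amazon recommends for law enforcement use,
# only the strongest candidate survives.
print(filter_matches(candidates))        # [('person_a', 99.4)]

# A looser threshold returns weaker — and riskier — matches as well.
print(filter_matches(candidates, 80.0))  # all three pairs
```

The point of the high cutoff is that lowering the threshold trades precision for recall: more candidates come back, but a larger share of them are false matches.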
The other guidelines reflect a growing consensus among academics and advocacy groups that transparency — and recognition of current technology’s limitations — will be key in the successful deployment of facial recognition services. Amazon proposes that parties give notice when they’ve deployed facial recognition tech in “public or commercial premises,” such as shopping centers and restaurants. And the company says facial recognition shouldn’t be used to make “fully automated, final decisions” that might violate a person’s rights. “For example, for any law enforcement use of facial recognition to identify a person of interest in a criminal investigation, law enforcement agents should manually review the match before making any decision to interview or detain the individual,” Punke wrote. “In all cases, facial recognition matches should be viewed in the context of other compelling evidence, and not be used as the sole determinant for taking action.”
Today’s blog post comes weeks after researchers at the Massachusetts Institute of Technology published a study finding that Rekognition, Amazon Web Services’ (AWS) cloud-based image analysis service, failed to reliably classify the gender of female and darker-skinned faces in specific scenarios. The study’s coauthors claimed that in experiments conducted over the course of 2018, Rekognition’s facial analysis feature mistakenly identified pictures of women as men 19 percent of the time, and pictures of darker-skinned women as men 31 percent of the time.
Amazon disputed — and continues to dispute — those findings. It says that internally, in tests of an updated version of Rekognition, it observed “no difference” in gender classification accuracy across all ethnicities. And it notes that the paper in question failed to make clear the confidence threshold used in the experiments.
That’s not to suggest the problem is isolated to Amazon.
A study in 2012 showed that facial algorithms from vendor Cognitec performed 5 to 10 percent worse on African Americans than on Caucasians, and researchers in 2011 found that facial recognition models developed in China, Japan, and South Korea had difficulty distinguishing between Caucasian faces and those of East Asians. In February, researchers at the MIT Media Lab found that facial recognition programs made by Microsoft, IBM, and Chinese company Megvii misidentified gender in up to 7 percent of lighter-skinned females, up to 12 percent of darker-skinned males, and up to 35 percent of darker-skinned females.
Call to action
Amazon, for its part, says it’s continually working to improve the accuracy of Rekognition by making funding available for research projects and staff through the AWS Machine Learning Research Grants — most recently through a “significant update” in November 2018. (The company says it’s now on its fourth significant version update of Rekognition.) And it says it is “interested” in establishing standardized tests for facial analysis and facial recognition and in working with regulators to guide the technology’s use.
Amazon is not the only company expressing willingness to address these issues.
In a recent blog post announcing support for the Asia Pacific AI for Social Good Research Network and highlighting Google’s efforts to use artificial intelligence (AI) to combat disease and natural disasters, Kent Walker, senior vice president of global affairs, wrote that Google wouldn’t offer a “general-purpose” facial recognition API through Google Cloud until the “challenges” had been “identif[ied] and address[ed].”
And late last year at an event in Washington, D.C. hosted by the Brookings Institution, Microsoft president Brad Smith proposed that people should review the results of facial recognition in “high-stakes scenarios,” such as when they might restrict a person’s movements. He added that groups using facial recognition should comply with anti-discrimination laws regarding gender, ethnicity, and race and that companies need to be “transparent” about AI’s limitations.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.