An Association for Computing Machinery (ACM) tech policy group today urged lawmakers to immediately suspend use of facial recognition by businesses and governments, citing documented ethnic, racial, and gender bias. In a letter (PDF) released today by the U.S. Technology Policy Committee (USTPC), the group acknowledges that the technology is expected to improve but argues it is not yet “sufficiently mature” and therefore poses a threat to people’s human and legal rights.
“The consequences of such bias, USTPC notes, frequently can and do extend well beyond inconvenience to profound injury, particularly to the lives, livelihoods, and fundamental rights of individuals in specific demographic groups, including some of the most vulnerable populations in our society,” the letter reads.
Organizations studying the use of the technology, like the Perpetual Lineup Project from Georgetown University, conclude that broad deployment of the tech will negatively impact the lives of Black people in the United States. Privacy and racial justice advocacy groups like the ACLU and the Algorithmic Justice League have supported halts to the use of facial recognition in the past, but with nearly 100,000 members worldwide, ACM is one of the largest computer science organizations in the world. ACM also hosts large annual conferences like SIGGRAPH and the International Conference on Supercomputing (ICS).
The letter also prescribes principles for facial recognition regulation, addressing issues like accuracy, transparency, risk management, and accountability. Recommended principles include:
- System error rates must be disaggregated by race, gender, sex, and other appropriate demographics
- Facial recognition systems must undergo third-party audits and “robust government oversight”
- People must be notified when facial recognition is in use, and appropriate use cases must be defined before deployment
- Organizations using facial recognition should be held accountable if or when a facial recognition system causes a person harm
The letter does not call for a permanent ban on facial recognition, but rather a temporary moratorium until accuracy standards for race and gender performance, as well as laws and regulations, can be put in place. Tests of major facial recognition systems in 2018 and 2019 by the Gender Shades project and then the Department of Commerce’s NIST found that facial recognition systems exhibited race and gender bias, as well as poor performance on people who do not conform to a single gender identity.
The committee’s statement comes at the end of what’s been a historic month for facial recognition software. Last week, members of Congress from the Senate and House of Representatives introduced legislation that would prohibit federal employees from using facial recognition and cut funding for state and local governments that choose to continue using the technology. Lawmakers at the city, state, and national levels considering regulation of facial recognition frequently cite bias as a major motivator to pass legislation against its use. And Amazon, IBM, and Microsoft halted or ended sales of facial recognition to police shortly after the height of Black Lives Matter protests that spread to more than 2,000 cities across the U.S.
Citing race and gender bias and misidentification, Boston became one of the biggest cities in the U.S. to impose a facial recognition ban. That same day, the story of Detroit resident Robert Williams came to light; he is thought to be the first person falsely arrested and charged with a crime because of faulty facial recognition. Detroit police chief James Craig said Monday that the facial recognition software Detroit uses is inaccurate 96% of the time.