Microsoft president Brad Smith said today that the company will not sell facial recognition to police in the United States until there is a "national law in place grounded in human rights that will govern this technology." Smith made the remarks during a Washington Post Live event; a Microsoft spokesperson confirmed the statement but offered no additional details.

With IBM’s exit from the facial recognition business at the start of the week, Amazon and Microsoft were two of the biggest companies known to still make facial recognition tech available to police departments and government agencies. Then Amazon announced on Wednesday that it will pause sales to police for one year.

Smith repeated a call, made in recent years by Amazon, IBM, and a number of privacy and racial justice advocates, for Congress to pass national facial recognition regulation. Smith first urged Congress to regulate facial recognition nearly two years ago, and Microsoft has played a role in crafting privacy and facial recognition legislation in places like California and its home state of Washington.

“[I]f all of the responsible companies in this country cede this market to those that are not prepared to take a stand, we won’t necessarily serve the national interest or the lives of the black and African people of this nation as well. We need Congress to act, not just tech companies alone. That’s the only way we will guarantee that we will protect the lives of people,” Smith said today.

Amazon and Microsoft may have temporarily halted sales to police, but several smaller or lesser-known companies continue to sell facial recognition to police departments.

Members of Congress have put forward a range of bills addressing AI regulation, including facial recognition measures of the kind that received bipartisan support in the House of Representatives earlier this year and in hearings dating back to May 2019, but no such bill has become law.

Former House Oversight and Reform Committee chair Elijah Cummings (D-MD), who passed away last year, summed up some basic tenets of facial recognition regulation in a statement congressional staff provided to VentureBeat before his death. "I believe there should be front-end accountability for law enforcement's use of facial recognition technology. I also believe that people should be informed of their participation in a facial recognition technology system and should be able to 'opt-in' when possible," Cummings said. "This technology is evolving extremely rapidly, without any [real] safeguards, whether we are talking about commercial use or government use. There are real concerns about the risks that this technology poses to our civil rights and liberties, and our right to privacy."

Cummings' concern with facial recognition stemmed in part from its use during protests in Baltimore after the killing of Freddie Gray by police in 2015. Other committee members and experts testifying before Congress expressed concern about facial recognition use at political rallies, the tracking of people's movements with live facial recognition, and the possibility of a chilling effect on protests and freedom of speech.

An assessment of police use of facial recognition by Georgetown University's Center on Privacy and Technology found that roughly half of all U.S. adults are included in a facial recognition network used by law enforcement and that one in four police officers have access to the technology. The assessment also found that few or no guidelines govern that use, and that the proliferation of facial recognition adversely impacts African Americans.

In June 2019 testimony before the committee, a Government Accountability Office (GAO) official said the FBI had not yet complied with a number of recommendations made in 2015 for auditing and assessing facial recognition and image data provided by about 20 states.

