The San Francisco Board of Supervisors voted 8-1 today to approve the Stop Secret Surveillance ordinance, which outlaws the use of facial recognition software or retention of information obtained through facial recognition software systems. A second reading and vote will take place at a May 21 Board of Supervisors meeting to officially approve or reject the ordinance, according to the city clerk’s office.

Supervisor Catherine Stefani, the sole vote against the ordinance, said the amendments failed to address her questions and concerns related to public safety. Once the ordinance passes, San Francisco will become the first city in the United States to outlaw the use of facial recognition software by city departments, including the San Francisco Police Department.

“The propensity for facial recognition technology to endanger civil rights and civil liberties substantially outweighs its purported benefits, and the technology will exacerbate racial injustice and threaten our ability to live free of continuous government monitoring,” the ordinance reads.

Ordinance author Supervisor Aaron Peskin called facial recognition a “uniquely dangerous technology” and cited facial recognition software being used to track the Uighur population in Western China and an ACLU test of Amazon’s Rekognition that misidentified 28 members of Congress as criminals.

Peskin called the ordinance an attempt to balance security with the need to guard against a surveillance state.

“This is really about saying [that] we can have security without being a security state. We can have good policing without being a police state,” he said.

The legislation, which amends the city administrative code, will require city departments to create policies governing surveillance technology use. City departments are also required to submit annual surveillance reports that explain how they use devices like license plate readers, drones, or sensor-equipped streetlights.

Acquisition of new surveillance technology will require approval by the Board of Supervisors, and if new tech is approved, city departments will be required to adopt “data reporting measures” to “empower the Board of Supervisors and the public to verify that mandated civil rights and civil liberties safeguards have been strictly adhered to.”

Several human rights, privacy, and racial justice organizations supported the ordinance, citing deadly interactions with police that have occurred in the San Francisco Bay Area in recent years.

A group that sent a joint letter in support of the ordinance last month includes the ACLU of Northern California, the Asian Law Alliance, the Council on American Islamic Relations, Data for Black Lives, Freedom of the Press Foundation, and the Transgender Law Center.

During a Rules Committee meeting last month, many group members cited audits of facial recognition systems that found deficits in their ability to recognize women and people of color. That same criticism has been frequently lobbed at companies like Amazon and Microsoft, which in the past year have tested or sold their facial recognition AI to law enforcement and government agencies.

Others who spoke in support of the ordinance talked about fears of misuse of such tech not just by local police, but by the Department of Homeland Security’s ICE, which detains people who are in the United States without a visa, citizenship, or green card. San Francisco is a sanctuary city.

The ordinance does not define what form of public input city departments should seek with regard to use of surveillance tech — it only states that public hearings are required.

The group also opposed an exemption that allows the San Francisco County Sheriff’s Department and District Attorney to acquire new surveillance tech if either provides the City Controller with written justification for why it is necessary to carry out prosecution. Exemptions can also be made in life-threatening exigent circumstances.

Some people oppose the ordinance as written out of concern that video or information captured by private video cameras that deploy facial recognition software could not be shared with police without approval.

More than a dozen letters were sent to the Board of Supervisors by members of the group Stop Crime SF requesting an amendment to portions related to sharing video with police.

“Many in our residential and commercial neighborhoods have private security cameras whose video footage is readily available to the SFPD to support their efforts to catch criminals, especially auto burglars and package thieves. Supporting the SFPD is the primary — if not the only — reason why we have these private video cameras,” local resident Peter Fortune said in a letter.

The Stop Secret Surveillance ordinance was first proposed in January by Supervisor Aaron Peskin. Cosponsors include Supervisor Shamann Walton, who represents the historically African American Bayview-Hunters Point neighborhood, and Supervisor Hillary Ronen, who represents the historically Latinx Mission District.

The passage of the ordinance comes as a number of government bodies are forming their own policies for the acquisition or use of AI systems.

A bipartisan group of U.S. senators last week resubmitted the AI in Government Act, which is aimed at creating federal standards. The U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) is also exploring the formation of federal standards, as directed by President Trump’s American AI Initiative executive order.

Outside the U.S., the European Commission recently enacted an AI ethics pilot program, and the World Economic Forum will convene its first Global AI Council later this month.

Only 33 of 193 United Nations member nations have enacted national AI policies, according to FutureGrasp, an organization working with the U.N.