When it comes to facial recognition software, people trust tech companies and advertisers less than they do police, but attitudes vary based on age, race, gender, and political party, according to a Pew Research survey released today.

The survey is the first from Pew Research to assess the attitudes of U.S. adults toward facial recognition software, Pew Research Data Labs director Aaron Smith told VentureBeat in a phone interview.

Overall, 56% of respondents trust law enforcement somewhat or a great deal to use facial recognition responsibly, compared to 36% for tech companies and 18% for advertisers, according to a survey of more than 4,200 U.S. adults conducted in early June.

“It doesn’t necessarily run counter to any of the previous research we’ve done around people’s views of the police more broadly. But I think, given the amount of coverage of various instances of law enforcement using facial recognition technology in somewhat problematic ways over the last month or so, in a broad sense that general finding is clearly the one from the survey results that is quite noteworthy,” Smith said.

The survey also shows distinctions between attitudes when factoring in age, gender, race, and political affiliation.

By race, 61% of white respondents, 56% of Hispanic respondents, and 43% of African-American respondents trust police. In contrast, black respondents trust tech and advertising companies more than white respondents do.

By political party, 65% of Republicans or Republican-leaning respondents trust police, while 51% of Democrats or Democratic-leaning respondents feel the same. This partisan split is consistent with a Pew survey on public institutions and trust released in July, which found a 20-percentage-point gap (69% versus 89%) in public confidence in police.

The split is also reflected in the fact that Democratic-leaning state legislatures in places like Michigan, New Jersey, and California are currently considering legislation to place a moratorium on facial recognition use.

Men trust facial recognition more than women do, and older respondents were generally more likely than younger respondents to trust that police would use facial recognition software in a responsible way.

In a multiple-choice question about use cases, police assessing security threats in public spaces was deemed most acceptable (59%), while advertisers using facial recognition to measure audience reaction topped the list of unacceptable uses (54%).

Approximately 13% of respondents said they had never heard of facial recognition software. People with college diplomas were more likely to know what facial recognition software is than those with only high school diplomas.

The survey did not ask about attitudes toward particular systems, such as offerings from Microsoft or Amazon. The report is part of a larger Pew Research Center initiative to explore digital privacy attitudes in the United States.

Recent months have brought horror stories of facial recognition misuse, from the NYPD using a picture of Woody Harrelson to arrest a suspect with similar features, to the technology’s use in the detainment of more than 1 million Uighur Muslims by authorities in China.

Lawmakers in the U.S. Congress and the EU Commission are currently considering regulation of facial recognition software, while cities like San Francisco; Oakland; and Somerville, Massachusetts, have passed bans on facial recognition use by police or city departments.

The FBI also came under fire from Congress and the Government Accountability Office in June for failure to implement a number of measures to protect people’s privacy and assess the accuracy of its facial recognition system that draws from DMV photo databases in more than 20 states.

San Francisco passed its ban in May, Somerville in June, and Oakland in July. In each city, residents and activists urged their local lawmakers to adopt a ban due to fear of privacy intrusions, misuse by local or federal authorities, overpolicing of communities of color, and potential misidentification.

Audits performed in 2018 and earlier this year found that popular facial recognition systems often misidentify people of color, and women with dark skin tones in particular. Tests of Amazon’s Rekognition found that it misidentified members of Congress and the California state legislature as criminals and was significantly more likely to misidentify people of color.

In other recent facial recognition news, a controversial plan for facial recognition in London’s crowded King’s Cross transit station was abandoned this week, and Facebook extended facial recognition to all of its users while simultaneously giving them the option to disable the feature.

A poll released Monday that measures facial recognition attitudes in the United Kingdom found that 55% of adults want restrictions placed on police use of facial recognition technology.

Today’s news comes shortly before the release of Google’s Nest Hub Max, a smart display with facial recognition, and possibly the Pixel 4, which is also expected to use facial recognition.