

Today Microsoft president Brad Smith called for federal regulation of facial recognition software.

“In a democratic republic, there is no substitute for decision making by our elected representatives regarding the issues that require the balancing of public safety with the essence of our democratic freedoms. Facial recognition will require the public and private sectors alike to step up — and to act,” Smith wrote in a blog post.

Recent events explain why Smith is speaking out now.

Last month, while the majority of U.S. citizens were outraged by the separation of families who unlawfully entered the United States, Microsoft was criticized by the public and hundreds of its own employees for its contract with Immigration and Customs Enforcement (ICE). Smith says Microsoft did not share facial recognition software with ICE, despite a January blog post that suggested that was a possibility.


And in May, we learned that Amazon Web Services is selling its Rekognition facial recognition service to law enforcement in cities like Orlando, a development opposed by the American Civil Liberties Union (ACLU) and other organizations. A New York Times roundup of facial recognition developments in China this week has also encouraged discussion of the topic.

Smith offered few constructive suggestions about how this regulation should be shaped, but here are a few things that should be part of any potential legislation to regulate the use of facial recognition software in the United States.

Accuracy standards

This week, Brian Brackeen, CEO of facial recognition software company Kairos, shared his thoughts with the Congressional Black Caucus on the use of facial recognition software by tech companies and government entities.

Just as there are standards for food or drugs made available for public consumption, Brackeen argues standards should be put in place for the accuracy of facial recognition software.

“One of the things we talked about is auditability, so essentially being able to say, ‘OK, any tool needs to work evenly across American society,'” he said in an interview with VentureBeat. “We have to have AI tools that are not going to false-positive on different genders or races more than others, so let’s create some kind of margin of error and binding standards for the government.”

Brackeen testified alongside Joy Buolamwini, an MIT researcher whose work earlier this year found that facial recognition software from Microsoft, Face++, and others was deficient at recognizing women of color. Microsoft has since made some improvements to its Face API.

Brackeen wrote in an op-ed last month that facial recognition software is not yet ready for use by law enforcement agencies, and earlier this year he vowed to open-source datasets that would help other systems better recognize people of color.

The Electronic Frontier Foundation doesn’t believe police should be permitted to use facial recognition software at all, but if it must be used, it argues standards are necessary.

“In the absence of a ban, we think there should be very stringent regulation,” senior staff attorney Adam Schwartz told VentureBeat in a phone interview. “First and foremost, police should be getting a warrant from a judge based on probable cause of serious crimes before using [facial recognition], and we think police shouldn’t be using it unless the equipment they’re using is certified to have an error rate lower than 5 percent.”
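
To make the idea of binding accuracy standards concrete, here is a minimal, illustrative sketch of what such an audit could look like in practice: it computes a matcher's error rate for each demographic group, checks the worst rate against a certification ceiling (using the 5 percent figure Schwartz mentions), and flags any gap between groups that exceeds a parity margin. The field names, the parity margin, and the structure of the audit are assumptions made for this example, not part of any actual proposed standard or vendor API.

```python
# Illustrative audit sketch for a face-matching system.
# MAX_ERROR_RATE mirrors the 5 percent ceiling cited by the EFF above;
# PARITY_MARGIN and all field names are hypothetical, chosen for this example.
from collections import defaultdict

MAX_ERROR_RATE = 0.05   # certification ceiling on any group's error rate
PARITY_MARGIN = 0.01    # hypothetical allowed gap between best and worst group


def audit(results):
    """results: iterable of dicts with 'group', 'predicted_match', 'actual_match' keys."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for r in results:
        totals[r["group"]] += 1
        if r["predicted_match"] != r["actual_match"]:
            errors[r["group"]] += 1

    # Per-group error rates, plus pass/fail against the two thresholds.
    rates = {group: errors[group] / totals[group] for group in totals}
    worst = max(rates.values())
    gap = worst - min(rates.values())
    return {
        "per_group_error_rate": rates,
        "passes_accuracy_ceiling": worst <= MAX_ERROR_RATE,
        "passes_parity_margin": gap <= PARITY_MARGIN,
    }


if __name__ == "__main__":
    sample = [
        {"group": "A", "predicted_match": True, "actual_match": True},
        {"group": "B", "predicted_match": True, "actual_match": False},
    ]
    print(audit(sample))
```

A real certification regime would need far more than this, of course, including representative test data and an agreed definition of error, but even a simple per-group breakdown makes the "works evenly across American society" requirement measurable.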

Cybersecurity standards

Standards should not stop at minimally acceptable accuracy rates. Brackeen said Kairos is attacked by Russian or Chinese bots a few hundred times a day, and he argues that cybersecurity standards should also apply to companies deploying facial recognition software, to guard against misuse by foreign adversaries.

Lawmakers should also consider, Brackeen said, whether facial recognition databases should be shared with startups that have ties to foreign governments, like Face++ in China. You can take that with a grain of salt, since the two companies are competitors, but he says the practice “starts to be a real clear and present danger to Americans in their data and sovereignty.”

No hunting political dissidents or protestors

Deployment of facial recognition software at protests could dampen free expression and the right to lawfully protest. It’s a short jump from there to targeting specific communities, as we saw in the past when Geofeedia gave law enforcement agencies access to social media data.

Likewise, facial recognition paired with predictive policing methods that claim to establish causality, or that assign guilt by association, has clear potential for misuse and oppression, especially when predictive policing relies on historically biased data.

Disclosure when your picture is stored in a facial recognition database

If a government agency takes your picture for use in a facial recognition database, you should be told beforehand. Tech companies and the FBI may have massive datasets, but the largest facial recognition datasets in the U.S. today are likely driver’s license photos taken by state Departments of Motor Vehicles. Even though the photo is required to get a license, people should be notified that it may be used this way.

“From that information if they want to talk to their member of Congress or mayor or whatever and participate in the process they can, but if they don’t know that’s happening, they can’t exercise their franchise,” he said.

Because facial recognition can be used for everything from better experiences in autonomous cars to payments to oppression, an all-encompassing view of what federal regulation of the technology should include is hard to pin down.

The dystopian scenarios are easy to imagine, particularly if you’re familiar with historical examples of government overreach or of those in power using technology to oppress others or maintain power structures.

That’s part of the reason why Brad Smith called for federal regulation today and why Dr. Safiya Noble, author of Algorithms of Oppression, goes so far as to call AI one of the biggest human rights issues of the 21st century.

But what about missing kids?

Earlier this year, in trials carried out by the Delhi Police Department over the span of four days, nearly 3,000 missing children in India were located using facial recognition software. Consider the same approach applied to other vulnerable populations, like elderly people with dementia or people caught up in sex trafficking or modern-day slavery.

The use cases for facial recognition software are so broad that it can be tough to decide which ones should be regulated. It’s somewhat easier to identify the uses that should be off limits, in order to protect personal freedoms and avoid the dystopia that so many people wary of intelligent machines believe is unavoidable.

As Smith points out, tech companies bear responsibility, but decisions about how facial recognition software should be used cannot simply be passed on to tech companies. Regulation of rapidly evolving technology belongs in the hands of elected officials. That’s true, and lawmakers and candidates running for election this year have questions to answer about the regulation they think should be enacted. Ultimately, it’s up to voters to decide who they want to represent them in Washington and which uses of facial recognition software they find acceptable.

For AI coverage, send news tips to Kyle Wiggers and Khari Johnson — and be sure to bookmark our AI Channel.

Thanks for reading,

Khari Johnson
AI Staff Writer

P.S. Enjoy this video of a panel discussion earlier this week hosted by Politico to discuss the government’s role in artificial intelligence.

From VB

Microsoft 365 launches live events with facial recognition and speech-to-text transcripts

Microsoft 365 can now host live events, and the new service comes equipped with AI-powered features such as facial recognition of attendees and autonomous speech-to-text conversion so participants can search video transcripts. Once an event is over, employees can skim video not just using timestamps or specific words, but by scanning through the faces of […]

Read the full story

Einride’s T-log is an autonomous truck, but only for logs

Swedish company Einride will introduce the T-log, an autonomous logging truck, by 2020.

Read the full story

Facebook open-sources AI that navigates New York City streets with 360-degree images

Facebook AI researchers have created a pair of AI systems that navigate the streets of New York City using only 360-degree images, natural language, and a map with local landmarks like banks and restaurants for guidance.

Read the full story

Salesforce Einstein bots for businesses now generally available

Salesforce is making its automated Einstein bots generally available for conversations with customers on websites and apps today. Einstein bots and a series of conversational AI services were first introduced for a pilot program last year at Dreamforce. Process automation tool Lightning Flow, which also debuted at Dreamforce last year, is also being made generally […]

Read the full story

Apple consolidates machine learning and Siri teams 

Apple is consolidating its Core ML and Siri teams under a new artificial intelligence and machine learning division headed by John Giannandrea, a former Google executive who joined the company in April.

Read the full story

Researchers develop AI that predicts developmental disabilities in infants

Researchers at the University of Southern California and the Universidad Carlos III de Madrid developed a system that uses motion sensor data and machine learning algorithms to predict developmental disabilities in infants.

Read the full story

Smart speaker adoption expected to grow 6 times by 2022, with Apple trailing

A new report projects that the smart speaker market will continue its torrid growth, with Google and Amazon battling for dominance and Apple stuck in a distant third place. According to Canalys, the number of smart speakers in use will come close to 100 million by the end of 2018, up from under 50 million […]

Read the full story

Beyond VB

How artificial intelligence will reshape the global order

The Coming Competition Between Digital Authoritarianism and Liberal Democracy (via Foreign Affairs)

Read the full story

Japanese firm developing hands-free ‘flying umbrella’

An information technology company is developing a drone-based “flying umbrella” that users do not have to hold. (via The Straits Times)

Read the full story

The rise of ‘pseudo-AI’: how tech firms quietly use humans to do bots’ work

Using what one expert calls a ‘Wizard of Oz technique’, some companies keep their reliance on humans a secret from investors (via The Guardian)

Read the full story

How artificial intelligence could prevent natural disasters

On May 27, a deluge dumped more than 6 inches of rain in less than three hours on Ellicott City, Maryland, killing one person and transforming Main Street into what looked like Class V river rapids, with cars tossed about like rubber ducks. (via Wired)

Read the full story
