Facial recognition technology is a controversial topic. Scores of investors and consumer advocacy groups have rallied against Amazon for providing its face-detecting artificial intelligence (AI) to local law enforcement. In committee hearings, members of the House of Representatives took the FBI to task for using a facial ID system with an error rate of nearly 15 percent. And Facebook has come under fire for applying facial recognition to photos without users’ permission.
Regulatory ambiguity concerning the deployment of facial recognition tech has companies like Microsoft inviting the government to weigh in. In a blog post today, Microsoft president Brad Smith called on lawmakers to investigate face-detecting algorithms and craft policies guiding their usage.
Smith and Harry Shum, Microsoft’s AI chief, published a treatise earlier this year predicting that advances in AI would require new laws. But Smith’s post today marks the first time the Redmond company has explicitly advocated for the regulation of facial recognition systems and strikes a different tone than that of competitors like Amazon, which said in June that it was incumbent on the private sector to “act responsibly” in employing AI technologies.
“Demands increasingly are surfacing for tech companies to limit the way government agencies use facial recognition and other technology,” Smith wrote. “In a democratic republic, there is no substitute for decision-making by our elected representatives regarding the issues that require the balancing of public safety with the essence of our democratic freedoms … We live in a nation of laws, and the government needs to play an important role in regulating facial recognition technology.”
Facial recognition is becoming “deeply infused” in our society, Smith points out — which isn’t necessarily a bad thing. Police in India used it to track down more than 3,000 missing children in four days, and local authorities used a facial recognition database to identify the suspect in last month’s deadly Capital Gazette shooting. But that doesn’t mean there isn’t potential for abuse.
“Imagine a government tracking everywhere you walked over the past month without your permission or knowledge. Imagine a database of everyone who attended a political rally that constitutes the very essence of free speech. Imagine the stores of a shopping mall using facial recognition to share information with each other about each shelf that you browse and product you buy, without asking you first. This has long been the stuff of science fiction and popular movies — like Minority Report, Enemy of the State, and even 1984 — but now it’s on the verge of becoming possible,” Smith said.
It’s also an imperfect technology. Some facial ID systems perform measurably worse on African-American faces than on Caucasian faces, Smith notes, and others have a tougher time identifying women than men. (In February, a paper coauthored by Microsoft researcher Timnit Gebru showed error rates as high as 35 percent for systems classifying darker-skinned women.)
“Even if biases are addressed and facial recognition systems operate in a manner deemed fair for all people, we will still face challenges with potential failures,” Smith said. “Facial recognition, like many AI technologies, typically have some rate of error even when they operate in an unbiased way.”
Smith didn’t call for specific laws or ethical principles, but posed a series of questions for regulators to consider, including “Should law enforcement use of facial recognition be subject to human oversight and controls?” and “[S]hould we ensure there is civilian oversight and accountability for the use of facial recognition as part of governmental national security technology practices?”
As for how regulations might come to pass, Smith believes that legislators should “use the right mechanisms” to gather expert advice to inform their decision-making. That includes the appointment of bipartisan expert commissions — specifically those that build on work done by academics and the public and private sectors.
“The purpose of such a commission should include advice to Congress on what types of new laws and regulations are needed, as well as stronger practices to ensure proper congressional oversight of this technology across the executive branch,” he wrote.
Smith was careful to note that congressional regulation, if and when it arrives, won’t mean that firms can abdicate their own responsibilities. He called on tech companies to investigate ways to reduce the risk of bias in facial recognition technology, to take a “principled” and “transparent” approach to developing face-detecting systems, to move more slowly and deliberately in the deployment of facial recognition tech, and to participate in a “full” and “responsible” manner in public policy debates regarding this technology.
Microsoft, for its part, has created an internal advisory panel called the Aether Committee to review its use of artificial intelligence and has published a set of ethical principles for the development of its AI technologies. It also says it has turned down client requests to deploy facial recognition technology “where we’ve concluded there are greater human rights risks.” (Microsoft declined to provide details.)
“All tools can be used for good or ill,” Smith wrote. “The more powerful the tool, the greater the benefit or damage it can cause … Facial recognition technology raises issues that go to the heart of fundamental human rights protections like privacy and freedom of expression.”
Smith’s blog post comes at a time when Microsoft, Google, Salesforce, and other technology companies face intense criticism for supplying tools and expertise to controversial programs. Microsoft, bowing to public pressure, canceled a contract with U.S. Immigration and Customs Enforcement (ICE) in June. And Google employees protested the company’s involvement in Project Maven, a Defense Department program that sought to infuse drone footage with an object recognition system.