Over the past few months, high-profile incidents in the United Kingdom — home to one of the most-surveilled societies in the world — have forced people to consider how facial recognition will be used in the country. The fact that Brexit is taking up most of the oxygen in the room hasn’t made the debate any easier, but in conversations with VentureBeat, three experts from different backgrounds — Ada Lovelace Institute director Carly Kind; U.K. surveillance camera commissioner Tony Porter; and University of Essex professor Daragh Murray, who studies police use of facial recognition — all agree that the U.K. needs to find a middle ground when it comes to facial recognition policy.
They further agree that years of Brexit debate have stifled necessary reform and that leaving the European Union could have long-term consequences as police and businesses continue experimenting with facial recognition in the U.K. They also worry that an inability to take appropriate action could lead to calls for a ban or overregulation of the technology, or to far more dystopian scenarios of omnipresent surveillance.
Global proliferation of facial recognition
As a symbol for the fear of technology trampling human rights, The Terminator’s got serious competition. Facial recognition’s deeply personal and pervasive nature has already made it a major issue around the globe, heightened by advances in AI that now make it work in real time.
In democratic societies worldwide, facial recognition is challenging lawmakers to confront how AI will shape society and is redefining attitudes toward artificial intelligence. Its use in Hong Kong to identify protestors and in Western China to find and persecute Uighur Muslims has inflamed fears of dystopian levels of surveillance that you cannot avoid without hiding your face from the world.
Lately, a ban on masks in Hong Kong, coupled with the requirement that internet and smartphone users perform facial recognition scans, has kept China prominently in the news.
Cities like Shenzhen and Shanghai currently have more surveillance cameras per capita than anywhere else in the world, but analysis by Comparitech in August found that London has the highest number of surveillance cameras of any city outside of China. And estimates by Big Brother Watch suggest Britain has the second-most surveillance cameras per capita of any country in the world.
Use of AI-powered surveillance technology is on the rise globally, according to a recent Carnegie Endowment for International Peace report. And it’s likely to grow, with initiatives like Huawei’s Smart Cities program for 5G being exported to countries around the world, as well as facial recognition databases being considered in countries like France and India.
Facial recognition trials in a Swedish school and in a Berlin train station, as well as plans to create a national ID in France, have all made the news this year, but the size of the U.K. surveillance camera network makes it ground zero for the facial recognition debate.
But while live facial recognition deployments continue in the U.K., Britain has been busy with Brexit as it tries to avoid, delay, or carry out plans to leave the European Union. Still on the table is a no-deal Brexit, a scenario with cascading, unforeseen consequences.
Exactly what’s going to happen in the U.K. as a result of Brexit still seems unclear to even the most well-informed people, but to get an idea of the way Brexit has already shaped facial recognition debates — and how it will likely continue to do so — VentureBeat spoke to three individuals with distinct vantage points on the unfolding saga.
Carly Kind and Europe’s potential “third way”
Carly Kind is director of the Ada Lovelace Institute. Named for the creator of the first algorithm, it began about a year ago with support from the Alan Turing Institute; The Royal Society; the British Academy; and techUK, an organization that represents the tech industry in the U.K. The institute intentionally receives no funding from tech giants.
“In AI ethics conversations, if you follow the money trail you find Google and Facebook money in much of the ethics community, so we think it’s quite important to be independent from private sector funding,” Kind said.
As the first major initiative of the organization, which was built to marry research with policy, the Ada Lovelace Institute chose to focus on facial recognition technology in the U.K.
Last month, the institute released what’s thought to be the first survey to measure British attitudes toward facial recognition software. In the study, a majority of about 4,100 U.K. adults surveyed said they want the government to place restrictions on police use of the technology, but nearly 50% support its use by police if safeguards are in place.
At the same time, 50% support a voluntary moratorium on police use, and 70% support a voluntary moratorium in schools.
A Pew Research poll also released in early September found that a narrow majority of U.S. adults (56%) trust police to use facial recognition, while far fewer trust tech companies (36%) or advertisers (18%) with that data.
In a Financial Times op-ed published a short time later, Kind called for a facial recognition moratorium to avoid “sleepwalking into widespread deployment of facial recognition” without pausing to consider how the technology will affect legal and societal relationships and what protections need to be put in place first.
“The other [reason] is I think we want to avoid reactionary bans of these new technologies born out of fear. Our survey about attitudes toward facial recognition shows that people do want this technology to be used in places where there’s a clear benefit in terms of public safety and where there are safeguards in place, so I don’t think the public wants bans on the technology any more than the private sector wants bans on the technology, and yet we’ve seen that has been an approach by some, like in San Francisco,” she said.
In the U.S., the San Francisco Board of Supervisors passed a ban on government use of facial recognition technology in May, followed by Somerville, Massachusetts and Oakland, California in June and July, respectively.
In the U.K., members of parliament called for a moratorium in July, but Kind is proposing instead a voluntary delay by tech companies, akin to the kind British insurance agencies put in place for genetic testing in the 1990s.
The U.K. has played a role in legitimizing surveillance technologies in the past, Kind said.
“We have quite an important legitimizing impact in other countries, and that’s why it’s very important to get it right here, I would argue. And that’s not necessarily to say the U.K. is explicitly trying to be a world leader in facial recognition — that’s not the point,” Kind said. “The point is that I think it’s almost as if the U.K. has become a testing ground. If it becomes widespread here, then other countries might follow suit. And if it does become widespread here, then we need to make sure it’s done properly the first time around.”
Any such regulation is likely to amend the Investigatory Powers Act 2016 or the Protection of Freedoms Act 2012, laws that define proper use by the state, or the Data Protection Act 2018, which governs use by private vendors.
However, Kind laments that lawmakers have made little progress when it comes to putting legal frameworks in place, due to the overriding demands of Brexit and its potential ramifications.
“There’s not been really any legislative progress in the U.K. for the last three years on anything that isn’t Brexit,” she said. “I think there’s a strong likelihood Brexit has paralyzed the ability of the legislature to keep up with technology, which is already not the thing it’s best at doing.”
In addition to continuing talks to negotiate a voluntary moratorium, the Ada Lovelace Institute will push for overarching legislation to regulate not just public and private sector use, but other emerging surveillance technology, as well.
As the U.K. begins to prepare for an October 31 exit from the European Union or extension request, the European Commission is reportedly planning to introduce facial recognition regulation that could grant EU citizens rights over their facial recognition data and the right to know when facial recognition is being used.
The development essentially means that under a no-deal Brexit the U.K. would no longer be bound by GDPR privacy law or future EU facial recognition regulation, and as a result could inadvertently fall behind Europe on privacy protections, Kind said.
Lawmakers in the U.S. are also reportedly planning to introduce bipartisan facial recognition legislation, Vox reported in August.
Despite the difficulty of predicting fallout from Brexit, Kind still believes the U.K. can join the EU in defining a third way for AI. She says this can be accomplished by crafting an AI ecosystem and vision for the technology that’s different from the generally more corporate approach taken in the United States and the state-driven approach in China.
“I think there’s a strong argument for that being the EU’s unique value-add to the AI sector,” Kind said. “I think the challenge will be to see how — from an industry perspective and from a research perspective — there’s appetite there to pursue that. Certainly that’s what we’ll be advocating for.”
But Kind sees this effort as extending beyond political cooperation between the U.K. and its neighbors.
“I think there’s a really strong argument that for long-term, sustainable success of AI, we should take this approach, and I think the people that we need to convince are those building the technology and procuring the technology and selling it as well, and that means really bringing the private sector to the table.”
Tony Porter and police compliance with surveillance law
Tony Porter is the U.K.’s surveillance camera commissioner, a position created five years ago by parliament after the government recognized the ubiquitous nature of surveillance in the U.K.
The unique role requires him to wear many hats.
Porter, who sits on the Home Office’s biometrics advisory board, advises government ministers and local police and national law enforcement units on how to remain compliant with national legal guidelines, as well as advising courts and privacy bodies within the European Union.
But his main job is to ensure police and government agencies comply with a set of 12 principles laid out in the surveillance camera code of practice, such as the need for clear rules and policies for surveillance camera use, steps to “safeguard against unauthorized access and use,” and establishment of policies and procedures before surveillance camera systems are installed. The principles derive their power in part from the U.K.’s Protection of Freedoms Act 2012.
Non-compliance with those principles can result in footage being challenged in court, where a judge decides whether it can be used as evidence. That carries serious consequences, because the vast majority of homicide trials in the U.K. rely on CCTV footage as evidence, Porter said.
Porter was unable to verify claims that the U.K. has the second-largest number of surveillance cameras per capita in the world, but he pointed to a 2013 estimate that the U.K. has roughly 5 million surveillance cameras, a number that’s likely grown significantly with the proliferation of body cameras, drones, video doorbells, and cameras on vehicles.
“The U.K. doesn’t have the same sensitivities to that kind of crime prevention [that] certainly other European countries have. Many academics and social commentators have put [that] down to the fact that in the U.K., we’ve never been under the boot of a fascist dictatorship. The notion of surveillance to protect the community, as opposed to invading on its privacy, is more easily tolerated,” he said.
Following a recent High Court ruling that found South Wales Police’s use of facial recognition to be lawful, Porter expects more law enforcement agencies in the U.K. to reassess their position and potentially begin their own trials.
Porter expects the proliferation of facial recognition technology to make his job more difficult, as “more bespoke or unique incidents of its use” come into play, the technology’s legitimacy is challenged, and advisory demands require additional resources. But he cautions against hyperbole when outlining the issues, saying law enforcement agencies tend to be responsive when told they aren’t complying with the 12 principles.
“There are those people who step forward and scream from the rooftops that this is a chilling technology, [that] all is lost and the sky is falling, when actually you’ve got police services that are on their own volition saying ‘No, this isn’t right’ and [stopping]. And the question I’d ask is ‘Would that happen in China?’ or ‘Would that happen in an oppressive state?’ No, it wouldn’t,” he said. “What the public are calling for is a very clear regime of oversight in government so that they have assurance that you haven’t got private retail running off with watch lists and doing what they want with it.”
Porter is also advising lawmakers to toughen oversight and to pass or reform laws covering surveillance technology in general, not just facial recognition, because AI systems are learning to read people’s lips and to identify them by the way they walk, capabilities with the potential to be equally invasive.
Brexit has become an obstacle for Home Office officials, as well as the wider government, but that doesn’t mean things have been at a total standstill.
“We’ve seen the requirement to have CCTV cameras in slaughterhouses passed last year, and whilst that’s important, and protecting pigs and animals is important, what I’ve been arguing for is the greater protection of humans, and I have called upon government to attend to this and ensure that the public is satisfied that the regulation is tight. We’re not there yet, but I’m confident that [if we] continue to support and advise ministers, we will move toward having a code of practice that can manage these issues effectively,” Porter said.
Citing recent facial recognition and surveillance moratoriums passed in Morocco and California, Porter called this an exciting time for policymakers in search of governance and discussion, but again warned against debate being driven by fear of dystopia.
“I suppose my single clarion call would be [to] let that be an intelligent, healthy debate, not drowned out by a shrill shriek of ‘This is an invasion of privacy,’ because properly managed, with proper oversight that reassures the public, then it could be argued that there could be some significant good that comes out of its use,” he said. “But it has to be controlled.”
Daragh Murray: Technology has to serve society
This summer, University of Essex professors Peter Fussey and Daragh Murray released what’s thought to be the first independent review of police live facial recognition use in the U.K.
The analysis, which looked at Metropolitan Police facial recognition trials in London that ended in February, found that systems deployed by police were inaccurate more than 80% of the time when identifying 43 suspects on the streets of London.
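A figure like “wrong more than 80% of the time” is a precision-style measure: the share of the system’s match alerts that turned out to be incorrect. A minimal sketch of the arithmetic, using hypothetical counts (the numbers below are illustrative, not the report’s raw data):

```python
# Hypothetical illustration of how an "inaccurate more than 80% of the
# time" figure is derived from a live facial recognition trial's alerts.

def false_match_rate(total_alerts: int, verified_correct: int) -> float:
    """Share of system-generated match alerts that were wrong."""
    return (total_alerts - verified_correct) / total_alerts

# e.g. 42 alerts across all trials, of which 8 were verified as genuine
rate = false_match_rate(42, 8)
print(f"{rate:.0%} of alerts were false matches")  # prints: 81% of alerts were false matches
```

Note that this metric says nothing about the people the system scanned but never flagged; it describes only the reliability of the alerts officers acted on.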
The analysis also found what Murray described as a “presumption to intervene.”
“That idea, I guess, the role of human intervention, or human input in the decision-making process, wasn’t really effective,” Murray said.
The other problem with the trial was the violation of the human rights of potentially thousands of other people subjected to biometric processing, Murray said.
Similar criticism has been lobbed at the FBI in the U.S. for using driver’s license databases from state DMVs to fuel its facial recognition search program.
Earlier this year, members of Congress criticized the FBI for years of ignoring Government Accountability Office demands to strengthen privacy protections and conduct annual performance audits.
Murray thinks one potential outcome from Brexit could be that U.K. policy drifts closer to the corporate-driven approach in the United States and further from a more privacy-conscious EU approach.
Though Murray is primarily concerned with the human rights implications of facial recognition software, like Kind and Porter he believes in finding a middle ground between appropriate public use and the formation of a police state.
He supports Porter’s work but thinks a judge should also be involved as a kind of “double lock,” acting as an independent arbiter to evaluate the need for facial recognition and the police’s rationale that a specific incident merits use of the technology.
Taking this sort of privacy-conscious approach could lead to different kinds of attitudes toward facial recognition than those currently seen in either the United States or China.
“I think we would agree that […] a third way or something like that would be appropriate, really, because technology has incredible potential to really transform society for the better. But it’s also subject to such potential abuse, even without any intention,” he said. “Moving away from the China and the Uighur examples, even without intentional authoritarian abuse, the increased use of AI in decision-making really significantly increases state surveillance capacity and analytical capacity and alters how society works.”
Murray reiterated the importance of keeping use of the technology in line with society’s best interests.
“Technology has to serve society. That has to be the bottom line. It shouldn’t be about technology for technology’s sake,” Murray said.
The key difference between facial recognition technology as it was known 10 years ago and what’s available today is that systems can now do far more than search a database for a match. Assisted by recent advances in computer vision and existing networks of surveillance cameras, facial recognition can identify a person at a mall or protest in real time and then follow that individual home. Its potential to chill political protest and erode the right to privacy is why the EU’s high-level AI expert group warned against its proliferation.
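The real-time capability described above typically boils down to comparing face “embeddings” (fixed-length vectors produced by a neural network) against a watch list fast enough to keep pace with a video feed. A minimal sketch of the matching step, assuming an embedding model already exists; the names, vector dimensions, and similarity threshold here are all hypothetical:

```python
import numpy as np

def normalize(v: np.ndarray) -> np.ndarray:
    """Scale a vector to unit length so dot products give cosine similarity."""
    return v / np.linalg.norm(v)

def best_match(probe: np.ndarray, watchlist: dict, threshold: float = 0.6):
    """Return the watch-list name whose embedding is most similar to the
    probe face, or None if nothing clears the similarity threshold."""
    probe = normalize(probe)
    best_name, best_score = None, threshold
    for name, ref in watchlist.items():
        score = float(np.dot(probe, normalize(ref)))
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy example with 4-dimensional "embeddings" (real ones are ~128-512 dims)
watchlist = {"suspect_a": np.array([1.0, 0.0, 0.0, 0.0])}
print(best_match(np.array([0.9, 0.1, 0.0, 0.0]), watchlist))  # prints: suspect_a
```

Run against every face detected in every frame of a camera feed, a loop like this is what turns a static database lookup into continuous tracking, which is why the threshold choice (and the false-match rate it produces) matters so much in the policy debate.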
Just as Brexit is now being settled with no rule book, the story of how facial recognition technology will be reined in or let loose in the United Kingdom is being written as we speak. Brexit is an undeniably sloppy mess that after years of fighting and percolating may be in its last throes. But the U.K.’s developing policies on facial recognition will also be undeniably important, not just for the nation’s future but for their power to influence how other countries around the world use or safeguard against the technology.
But as Kind, Porter, and Murray assert and the Ada Lovelace Institute survey suggests, the U.K. must interrogate the gray area between overregulation and lack of oversight. That process may have been stifled by Brexit negotiations and could be impacted by their final result, but experts agree the U.K.’s path forward should lead to management, not banishment, of facial recognition technology.