Democratic societies the world over have come under attack in this digital era — and in ways many probably never thought possible in their lifetime.
The 2016 U.S. presidential election was undermined by a two-pronged Russian campaign: military intelligence hacking operations and divisive Facebook posts from the Internet Research Agency (IRA). Other recent examples include ransomware attacks that target municipal governments, like the kind Georgia courts are now experiencing.
While democratic institutions are threatened by misuses of artificial intelligence like deepfakes, AI may also be one of our best tools in the fight to preserve our freedoms. This is the fourth installment in our annual series of stories about ways AI and machine learning can better democracy. In the past, we’ve highlighted solutions ranging from Alexa skills for non-emergency city services to voter engagement bots and citizen service assistants.
Bot disclosure laws
California’s bot disclosure law went into effect on Monday, requiring any automated bot to identify itself as such when attempting to sell products or influence voters.
Getting bots out of politics is a real challenge. More than 50,000 automated Twitter bots were active online during the 2016 U.S. presidential election, and bots sent an estimated 1 in 5 tweets in the weeks leading up to the 2018 midterm elections.
In response to such activity, Twitter is now attempting to label bots. The company has also carried out purges of its platform, booting off bots made to artificially inflate support for pro-Saudi and pro-Trump causes, and it removed thousands of anti-Mueller report bots following the report’s release this spring.
But additional measures to cut down on bots are still very much needed. Recently, a false rumor about Kamala Harris — spread in part by Twitter bots — was shared by both President Trump and Donald Trump Jr.
This is in line with Russia’s interference ahead of the 2016 presidential election — online efforts designed to sow discord by promoting divisive or false content around topics like immigration and racial equity campaigns like Black Lives Matter, according to the Mueller Report.
Whether bot makers will actually comply with the law is in question, but it seems like a step in the right direction and an experiment others can follow or learn from.
AI education
In an interview with VentureBeat earlier this year, Microsoft CTO Kevin Scott said that any informed citizen in the 21st century must have some understanding of artificial intelligence in order to participate in debates, because “You don’t want to be someone to whom AI is sort of this thing that happens to you.”
If you believe any part of that assertion, or the recent call for education initiatives by EU AI experts, then public initiatives to teach more people about AI may in fact strengthen democracy. The Finnish government, for example, has committed to educating 1% of its population on the basics of AI.
Bot verification tool
Beyond the law requiring companies to inform you when you’re speaking with an automated bot, verification tools like Botcheck.me can let you know if you’re interacting with a bot or not.
This seems like a helpful tool for anyone who has ever received a reply on Twitter and wondered if they were dealing with an actual human.
Neural networks for bot detection have also shown promising results. Researchers from the University of Southern California and the Indian Institute of Technology trained an LSTM model on 3,000 Twitter bot examples to achieve more than 90% bot detection accuracy.
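To give a sense of how such a detector is shaped, here is a minimal sketch of an LSTM that reads a sequence of per-tweet feature vectors and outputs a bot probability. The weights here are random and untrained, and the feature names are illustrative assumptions; this shows the architecture in general, not the actual USC/IIT model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyLSTMClassifier:
    """Minimal LSTM that consumes one feature vector per tweet, in time
    order, and emits a bot probability from the final hidden state."""

    def __init__(self, n_features, hidden=16, seed=0):
        rng = np.random.default_rng(seed)
        # One stacked weight matrix for the four LSTM gates (i, f, o, g).
        self.W = rng.normal(0, 0.1, (4 * hidden, n_features + hidden))
        self.b = np.zeros(4 * hidden)
        self.w_out = rng.normal(0, 0.1, hidden)
        self.hidden = hidden

    def predict_proba(self, sequence):
        h = np.zeros(self.hidden)
        c = np.zeros(self.hidden)
        for x in sequence:
            # Gates see the current tweet's features plus the prior state.
            z = self.W @ np.concatenate([x, h]) + self.b
            i, f, o, g = np.split(z, 4)
            c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
            h = sigmoid(o) * np.tanh(c)
        return sigmoid(self.w_out @ h)  # bot probability in (0, 1)

# Example: an account's last 10 tweets, each described by 5 features
# (hypothetical ones, e.g. posting interval, retweet ratio, hashtag count).
model = TinyLSTMClassifier(n_features=5)
score = model.predict_proba(np.random.default_rng(1).normal(size=(10, 5)))
```

In a real system, the weights would be learned from labeled bot and human accounts, and the score thresholded to flag likely bots.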
Earlier this week, the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) released a draft report calling for the creation of new tools to increase public trust in artificial intelligence.
Crowd counting computer vision
When it comes to massive crowds attending protests or historic events, it can be tough to tell just how many people have shown up. To solve this problem, computer vision can automate such counting and ensure those who stand up are in fact counted.
Hong Kong University and Texas State University researchers recently used one of these systems to count crowds at Hong Kong protests that attracted hundreds of thousands of people.
According to the New York Times, crowd estimates by organizers and authorities can vary widely and be less than reliable.
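The common trick behind counting networks is that they predict a density map rather than a number: each person contributes one unit of "mass," so summing the map yields the crowd count. The sketch below builds such a density map from annotated head positions, which is the standard training target for counting CNNs (we are assuming, not confirming, that the Hong Kong system works this way).

```python
import numpy as np

def density_map(head_points, shape, sigma=4.0):
    """Place a small normalized Gaussian at each annotated head position.
    Each Gaussian integrates to 1, so the map's sum equals the headcount."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    dmap = np.zeros(shape)
    for (y, x) in head_points:
        g = np.exp(-((ys - y) ** 2 + (xs - x) ** 2) / (2 * sigma ** 2))
        dmap += g / g.sum()  # normalize so each person contributes 1
    return dmap

# Three annotated heads in a 100x120 image patch.
points = [(20, 30), (50, 80), (70, 10)]
dmap = density_map(points, (100, 120))
count = dmap.sum()  # recovers the headcount, here 3.0
```

A counting CNN is trained to regress maps like `dmap` directly from pixels, so at inference time the count of an unseen crowd photo is just the sum of the predicted map.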
No real-time facial recognition
Some AI use cases are at odds with democratic freedoms, however. Chief among those is governmental use of real-time facial recognition to identify dissidents at protests.
As AI experts unanimously agreed in Congressional testimony in May, facial recognition software can dampen activity at protests and limit the Constitutional freedom to assemble.
“It is fundamentally American to protest and frankly un-American to chill that kind of protest,” University of the District of Columbia law professor Andrew Ferguson said.
Beyond protests, real-time facial recognition software in public places can, as recent facial recognition ban legislation put it, become a form of tracking akin to asking people to walk around with their ID exposed at all times.
Police in Chicago and Detroit have obtained surveillance technology of this kind, according to a recent Georgetown University Center on Privacy and Technology analysis.
Deepfake detection
You knew AI made to detect deepfakes — media distorted to manipulate public perception by imitating an individual’s voice and face — would make an appearance on this list.
Recent deepfake detection systems like the Allen Institute’s Grover have achieved promising results. But in testimony before Congress last month, experts emphasized that generative adversarial networks are being created to imitate and generate media at a much faster rate than deepfake detection advances can currently handle.
In other developments, Virginia lawmakers recently moved to include deepfakes in the state’s revenge porn law.
Email security
One simple lesson can be taken from the infamous hacks of the DNC and Hillary Clinton presidential campaign chair John Podesta: Guard your emails. Ransomware attacks, such as the kind that forced the municipal government of Riviera Beach, Florida to pay $600,000, have also resulted from email phishing.
Online security solutions — like Microsoft’s M365 for political campaigns and an election verification tool, both introduced in May — are being developed specifically to meet such needs.
And with attacks on both political parties and municipal governments apparently on the rise, additional tools to protect their information and privacy are likely on the way.