In 2014, the term “fake news” hadn’t yet become part of the American lexicon and the 2016 U.S. presidential race was only beginning to make headlines. But in California, a man named Jestin Coler was hard at work creating one of the most divisive media trends in modern history.
Dubbed the godfather of the fake news industry, Coler’s efforts began with publishing fabricated stories — including an article about Colorado food stamp recipients using welfare benefits to buy marijuana — that garnered enough traffic to generate tens of thousands of dollars a month in ad revenue. The idea quickly caught on. Competing sites sprang up around the world as other publishers raced to create masterpieces of outrageous, conspiratorial, and highly partisan fake news ahead of the U.S. presidential election.
Since then, the fake news phenomenon has created the means for people (including public leaders) to dismiss reports of their wrongdoings and infuse otherwise legitimate political debates with falsehoods. Even amateur web users can doctor images and videos to create “evidence” of events that never happened.
There’s no easy answer to the problem. But artificial intelligence can help.
Sixty-two percent of Americans look to social media for information about what’s happening in the world. How we engage with the articles and videos we find on these platforms influences which stories and posts we’ll see in the future. If we like, comment on, or share more conservative news items than we do liberal ones, for instance, social algorithms will show us similar content the next time we sign on. Our online contacts also factor into this equation. Having a disproportionate number of liberal-leaning friends or followers skews our feeds as well.
Blatantly false news isn’t the only thing that should concern us. Headlines and stories that frame accurate information in misleading ways also distort our perceptions. As Kim LaCapria, content manager for Snopes, told Quartz: “There’s information and then there’s how it’s presented, and those two aren’t always the same.”
Columbia Journalism Review has advised journalists to look at creation dates and source materials to verify videos, along with clues from content creators’ online backgrounds. Video analysis programs and other verification tools also help. But nearly 60 percent of people repost articles without reading past the headline, so the odds of readers vetting every article seem slim. Even if we all had the time and inclination to become digital detectives, the sheer volume of online content makes the task infeasible. Millions of online interactions occur each minute, and no human can keep up with them all. An artificial intelligence system, on the other hand, might be able to help stem the fake news tide.
Using AI to solve the fake news problem
An AI system trained to analyze text, videos, images, and audio could work around the clock at rates that far exceed even the most efficient human. Computer science researchers at one university are developing a machine-learning approach to fake news detection. The program will analyze the content of an article and then score it based on how likely it is to be fake news. It can also generate a breakdown of why the score was assigned so readers can understand why the AI system flagged something as fake news.
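The article doesn’t specify which model the researchers use, but the two ingredients it describes, a fake-likelihood score plus a readable breakdown of why the score was assigned, can be illustrated with a minimal naive Bayes text scorer. Everything below (the training data, function names, and the logistic mapping to a probability) is a hypothetical sketch, not the researchers’ actual system:

```python
import math
from collections import Counter

def train(labeled_docs):
    """labeled_docs: list of (text, label) pairs with label 'fake' or 'real'.
    Returns per-label word counts for a naive Bayes scorer."""
    counts = {"fake": Counter(), "real": Counter()}
    for text, label in labeled_docs:
        counts[label].update(text.lower().split())
    return counts

def score(counts, text):
    """Return (fake_probability, top_words). The log-odds sum assumes equal
    priors for the two classes; top_words is the 'breakdown' a reader could
    inspect: the words that pushed the score hardest toward 'fake'."""
    vocab = set(counts["fake"]) | set(counts["real"])
    n_fake = sum(counts["fake"].values()) + len(vocab)  # Laplace smoothing
    n_real = sum(counts["real"].values()) + len(vocab)
    log_odds, contributions = 0.0, []
    for word in text.lower().split():
        if word not in vocab:
            continue  # ignore words never seen in training
        weight = (math.log((counts["fake"][word] + 1) / n_fake)
                  - math.log((counts["real"][word] + 1) / n_real))
        log_odds += weight
        contributions.append((word, weight))
    prob_fake = 1 / (1 + math.exp(-log_odds))
    top_words = sorted(contributions, key=lambda wc: -wc[1])[:3]
    return prob_fake, top_words
```

A model this simple only looks at word frequencies, but it shows the shape of the idea: the score drives the flag, and the per-word weights give the human-readable explanation the article mentions.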
“Artificial intelligence (AI) can have all the same information as people, but it can address the volume of news and decipher validity without getting tired,” said Stephan Woerner, a student working on the project. “People tend to get political or emotional, but AI does not. It just addresses the problem it’s trained to combat.”
Ironically, the more fake news that’s produced, the better an AI vetting system may become. Machine learning platforms self-improve based on data inputs, so a glut of false articles and videos can enable them to hone their fake news detection abilities.
Other AI systems being developed to identify fake news use natural language processing (NLP) to run a series of analyses on news items. NLP systems can process and organize even unstructured information, pulling insights from vast datasets, an ability that is clearly useful for scanning and categorizing the large volumes of articles created on the web. Algorithms written specifically to identify fake news might compare how different sites cover the same news events, measure how a lesser-known site’s coverage stacks up against mainstream outlets’, and dissect elements like context and location.
Some developers are working to create programs that parse the contents of articles from different websites and compare their coverage of events against that of others to look for potentially misleading items. Again, the more fake news the system takes in and analyzes, the more adept it becomes at identifying suspicious claims and publication details.
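One simple way to compare one outlet’s coverage against others’, as the two paragraphs above describe, is to measure textual similarity between articles about the same event. The sketch below uses bag-of-words cosine similarity; the threshold value and function names are illustrative assumptions, and real systems would use far richer representations:

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity between bag-of-words vectors of two articles:
    1.0 for identical word distributions, 0.0 for no shared words."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[word] * b[word] for word in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def diverges_from_mainstream(candidate, mainstream_articles, threshold=0.2):
    """True if the candidate article resembles none of the mainstream
    accounts of the same event. A signal for human review, not a verdict:
    the 0.2 threshold is an arbitrary illustrative choice."""
    return max(cosine_similarity(candidate, m)
               for m in mainstream_articles) < threshold
```

An article that shares almost no vocabulary with any mainstream account of the same event would be surfaced for review, which is the kind of cross-outlet check the paragraph describes.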
A human-machine alliance
Even if AI helps curb the fake news problem, humans still bear responsibility for creating and sharing it. Social media platforms let users flag posts as fake news, then use algorithms to identify that content and keep it from spreading. But the output is only as accurate as the input: if enough users flag authentic content as fake, quality publishers are at risk of being wrongly labeled.
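The weakness described above, that flag-driven systems inherit the biases of their flaggers, is easy to see in a toy model. Both rules below are hypothetical sketches, not any platform’s actual moderation logic; the 5% threshold and the precision weighting are assumptions for illustration:

```python
def is_flag_suspect(flag_count, view_count, threshold=0.05):
    """Naive rule: treat content as suspect once more than 5% of viewers
    flag it. Simple, but a coordinated group of flaggers can push
    perfectly legitimate content over the line."""
    return view_count > 0 and flag_count / view_count > threshold

def weighted_flag_score(flaggers, flagger_precision):
    """A partial mitigation: down-weight flags from users whose past flags
    were often wrong. flagger_precision maps a user id to the share of
    that user's past flags that reviewers upheld (hypothetical data;
    unknown users default to a neutral 0.5)."""
    return sum(flagger_precision.get(user, 0.5) for user in flaggers)
```

Weighting flaggers by track record softens mass false-flagging but doesn’t eliminate it, which is why the human input feeding these systems matters as much as the algorithm.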
Darren Campo, an adjunct professor at New York University’s Stern School of Business, told Fox News that humans can also manipulate AI systems by using careful language in fake news production. “Fake news protects itself by embedding a ‘fact’ in terms that can be defended,” Campo said. While AI systems may be effective at identifying that a fact is incorrect, they may not be as effective at identifying the context around that fact.
Developers would also need to account for limitations in their programs. For instance, a vetting algorithm might draw on existing content to verify a story’s accuracy. But when a reputable outlet publishes breaking news, it may do so without much context, which may in turn impact the AI system’s determination. Proper human input can help safeguard against this and avoid further exacerbating the fake news problem.
We’ll also need to overcome our own biases. Reading articles that feed into our confirmation bias can make us feel good, but we’ll have to exercise skepticism about what we read if we’re to defend truth and factual reporting in our society. AI will likely play a critical role in combating fake news, but progress in this area depends on us, as readers, becoming more conscientious about what we share and how we engage with one another online.
Additional article contributors: Mehdi Ghafourifar and Brian Walker.
Alston Ghafourifar is the CEO and cofounder of Entefy, an AI-communication technology company, introducing the first universal communicator.
This article originally appeared at Entefy.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.