Here’s a word of warning to any would-be Orson Welles who hopes to use social media to create War of the Worlds-like panic.
A social media lie detector is in the works.
A multinational team of scientists has undertaken a three-year project to determine, in real time, which social media posts are true. The project is funded by the European Commission, and five European universities and four companies — three in Europe and one in Kenya — are partners in the research consortium.
“It’s an extremely daunting task,” Current Analysis social media analyst Brad Shimmin told VentureBeat. (Shimmin is not involved in the research project.) “But [if it worked,] it would fundamentally change the relationship [many people] have with their social media accounts.”
In other words, he said, “if everything can be verified, there’s no use in hiding behind our [online] personas.”
It would also change the nature of reputation management, customer relationship management (CRM), online publicity, and advertising. It might even curtail the ability to spread fake conspiracy theories.
The project’s shorthand name is Pheme, for the Greek goddess of rumor and gossip. The more formal subtitle: “Computing Veracity Across Media, Languages and Social Networks.”
The impetus, according to lead researcher Dr. Kalina Bontcheva of the University of Sheffield, one of the participating institutions, was the 2011 London riots.
In that event, also known as “the BlackBerry riots,” fake tweets — apparently politically motivated — spread false claims that animals had been released from the London Zoo and that some British landmarks were on fire. Thousands of people rioted across England, although it’s not clear whether those tweets played a key role.
“There was a suggestion after the 2011 riots that social networks should have been shut down, to prevent the rioters using them to organize,” she told the BBC. “But social networks also provide useful information,” Bontcheva added. “The problem is that it all happens so fast and we can’t quickly sort truth from lies.”
Here’s an overview of how the system will sort the wheat from the chaff in real time, according to the project’s website:
“…first, the information inherent in a document itself [is analyzed] — that is lexical, semantic and syntactic information, [which includes how credible the source is]. This is then cross-referenced with data sources that are assessed as particularly trustworthy, for example in the case of medical information, PubMed, the biggest online database in the world for original medical publications. We will also harness knowledge from Linked Open Data, through the expertise of Ontotext and their highly scalable OWLIM platform. Finally, the diffusion of a piece of information is analysed – who receives what information and how, and when is it transmitted to whom?”
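The three stages described above — analyzing the post itself, cross-referencing trusted sources, and examining diffusion patterns — can be sketched in miniature. The function names, heuristics, and weights below are invented for illustration; Pheme’s actual “veracity intelligence algorithms” are not described in detail here.

```python
# Toy sketch of the three-stage pipeline from the project overview.
# All helpers, thresholds, and weights are illustrative assumptions,
# not Pheme's real method.

TRUSTED_FACTS = {  # stand-in for a trusted data source such as PubMed
    "mmr vaccine does not cause autism",
}

def content_features(post: str) -> float:
    """Stage 1: score lexical cues in the document itself (hedging words lower it)."""
    hedges = ("allegedly", "reportedly", "rumor has it")
    return 0.5 if any(h in post.lower() for h in hedges) else 1.0

def cross_reference(claim: str) -> float:
    """Stage 2: boost claims that match an assessed-trustworthy source."""
    return 1.0 if claim.lower() in TRUSTED_FACTS else 0.0

def diffusion_score(share_times: list) -> float:
    """Stage 3: very fast, bursty diffusion is treated as a weak warning sign."""
    if len(share_times) < 2:
        return 1.0
    gaps = [b - a for a, b in zip(share_times, share_times[1:])]
    return 0.3 if sum(gaps) / len(gaps) < 1.0 else 1.0  # avg gap < 1 minute

def veracity_score(post: str, claim: str, share_times: list) -> float:
    """Combine the three stages into a single 0..1 confidence (toy weights)."""
    return round(0.4 * content_features(post)
                 + 0.4 * cross_reference(claim)
                 + 0.2 * diffusion_score(share_times), 2)
```

A post using hedging language, unmatched by any trusted source, and spreading in a rapid burst would score far lower than a calmly shared, source-confirmed claim — which is the general shape of the pipeline, if none of its actual machinery.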
Pheme will sort posts into one of four categories, the researchers write:
- speculation – such as whether interest rates might rise;
- controversy – as over the MMR vaccine in the UK;
- misinformation, where something untrue is spread unwittingly;
- and disinformation, where it’s done with malicious intent.
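The four categories above lend themselves to a simple routing sketch. The rules below (an intent flag, a known-false flag, hedged modal verbs) are illustrative assumptions only — they are not how Pheme actually distinguishes the categories.

```python
# Toy classifier for Pheme's four rumor categories.
# The routing heuristics are invented for illustration.
from enum import Enum

class RumorType(Enum):
    SPECULATION = "speculation"
    CONTROVERSY = "controversy"
    MISINFORMATION = "misinformation"
    DISINFORMATION = "disinformation"

def classify(post: str, known_false: bool, malicious_intent: bool) -> RumorType:
    """Route a post to one of the four categories (toy rules)."""
    if known_false:
        # Untrue content: intent separates disinformation from misinformation.
        return (RumorType.DISINFORMATION if malicious_intent
                else RumorType.MISINFORMATION)
    if any(m in post.lower() for m in ("might", "could", "may")):
        return RumorType.SPECULATION  # hedged language about uncertain events
    return RumorType.CONTROVERSY      # disputed, but not (yet) known false
```

Note that the misinformation/disinformation split hinges on intent, which is the hardest signal of all to infer automatically — one reason the project maintains a human-analyzed rumor dataset.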
A visual analytics dashboard will show rumor diffusion patterns, message types (e.g., whether confirming or denying a rumor), and geospatial projections of author distribution and sphere of influence. First results are expected in 18 months.
The project will focus on two domains: digital journalism, where the project results will be evaluated by the Swiss Broadcasting Corporation; and healthcare, evaluated by King’s College London. The healthcare effort will look at rumors about new recreational drugs and how information about them spreads to doctors.
The researchers say their “veracity intelligence algorithms” will be released as open source, and the University of Warwick will maintain a human-analyzed rumor dataset.
Via The Telegraph