Blackbird.AI, an AI-powered platform designed to combat disinformation, today announced that it closed a $10 million Series A funding round led by Dorilton Ventures with participation from NetX, Generation Ventures, Trousdale Ventures, StartFast Ventures, and individual angel investors. The proceeds, which bring the company’s total raised to $11.87 million, will be used to ramp up hiring, expand product lines, and launch new features and capabilities for corporate and national security customers, according to cofounder and CEO Wasim Khaled.
The cost of disinformation and digital manipulation threats to organizations and governments is estimated at $78 billion annually, according to a report from the University of Baltimore and cybersecurity firm CHEQ. The same study identified more than 70 countries believed to have used online platforms to spread disinformation in 2020, an increase of 150% from 2017.
Blackbird was founded by computer scientists Khaled and Naushad UzZaman, two friends who share the belief that disinformation is one of the greatest existential threats of our time. They launched San Francisco, California-based Blackbird in 2014 with the goal of developing a platform that enables companies to respond to disinformation campaigns by surfacing insights from real-time communications data.
“We understood early on that social media platforms were not going to solve these problems and that as people were becoming increasingly reliant on social media for information, disinformation in the digital age was advancing as a threat in the background to democracy, societal cohesion, and enterprise organizations — directly through these very platforms,” Khaled told VentureBeat via email. “We made it our mission to build technologies to address this new class of threat that acts as a cyberattack on human perception.”
Blackbird tracks and analyzes what it describes as “media risks” emerging on social networks and other online platforms. Using AI, the system fuses a combination of signals, including narrative, network, cohort, manipulation, and deception, to profile potentially harmful information campaigns.
The narrative signal includes dialogues that follow a common theme, such as topics with the potential to cause harm. The network signal measures the relationships between users and the concepts they share in conversation. Meanwhile, the cohort signal canvasses the affiliations and shared beliefs of various online communities. The manipulation signal includes “synthetically forced” dialogue or propaganda, while the deception signal covers the deliberate spread of known disinformation, such as hoaxes and conspiracies.
Blackbird tries to spot influencers and their interactions within communities as well as how they influence the voices of those participating, for example. Beyond this, the platform looks for shared value systems dominating the chats and evidence of propaganda, synthetic amplification, and bot-driven networks, trolls, and spammers.
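To make the five-signal approach concrete, here is a minimal sketch of how per-signal risk scores might be fused into a single composite score. The signal names mirror those described above, but the dataclass, weights, and fusion function are all hypothetical illustrations — Blackbird has not disclosed its actual model or API.

```python
from dataclasses import dataclass

@dataclass
class SignalScores:
    """Hypothetical per-signal risk scores in [0, 1] for one narrative.

    The field names follow the five signals the article describes;
    the structure itself is an assumption for illustration only.
    """
    narrative: float
    network: float
    cohort: float
    manipulation: float
    deception: float

# Illustrative weights (assumed, not Blackbird's): manipulation and
# deception are weighted most heavily since they indicate intent.
WEIGHTS = {
    "narrative": 0.15,
    "network": 0.15,
    "cohort": 0.10,
    "manipulation": 0.30,
    "deception": 0.30,
}

def fuse(scores: SignalScores) -> float:
    """Combine signal scores into one composite risk score in [0, 1]."""
    return sum(WEIGHTS[name] * getattr(scores, name) for name in WEIGHTS)

# Example: a narrative with strong manipulation and deception signals.
post = SignalScores(narrative=0.8, network=0.6, cohort=0.4,
                    manipulation=0.9, deception=0.7)
print(round(fuse(post), 2))  # → 0.73
```

A real system would replace the fixed weights with a learned model and feed in features extracted from network graphs and text, but the fusion step — reducing heterogeneous signals to an actionable risk score — is the core idea the article attributes to the platform.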
For instance, last February, President Trump held a rally in Charleston, South Carolina, where he claimed concerns around the pandemic were an attempt by Democrats to discredit him, calling it “their new hoax.” Blackbird detected a coordinated campaign dubbed “Dem Panic” that appeared to launch during Trump’s speech. The platform also pinpointed hashtag subcategories with particularly high levels of manipulation, including #QAnon, #MAGA, and #Pelosi.
“Blackbird’s system provides insight into how a particular narrative (e.g., mRNA vaccine mutates human DNA) is spreading through user networks, along with the affiliation of those users (e.g., a mixture of anti-vax and anti-big-pharma accounts), whether manipulation tactics are being employed, and whether disinformation is being weaponized,” Khaled explained. “By deconstructing what is happening down to the very mechanism, the situational assessment then becomes actionable and leads to courses of action that can directly impact the business decision cycle.”
AI isn’t perfect. As evidenced by competitions like the Fake News Challenge and Facebook’s Hateful Memes Challenge, machine learning algorithms still struggle to gain a holistic understanding of words in context. Compounding the challenge is the potential for bias to creep into the algorithms. For example, some researchers claim that Perspective, an AI-powered anti-cyberbullying and anti-disinformation API run by Alphabet-backed organization Jigsaw, doesn’t moderate hate and toxic speech equally across different groups of people.
Revealingly, Facebook recently admitted that it hasn’t been able to train a model to find new instances of a specific category of disinformation: misleading news about COVID-19. The company is instead relying on its 60 partner fact-checking organizations to flag misleading headlines, descriptions, and images in posts. “Building a novel classifier for something that understands content it’s never seen before takes time and a lot of data,” Mike Schroepfer, Facebook’s CTO, said on a press call in May.
On the other hand, groups like MIT’s Lincoln Laboratory say they’ve had success in creating systems to automatically detect disinformation narratives — as well as people spreading the narratives within social media networks. Several years ago, researchers at the University of Washington’s Paul G. Allen School of Computer Science and Engineering and the Allen Institute for Artificial Intelligence developed Grover, an algorithm they said was able to pick out 92% of AI-written disinformation samples on a test set.
Amid an escalating arms race between disinformation offense and defense, spending on threat intelligence is expected to grow 17% year-over-year from 2018 to 2024, according to Gartner. As something of a case in point, Blackbird — which has Fortune 500, Global 2000, and government customers — today announced a partnership with PR firm Weber Shandwick to help companies understand disinformation risks that can impact their businesses.
“Governments, corporations, and individuals can’t compete with the speed and scale of falsehoods and propaganda, leaving sound decision-making vulnerable,” Khaled said. “Business intelligence solutions for the disinformation age require an evolved reimagining of conventional metrics in order to match the wide-ranging manipulation techniques utilized by a new generation of online threat actors that can cause massive financial and reputational damage. Blackbird’s technology can detect previously unseen manipulation within information networks, identify harmful narratives as they form, and flag the communities and actors that are driving them.”
Blackbird, which says the past 18 months have been the highest growth period in the company’s history in terms of revenue and customer demand, plans to triple the size of its team by the end of 2021. That’s despite competition from Logically, Fabula AI, New Knowledge, and other AI-powered startups that claim to detect disinformation with high accuracy.