We’ve all seen the stories and allegations of Russian bots manipulating the 2016 U.S. presidential election and, most recently, hijacking the FCC debate on net neutrality. Yet far from such high-stakes arenas, there’s good reason to believe these automated pests are also contaminating the data that firms and governments use to understand who we (the humans) are, as well as what we like and need across a broad range of topics.
Let me explain.
Social bots — which is what we’re talking about here; “bot” is a catch-all term for many different types of AI — can be a nuisance for social media platforms. A recent report estimated as many as 48 million Twitter accounts are actually bots, and they are responsible for as many as 1 in 4 tweets. Depressingly for Taylor Swift fans, a study in 2015 revealed that 67 percent of her followers were bots, and a new study from the University of Cambridge revealed that celebrities with more than 10 million followers behave in bot-like ways themselves. Indeed, everywhere you turn on social media, you are likely to be confronted by automated accounts. Many of them are highly sophisticated when it comes to impersonating human interactions using natural language, and they can even replicate real-life human networks.
So why does this matter? The answer is twofold. The first problem is well reported in the context of politics. These bots are deceptive and specifically designed to pass as real people, with ordinary names, hobbies, ages, and affiliations. Because they are relatable, they can influence real users. Bots can also be rented, and not only by governments; big businesses use them to create hype. These operators deploy bots knowing that we humans are susceptible to bandwagons. Consequently, bots can manufacture or mask real public sentiment, which means whoever programs and operates them wields considerable power.
The second problem is rather more subtle — bots can badly distort the social data that is used to make predictions and assumptions about human behavior. In other words, they make social media less reflective of “real life” and real people. This is significant for companies participating in social listening, data mining, or sentiment analysis. Researchers at Networked Insight found that nearly 10 percent of the social media posts that brands analyze to understand their customers’ behavior do not come from real users. It is significant for us because, where this analysis fuels “nudge” techniques and causes brands to shepherd us toward particular options (which happens even when we aren’t conscious of it), this is based on “insights” muddied by artificial voices.
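To see how even a modest share of bot posts can distort sentiment analysis, consider a toy sketch. All the figures below are invented for illustration; they are not from the Networked Insight research or any real dataset.

```python
# Toy illustration: how bot posts can skew an aggregate sentiment score.
# Scores run from -1.0 (very negative) to 1.0 (very positive).

human_scores = [0.2, -0.1, 0.1, 0.0, -0.2]  # genuine users, roughly neutral overall
bot_scores = [1.0, 1.0, 1.0]                # coordinated bots posting uniform praise

def mean(scores):
    """Average sentiment across a batch of posts."""
    return sum(scores) / len(scores)

genuine = mean(human_scores)                 # what customers actually feel
contaminated = mean(human_scores + bot_scores)  # what a naive analysis reports

print(round(genuine, 2))       # 0.0  -> neutral
print(round(contaminated, 2))  # 0.38 -> looks distinctly positive
```

Here a handful of uniformly positive bot posts shifts a neutral audience's measured sentiment to distinctly positive, which is exactly the kind of "insight" that could mislead a brand's decisions.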
Internet trends are often scaled up and relayed as fact by those who seek to analyze (and capitalize on) our every online movement. Where sentiment is warped by bots, this could cause brands and governments to mistakenly lead us away from what we actually want or need, stifling the will of the public. And there’s an additional harm here: if individuals or groups decide that the way they are categorized runs contrary to their preferences, there’s a good chance they’ll modify their behavior to fool the analysis.
The social media giants are not standing by idly as this issue grows. They are hard at work bot-busting, and at the same time data users are trying to “clean” their bounty as best they can. Meanwhile, bot-makers are adapting and evolving the qualities that make their AI undetectable. Germany has plans to introduce a compulsory labeling system for posts from automated accounts, but given that many bot users are rogue anyway, such rules will likely be flouted. Consequently, citizens, small businesses, and members of civil society must recognize that bots can both steer and infect the “truths” of the masses, which therefore cannot be taken at face value, and should proceed with appropriate caution.
This story originally appeared on Medium. Copyright 2017.
Fiona J. McEvoy is a tech ethics researcher and founder of YouTheData.com.