The morning after last week’s presidential debate, social media blew up with opinions on how each candidate fared and riffs on – sniff, sniff – the more interesting and bizarre moments. And, as to be expected, online polls weighed in on which of the contenders won.

Being in “the cyber” business, I know that many of these reactions share a troubling trait: They’re fake.

On top of everything else that’s unique about the 2016 election, it has marked a surge in the use of bots – digital programs that post autonomously from spam accounts – to try to influence opinion.

Phony Polls

Let’s start with the number of online spot polls after the presidential debate that purported to show a decisive victory for Donald Trump over Hillary Clinton. That’s despite the fact that most pundits thought Clinton had the better night, as did reputable polls such as a CNN/ORC survey that deemed Clinton the winner by 62 to 27 percent and one by Public Policy Polling that had Clinton defeating Trump 51 to 40 percent.

According to multiple media outlets, these online polls were bogus. “Trump supporters artificially manipulated the results of online polls to create a false narrative that the Republican nominee won,” the Daily Dot reported.

One culprit, according to the outlet: the message board 4chan, which has a sordid history of botting online competitions. According to the Daily Dot, “In 2009, users flooded the Time 100 poll to ensure that the site’s founder, Christopher Poole, made the cut. In 2012, the pranksters employed JavaScript to vote for North Korean leader Kim Jong-un in Time’s annual Person of the Year poll and followed suit the next year with Miley Cyrus and Edward Snowden.”

The fact is, it’s surprisingly easy to create programming scripts to manipulate online polls and, as a result, try to sway public opinion. It goes without saying that these antics are wrong, and it’s also irresponsible for journalists to report them as legitimate.
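To see just how low the bar is, consider a toy sketch of the weakness most spot polls share: the only thing stopping a repeat vote is an identifier the client itself supplies, such as a cookie. (This is a hypothetical illustration, not any real site’s code.) A bot simply mints a fresh “browser” for every vote:

```python
# Toy online poll that "deduplicates" voters by a client-held cookie.
# This mirrors the weakness of many real spot polls: the client, not
# the server, controls the only identifier. Hypothetical sketch only.

import uuid

class SpotPoll:
    def __init__(self, options):
        self.tally = {opt: 0 for opt in options}
        self.seen_cookies = set()

    def vote(self, option, cookie):
        # One vote per cookie -- but the cookie comes from the client.
        if cookie in self.seen_cookies:
            return False
        self.seen_cookies.add(cookie)
        self.tally[option] += 1
        return True

def stuff_ballot(poll, option, n):
    """A bot just presents a fresh cookie for each of its n votes."""
    for _ in range(n):
        poll.vote(option, uuid.uuid4().hex)

poll = SpotPoll(["Candidate A", "Candidate B"])
poll.vote("Candidate B", "honest-voter-1")
stuff_ballot(poll, "Candidate A", 1000)
print(poll.tally)  # Candidate A "wins" in a landslide
```

A few dozen lines like these, pointed at a poll that checks nothing stronger than a cookie, are all it takes to manufacture a “decisive victory.”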

Hanky-Panky on Twitter

Another place where bots are having a field day this election season is Twitter. Both Trump and Clinton have millions of fake Twitter followers. According to Twitter Audit, 39 percent of Trump’s 11,012,445 followers as of August were not real. Clinton fared slightly better, with 37 percent of her 8,402,211 followers fabricated.

Now, just about everyone on Twitter has some fake followers. It’s a long-standing problem for the social media service. But the scale of this monkey business in the presidential campaign is unprecedented, and it shouldn’t be taken lightly. Political bots are being used to exaggerate a candidate’s popularity on Twitter and manipulate public conversation.

For example, the morning after the first debate, Trump tweeted, “The #1 trend on Twitter right now is #TrumpWon – thank you!” Given that nearly four in 10 Trump followers are fake, how do we know bots didn’t stack the deck by using programming scripts to automatically tweet and retweet the hashtag? The same question could be asked of Clinton’s #SheWon hashtag.

Bots, of course, are prevalent across the Internet. These pieces of software, which crawl the web and perform automated tasks at a volume and speed beyond human capability, carried out 74 billion interactions with websites in 2015. Bots accounted for 46 percent of traffic on the Internet last year, according to an annual analysis by my company of the sources and types of bot attacks.

Bots are often associated with cybercriminals stealing data, identities, and intellectual property, initiating denial-of-service attacks, or grabbing the best tickets for concerts or sporting events.

Most bots, though – almost 50 percent of them – are actually “good bots” that deliver helpful services such as search engine indexing, stock trade execution, and news and weather updates.

Political Bots Play Dirty Tricks

As is the case with bots generally, the ones running amok this election season are becoming increasingly sophisticated.

“Back in the early days of fake followers, the programmers who made the bots often just plucked pictures of people from Google, created a fake name, fake biography, and — voilà — you had a fake follower,” Vanity Fair reported. “But now, to subvert being found out, bots have become incredibly clever, even sometimes becoming indistinguishable from real people. They use semantic analysis to understand what people are tweeting about, and reply with answers that are mostly coherent.”

In April, Patrick Ruffini, a digital political consultant in Washington, DC, posted a Google Docs spreadsheet listing 500 pro-Trump accounts that had simultaneously tweeted a message urging voters to file FCC complaints over robocalls from the campaign of Trump rival Ted Cruz. Many of the same users had previously tweeted “17 Marketing Tips for B2B Websites,” Ruffini found.
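Coordinated accounts like these tend to betray themselves the same way: identical text posted by many accounts at nearly the same moment. A minimal sketch of that detection idea, on synthetic data (this is not Ruffini’s actual method, just the underlying intuition):

```python
# Flag groups of accounts that post the same text within the same
# short time window -- the tell-tale pattern of a botnet pushing one
# message. Synthetic data; illustrative only.

from collections import defaultdict

def coordinated_groups(tweets, window_secs=60, min_accounts=3):
    """tweets: iterable of (account, unix_timestamp, text) tuples."""
    buckets = defaultdict(set)
    for account, ts, text in tweets:
        # Bucket by message text and coarse time window.
        buckets[(text, ts // window_secs)].add(account)
    return [accts for accts in buckets.values() if len(accts) >= min_accounts]

tweets = [
    ("bot_%d" % i, 1000 + i, "File an FCC complaint about those robocalls!")
    for i in range(5)
] + [("real_user", 5000, "enjoying the debate tonight")]

for group in coordinated_groups(tweets):
    print(sorted(group))  # the five bot accounts surface together
```

Five hundred accounts tweeting the same B2B marketing listicle and then the same political call to action is exactly the kind of signature this sort of grouping surfaces.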

A Worldwide Issue

The 2016 race isn’t the first time questions have been raised about the influence of bot activity in political campaigns.

Britain’s June 23 referendum on European Union membership had its share of poll hoaxes, and a report by researchers at Oxford University and Budapest’s Corvinus University revealed that bots played a “small but strategic role” in the social media discussion around the Brexit vote.

Of the nearly 314,000 accounts that tweeted one way or the other about the referendum two weeks before the vote, the researchers found, 15 percent were heavily or entirely automated.
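A common heuristic in studies like this one is sheer posting volume: no human sustains hundreds of tweets a day, every day. Here is a hedged sketch of that kind of classifier; the 50-tweets-per-day threshold and the score cutoffs are illustrative assumptions, not the criteria the Oxford and Corvinus researchers used:

```python
# Label accounts by sustained posting rate. The threshold (50 tweets
# per day) and the score cutoffs are illustrative assumptions, not
# the actual criteria from the Oxford/Corvinus report.

def automation_score(daily_tweet_counts, threshold=50):
    """Fraction of observed days on which the account beat the threshold."""
    heavy_days = sum(1 for n in daily_tweet_counts if n > threshold)
    return heavy_days / len(daily_tweet_counts)

def label(daily_tweet_counts):
    score = automation_score(daily_tweet_counts)
    if score >= 0.9:
        return "heavily automated"
    if score >= 0.5:
        return "partially automated"
    return "likely human"

print(label([120, 140, 110, 160, 130]))  # heavily automated
print(label([3, 0, 7, 1, 4]))            # likely human
```

Volume alone, of course, only catches the crudest bots; as the Vanity Fair quote above suggests, the clever ones pace themselves to look human.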

“Political actors and governments worldwide have begun using bots to manipulate public opinion, choke off debate, and muddy political issues,” the report said. “Political bots tend to be developed and deployed in sensitive political moments when public opinion is polarized.”

If you have any doubts about the potential of bots and other digital shenanigans to rig an election, read this chilling account by Bloomberg of the role bogus Twitter accounts played in the election of Mexican President Enrique Peña Nieto in 2012.

While election cybersecurity has emerged as a hot topic this year – thanks to attacks such as the breach of the Democratic National Committee’s network and the hacking of state voter registration databases in Arizona and Illinois – the threat posed by political bots, though they’re not illegal, shouldn’t be underestimated.

As we prepare for tomorrow’s headlines following tonight’s vice presidential debate, we must remember that social media holds unprecedented power in influencing opinion. Anything that taints the electronic public square to give one candidate an unfair leg up on the other should be considered a threat to our democracy.

Rami Essaid is CEO and cofounder of Distil Networks, a bot detection and mitigation company.
