It’s Election Day in the United States, so depending on where you live, that means free overcooked coffee, lengthy lines, or an “I VOTED” sticker.

But this is also the first election since 2016, when operatives both on the ground in the United States and on platforms like Facebook and Twitter deliberately worked to misinform, mislead, and manipulate voters.

National governments and ideologues of ill will are again attempting to throw a monkey wrench into the election and sow mistrust and doubt in the democratic process.

Here are a few ways trolls, governments, and generally awful people are attempting to disrupt the democratic process this Election Day.

To follow live updates on misinformation disseminated on Election Day, visit this Wired live blog.

Facebook groups

In the wake of the 2016 election, most attention was paid to Facebook ads and pages, but Jonathan Albright, director of research at Columbia University’s Tow Center for Digital Journalism, says misinformation campaigns have shifted to Facebook groups with tens of thousands or hundreds of thousands of users.

Albright carried out an analysis of thousands of posts in dozens of politically leaning Facebook groups and found that they have come to play “a major role in manipulation” and have become “the preferred base for coordinated information influence activities on the [Facebook] platform.”

Misinformation has migrated to groups for a number of reasons: text, posts, and photos shared within a group aren’t discoverable by a basic Facebook search, and settings can be changed so that none of that content appears on members’ timelines.

For example, Albright found that while misinformation about George Soros funding a migrant caravan traveling from Honduras spread on Facebook and Twitter, the earliest publicly shared seeds of that misinformation came primarily from Facebook groups.

“As Facebook’s policing of its open platform began to clamp down on the most obvious actors and fake Pages following the last election, it was only a matter of time until the bad actors moved into Groups and started using them to coordinate their political influence operations and information-manipulation campaigns. We’re there now,” he wrote in a Medium post. “Facebook’s Groups offer all of the benefits with none of the downsides. Posts shared to the Group are essentially private until the time comes when the users take strategic action to make them public.”

These propagandists can be spotted because they suggest users screenshot or copy-paste content rather than share it directly, in hopes of evading automated detection. They also tend to operate in groups without moderators.

“I’ve seen a pattern of Groups over the past couple weeks without any admins or moderators. These ‘no admin’ groups are a wonderful asset for shadow political organizing on Facebook,” he wrote. “It’s like the worst-case scenario from a hybrid of 2016-era Facebook and an unmoderated Reddit.”

The long-game hoax

Last week, ahead of Election Day, Twitter updated its rules to better deal with fake accounts and began automating the detection of misinformation and signals like fake profile photos.

That seems only natural: years of failing to police their platforms have prompted questions about whether companies like Twitter and Facebook care about their users or can be trusted to police themselves.

NBC News witnessed this in action when a propagandist tried to convince people to vote on November 7 instead of November 6. NBC News followed dozens of far-right trolls strategizing in Discord servers and Twitter group DMs, and while Facebook and Twitter set up war rooms to defend against these attacks, not everything has been caught.

“NBC News witnessed trolls developing new strategies on the fly to circumvent the bans. Several were successful in creating unique identities that appeared to be middle-aged women who posted anti-Trump rhetoric as part of a long-term effort to build up followings that could later be used to seed disinformation to hundreds or thousands of followers.”

Trolls also applied Snapchat filters to stock photos in an attempt to fool Twitter’s automated detection of profile images pulled from stock imagery.

Classic state actors and methods

If you’ve paid any attention to national politics in the United States over the past two years, you know that Russia attempted to meddle in the U.S. electoral process in 2016.

To avoid a repeat, Twitter purged more than 10,000 accounts in the lead-up to Election Day.

On Sunday, after receiving word from U.S. law enforcement, Facebook removed more than 100 Facebook accounts, Pages, and Instagram accounts, written primarily in Russian and French, that were aimed at influencing elections.

In recent weeks, Facebook also banned more than 80 accounts, Pages, and groups thought to be associated with the Iranian government, one of which had more than one million followers. The Iranian operation was found to be anti-Trump, but just like Russian actors in 2016, these accounts focused on disseminating content meant to sow distrust and infighting around issues like immigration and race relations.

Months ahead of the election, the New York Times asked readers to share examples of misinformation they’d come across, and the paper received more than 4,000 submissions.

Suspected state-actor activity includes an effort, discovered by the moderator of a popular subreddit for Donald Trump supporters, to conceal the web address of content from a website funded by the Russian Federal News Agency.

Among the misleading or false claims were photoshopped images of candidates holding signs in support of Communism, as well as posts urging Democratic men to stay home on Election Day under the hashtag #LetWomenDecide.

Not to be outdone, classic political action and campaign committees have also been found spreading misinformation, like a Facebook Page post telling Republicans in North Dakota they could lose their hunting licenses for voting, or a claim that Texas homeowners with Beto O’Rourke signs on their lawns could face fines of up to $500.