Think twice before you start throwing shade on Facebook, ladies and gentlemen; you’re being watched, especially when it comes to bullying, threats, insults, and spam. In fact, you might even attract the attention of law enforcement.
Some time ago, Facebook started allowing users to report rude or aggressive posts, tags, and comments. Over time, the social reporting system has evolved to give folks more and better tools for putting a stop to content they don’t want to see.
In a new blog post, Facebook staff have pulled back the curtain on that flagging process, showing what happens after you're done clicking and typing. And it's one heck of an elaborate flowchart.
“Facebook has dedicated teams working all day, every day to handle reports about policy violations, such as explicit photos, impostor accounts, or offensive content, etc.,” a Facebook rep told VentureBeat via email.
“But what actually happens once a report gets filed isn’t always clear — this infographic … will be a guide to better understand how different situations are processed.”
For example, some particularly nasty posts get routed to Facebook's internal Hate & Harassment team. If there's strong evidence that a person might actually be in harm's way, such as suicidal content or content related to drug use, Facebook's team might even bring local police into the situation. In less dire cases, Facebook's software might help you report questionable content to a close group of friends, or the Facebook team might send a formal warning to the person whose content prompted the report.
And of course, Facebook gets a fair amount of guidance from partners such as its LGBT-friendly Network of Support, the minors-focused Safety Advisory Board, and similar groups and organizations.
Here’s the full flowchart; click to see a larger version: