There are a number of disturbing elements to the Russian election hacking scandal — most notably the fact that a foreign entity was somehow able to use Facebook and other social networks to influence an already stressful U.S. election.
But what has baffled me most about the hacking scandal is that the Russians were able to proceed with their agenda with such ease, while my influencer marketing agency (which has no interest in nefarious global domination) struggles on a daily basis to do even the most basic tasks on the platform.
As the Wall Street Journal noted recently, “Relying on AI can lead to false positives, as when [a] company pulls down legitimate content that its algorithms think might be offensive.”
My firm does a high volume of work on Facebook to amplify our influencers’ content. Brands often hire us to create programs with quick turnarounds and tight deadlines, such as a one-day event or a short-term giveaway. Having that (legitimate) content pulled down because an algorithm suspects it is offensive can throw an entire program off the rails.
We have had to develop an entire document to detail the various “watch-outs” we need to keep in mind in order to take advantage of Facebook’s reach without raising the ire of its finicky and overly critical computer algorithms. These trigger-happy software programs can be incredibly problematic. It has been our experience that Facebook’s software doesn’t hesitate to remove content first and ask questions later.
VentureBeat reached out to Facebook on this, and Facebook maintains that it does not remove content before properly reviewing it. It said it reviews all advertisements before they go live, per the ad review process posted on its site, and that, for other content, it will only pull something down if it violates Facebook policies.
It has been my firm’s experience that while non-ad content can be posted without review, it will indeed get automatically pulled if it trips the algorithm. It will then often be reposted following an appeal. We’ve seen the same thing on Facebook-owned Instagram. An influencer we work with once had her entire Instagram account shut down after she posted a piece of sponsored content. It was reinstated after a couple of weeks of desperate emails and phone calls to Instagram headquarters.
You can find the full list of Facebook’s prohibited content here. While the vast majority of these restrictions are understandable — certainly we don’t want people posting ads that promote illegal activities, discrimination, or weapons — many are not that clear cut. The machinations our team goes through in order to get clients’ brands boosted on Facebook are borderline overwhelming. Let’s look at a couple of examples:
1. The tricky word “you”
Facebook prohibits the use of personal attributes within a boosted post. This can be really tricky when promoting a product intended to relieve an ailment. Here are excerpts from two posts that we were looking to boost in one of our programs:
- Post A: Do you suffer from dandruff? It’s a common problem….
- Post B: Laura is preventing dandruff now and during the colder months…
Given the strict algorithm, we were only able to boost Post B. Post A would almost certainly be pulled down because of the word “you.”
Facebook’s spokesperson said that while the company does allow ads for services that treat medical conditions, they don’t allow ads to imply the reader suffers from those medical conditions (that falls under the company’s “Personal Attributes” policy). Facebook noted that our Post A example wouldn’t make the cut because it implies the reader has dandruff, which is a medical condition.
What frustrates my team in this situation is that our content wasn’t implying anything. It was simply asking the question. If an influencer with thousands of followers poses a question, how is that an implication of anything? These restrictions impact the creativity of our influencers’ content.
2. Medical conditions
The Facebook algorithm is also triggered by mentions of specific medical conditions. Understandably, Facebook prohibits the paid promotion of pharmaceutical products. Unfortunately, this can cause the algorithm to mistakenly flag a post that is related to a medical condition but not related to pharmaceuticals. We recently did a program for a medication-free treatment for depression. The post below, promoting this new treatment, would surely be flagged due to the mention of depression (with bonus flagging for the use of the word “you”):
- “Let’s talk about depression. If you deal with depression, you are not alone. It is a very real condition that affects people in all stages of life, myself included.”
Facebook told VentureBeat the above post was not allowed because the statement “you are not alone” implies the reader has depression. Our firm feels that the use of the qualifier “if” should make this okay. We do see how the text is off-limits given Facebook’s guidelines, but it still feels like we’re running into roadblocks left and right when trying to do our job — and it’s not as if we’re hacking an election, promoting guns, or selling drugs.
Though the post was honest and authentic, and written by one of our influencers, we couldn’t take the risk of a flag, so instead we had our influencer post the following (less engaging, highly sanitized) statement:
- “My commitment to wellness has me focusing on happiness this month. One thing that helps is reading inspirational stories about other people who are living their best life.”
3. Pictures with words
Images that contain words have also become problematic. Facebook’s spokesperson said it limits words in images in order to create the best possible user experience, and that research shows too much text in images damages that experience. While that’s helpful insight, we have seen a marked lack of consistency in how much text is acceptable. And even if the algorithm doesn’t pull down one of our images, too many words can still hurt the performance of the post. As Facebook details here, it has “recently implemented a new solution that allows ads with greater than 20 percent text to run, but with less or no delivery.” Certainly this is not any kind of solution for our clients, all of whom expect better performance than “less or no delivery.”
We are all big fans of Facebook at my firm, for both personal use and for its tremendous ability to get the word out on behalf of our clients. But we can’t help but feel frustrated when we repeatedly run into roadblocks as a result of an overactive algorithm.
Facebook serves as a vital piece of our marketing mix. We just wish it would become a bit more reliable. What do the Russians know that we don’t?
Danielle Wiley is CEO of Sway Group, an influencer marketing agency.