Presented by Two Hat Security
Efficient moderation and positive reinforcement boost online community retention and growth. Catch up on this talk featuring analyst and author Brian Solis, along with Two Hat Security CEO Chris Priebe, about the changing landscape of online conversations, and how artificial intelligence paired with human interaction is solving current content moderation challenges.
“I want to quote the great philosopher Ice-T, who said recently that social media has made too many of us comfortable with disrespecting people and not getting punched in the mouth for it,” says Brian Solis, principal digital analyst at Altimeter and the author of Lifescale. “Somehow this behavior has just become the new normal.”
It seems like hate speech, abuse, and extremism are just the cost of being online today, but the problem goes back to the dawn of the internet, says Chris Priebe, CEO and founder of Two Hat Security. Anyone can add content to the internet, and what that was supposed to offer the world was cool things like Wikipedia — everyone contributing their thoughts in a great knowledge share that makes us stronger. But that’s not what we got.
“Instead we ended up learning, don’t read the comments,” Priebe says. “The dream of what we could do didn’t become reality. We just came to accept in the 90s that this is the cost of being online. It’s something that happens as a side effect of the benefits of the internet.”
And from the beginning, it’s been building on itself, Solis says: social media and other online communities have given more people more places to interact online, and emboldened more people to say and do things they would never do in the real world.
“It’s also being subsidized by some of the most popular brands and advertisers out there, without necessarily realizing that this is what they’re subsidizing,” he adds. “We’re creating this online society, these online norms and behaviors, that are being reinforced in the worst possible way without any kind of consequences or regulation or management. I think it’s just gone on way too long, without having this conversation.”
Common sense used to tell us to be as good a person online as we are in the real world, he continues, but something happened along the way and this just became the new normal. People don’t even care about the consequences of losing friendships and family members, or destroying relationships, because they feel that the need to express whatever’s on their mind, whatever they feel, is more important than anything else.
“That’s the effect of having platforms with zero guidelines or consequences or policies that reinforce positive behavior and punish negative behavior,” Solis says. “We wanted that freedom of speech. We wanted that ability to say and do anything. These platforms needed us to talk and interact with one another, because that’s how they monetize those platforms. But at the end of the day, this conversation is important.”
“We reward people for the most outrageous content,” Priebe agrees. “You want to get more views, more likes, those kinds of things. If you can write the most incredible insult to someone, and really burn them, that kind of thing can get more eyeballs. Unfortunately, the products are designed in a way where if they get more eyeballs, they get more advertising dollars.”
Moderation isn’t about whitewashing the internet — it’s about allowing real, meaningful conversations to actually happen without constant derailment.
“We don’t actually have free speech on the internet right now,” says Priebe. “The people who are destroying it are all these toxic trolls. They’re not allowing us to share our true thoughts. We’re not getting the engagement that we really need from the internet.”
Two Hat studies have found that people who have a positive social experience are three times more likely to come back on day two, and three times more likely again to come back on day seven. People stay longer when they find community and a sense of belonging. Other studies have shown that users who run into a lot of toxic and hateful content are 320 percent more likely to leave.
“We have to stop trading short-term wins,” Priebe adds. “When someone adds content and a whole bunch of people engage with it because it’s hateful and creates a bunch of ‘I can’t believe this is happening’ responses, that’s not actually good eyeballs or good advertising spend. We have to find the content that causes people to engage deeper.”
“The communities themselves have to be accountable for the type of interaction and the content that is shared on those networks, to bring out the best in society,” Solis says. “It has to come down to the platforms to say, what kind of community do we want to have? And advertisers to say, what kind of communities do we want to support? That’s a good place to start, at least.”
There are three lines of defense for online communities. The first is a filter, backed by known libraries of specifically damaging content keywords. The second uses the reputation of your users to help the filter zero in on abusive language, making the filter more restrictive for known harassers. The third is asking users to report content, which is becoming a requirement across multiple jurisdictions, along with the obligation for community owners to deal with those reports.
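The talk doesn’t spell out how these layers fit together in code, but as a rough illustration, here is a minimal sketch of how a community platform might chain them: a keyword filter, a reputation score that tightens that filter for repeat offenders, and a report queue for human review. Every name, word list, and threshold below is a hypothetical assumption for illustration, not Two Hat’s actual product.

```python
# A minimal, hypothetical sketch of the three lines of defense described above,
# assuming a simple chat-style platform. Word lists, thresholds, and names are
# illustrative assumptions only.

from dataclasses import dataclass

# Line of defense 1: a filter backed by a library of known-damaging keywords.
BLOCKLIST = {"slur_example", "threat_example"}   # placeholder terms, always blocked
WATCHLIST = {"borderline_example"}               # blocked only for low-reputation users


@dataclass
class User:
    name: str
    reputation: float = 1.0      # drops as violations accumulate
    pending_reports: int = 0     # community reports awaiting human review


def moderate(user: User, message: str) -> str:
    """Apply the keyword filter, tightened by the sender's reputation."""
    words = set(message.lower().split())

    # Always block clearly damaging content and dent the sender's reputation.
    if words & BLOCKLIST:
        user.reputation = max(0.0, user.reputation - 0.2)
        return "blocked"

    # Line of defense 2: known harassers face a stricter filter, so borderline
    # terms that trusted users could get away with are blocked for them.
    if user.reputation < 0.5 and words & WATCHLIST:
        return "blocked"

    return "allowed"


def report(reported: User) -> None:
    """Line of defense 3: user reports queue content for human review,
    which community owners in many jurisdictions must act on."""
    reported.pending_reports += 1


if __name__ == "__main__":
    troll = User("troll", reputation=0.3)
    print(moderate(troll, "you are a borderline_example"))     # blocked (strict filter)
    newcomer = User("newcomer")
    print(moderate(newcomer, "you are a borderline_example"))  # allowed (trusted)
    print(moderate(newcomer, "this contains slur_example"))    # blocked
    report(troll)
    print(troll.pending_reports)                               # 1
```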
“The way I would tackle it or add to it would be on the human side of it,” Solis adds. “We have to reward the type of behaviors that we want, the type of engagement that we want. The value to users has to take incredible priority, but also to the right users. What kind of users do you want? You can’t just go after the market for everyone anymore. I don’t think that’s good enough. Also, bringing quality engagement and understanding that the numbers might be lower, but they’re more valuable to advertisers, so that advertisers want to reinforce that type of engagement. It really starts with having an introspective conversation about the community itself, and then taking the steps to reinforce that behavior.”
To learn more about the role that AI and machine learning play in accurate, effective content moderation, the challenges platforms from Facebook to YouTube to LinkedIn are facing on- and offline, and the ROI of safe communities, catch up now on this VB Live event.
Don’t miss out!
Access this free event on demand now.
You’ll learn:
- How to start a dialogue in your organization around protecting your audience without imposing on free speech
- The business benefits of joining the growing movement to “raise the bar”
- Practical tips and content moderation strategies from industry veterans
- Why Two Hat’s blend of AI+HI (artificial intelligence + human interaction) is the first step towards solving today’s content moderation challenges
Speakers:
- Brian Solis, Principal Digital Analyst at Altimeter, author of “Lifescale”
- Chris Priebe, CEO & founder of Two Hat Security
- Stewart Rogers, VentureBeat