“We are committed to leading the industry in the fight against online bullying, and we are rethinking the whole experience of Instagram to meet that commitment,” proclaimed Instagram head Adam Mosseri in a blog post this week.
Technology companies under increasing pressure to thwart online abuse are coming out with a steady stream of new tools and programs designed to make their platforms feel less like cesspools and more, well, “user friendly.”
A report published by a trio of U.K. universities back in April found that people under the age of 25 who are subjected to cyberbullying are more than twice as likely to self-harm or attempt suicide. “Self-Harm, Suicidal Behaviours, and Cyberbullying in Children and Young People” reviews cyberbullying studies from 1996 to 2017 that collectively involved 156,000 individuals from more than 30 countries.
A lot changed over that 21-year period, of course. For millions of young people, the only world they’ve ever known is one in which they communicate with family and friends through MySpace, Facebook, Instagram, Snapchat, WhatsApp, or other social media platforms. And any parent trying to ban their offspring from using such services faces an uphill battle from the outset.
Thus, any real hope of combating online abuse appears to lie with the platforms themselves. On Monday, Instagram — which now claims more than 1 billion users worldwide — announced the latest in a line of measures designed to thwart online bullies. The first of these new features is a tool that uses artificial intelligence (AI) to detect potentially offensive language and “encourage positive interactions.” In effect, a would-be abuser will receive an alert asking whether they want to reconsider posting a message before it goes live.
In some regards, this feature is a little like Clippy’s “It looks like you’re writing a letter” guidance from ’90s-era Microsoft Word, repurposed for the abuse-laden Instagram age. But Instagram maintains that in early tests this message led to “some people” retracting their comments.
“This intervention gives people a chance to reflect and undo their comment and prevents the recipient from receiving the harmful comment notification,” Mosseri said. “From early tests of this feature, we have found that it encourages some people to undo their comment and share something less hurtful once they have had a chance to reflect.”
This particular feature also has remnants of the Messenger Kids Pledge Facebook announced last year, which is basically a set of “guiding principles” that “encourage the responsible use” of Facebook’s child-focused Messenger app.
As well-meaning as such initiatives may be, they don’t do much to curb genuine cyberbullying and abuse, which is where Instagram’s upcoming shadow banning feature comes into play.
One of the issues young people face in dealing with online abuse is that the perpetrator may be part of a broader social peer group in real life — so proactively blocking, unfollowing, or reporting a bully can have repercussions for them. Nobody wants to be ostracized or seen as a “snitch.” To address this, Instagram will introduce a new feature it’s calling Restrict, which, when activated, makes a restricted individual’s comments on their target’s posts visible only to the commenter themselves — though the target can choose to review and approve specific comments.
Moreover, the restricted individual won’t be able to see when their target is “active” on Instagram, or whether they’ve read a direct message.
This could prove a genuinely useful feature, because someone who posts mean comments won’t know that nobody else can see them. “We wanted to create a feature that allows people to control their Instagram experience, without notifying someone who may be targeting them,” Mosseri added.
While online abuse is something major tech platforms have tried to battle for years, many have been ramping up their efforts of late. Earlier this year, Alphabet’s Jigsaw launched a Chrome extension that lets you filter out abusive comments from all the major social networks, regardless of whether those attacks are directed at you personally. And Facebook issued a new one-strike policy for livestreamers in the wake of the New Zealand terrorist attack.
Besides cyberbullying efforts, YouTube also now offers a “kid friendly” version of the app, though that hasn’t been without its own controversy. And Facebook is also trying to hook kids in from a younger age with its Messenger Kids app. Meanwhile, Google is giving more control to worried parents with initiatives such as Family Link, which lets them control their kids’ devices remotely, and Apple has been turbocharging parental controls on iOS.
But the growing focus on cyberbullying and abuse points to a broader backlash against social media and its effect on society. Concerns cover everything from conspiracy videos on YouTube to Facebook posts promoting fake miracle cures.
Social networks — by definition — are designed to connect people remotely, but some studies suggest a causal link between social media use and decreased mental wellbeing, including loneliness and depression. A study published in the Journal of Social and Clinical Psychology last year found that limiting social media use to 30 minutes per day may lead to a “significant improvement in wellbeing.”
Elsewhere, some studies have indicated that there may be a correlation between “likes” received on a Facebook post and a person’s self-esteem, while others have suggested that for teenagers, receiving “likes” on their photos activates the same brain circuits as eating chocolate.
Earlier this year, Instagram began experimenting with keeping “like” counts private in an effort to encourage followers to “focus on the photos and videos [shared]” rather than the number of likes received. Twitter is dabbling with similar upgrades through its experimental Twttr app.
All sides of the spectrum
It’s worth stressing that the tech backlash is coming from all sides of the political and societal spectrum and includes citizens, politicians, governments, and Silicon Valley itself.
Donald Trump’s anti-tech proclamations are a regular occurrence, and just this week the White House is hosting a social media summit that “promises to be a carnival of conservative bias victimhood,” as one VentureBeat writer put it. Meanwhile, governments the world over are decrying the role encryption plays in helping criminals.
From within large tech companies, workers are pushing their employers to take a more ethical stance with regard to technologies they’ve developed. And many tech luminaries from Silicon Valley and beyond — the very people who helped create these technologies — prohibit their own children from using technology and related services. Steve Jobs was known to run a low-tech household; Tim Cook said last year that he doesn’t want his nephew on social networks; and Bill Gates didn’t allow his kids to have mobile phones until they were teenagers, with his wife Melinda saying she regretted giving in even at that age. Countless other high-profile tech executives are joining the pushback against smartphones and social media, however ironic that may be.
Attempts to clean up social networks, whether through AI-powered anti-abuse tools or parental control mechanisms, are to be applauded. As long as young people are on these platforms, companies should be doing as much as they can to protect them — that is a given. But such efforts are in reality little more than Band-Aids, and it’s difficult to see how they will counter the swelling tide of anti-tech sentiment.