Anyone who has spent even a short amount of time perusing the comments under stories on news sites or social networks will know all too well how unpleasant the experience can be, thanks to troll-powered abuse. That is why Google’s Counter Abuse Technology team partnered with sister company Jigsaw back in 2017 to launch Perspective, a publisher-focused API that leverages machine learning to automatically detect toxic comments.

While that program has continued to expand, most recently to target Spanish-language news platforms, it has so far been focused on empowering publishers and community managers to filter out abuse from their sites. But from today, Jigsaw — the in-house incubator from Google’s parent company, Alphabet — will be expanding the scope of its product to give internet users themselves more control over the comments they’re subjected to.

In tune

By installing a new Chrome extension called Tune, which requires you to sign into your Google Account through the browser, you’re invited to control the level of toxicity that you’re willing to see on five social platforms: YouTube, Twitter, Facebook, Reddit, and Disqus. You can select as many of these as you like.

Above: Tune: Jigsaw’s toxic comment filter

After clicking the Tune icon at the top of your browser while on one of the aforementioned websites, you’ll be presented with a digital volume dial that you can rotate through various filters, from “Show all” to “Hide all.”

On “full volume,” you’ll see everything: profanities, insults, personal attacks, and the rest.

Above: Tune: Jigsaw’s toxic comment filter (full volume)

When the setting is lowered all the way to the left, you’ll see almost no comments at all.

Above: Tune: Jigsaw’s toxic comment filter (lowest volume)

It’s worth noting here that Tune isn’t really designed to be a perfect product — it’s an experiment. Based on our brief tests, it certainly didn’t hide everything, and it was difficult to spot the differences between some of the adjacent settings in terms of which comments they let through. Still, it highlights the broader push to manage online abuse at scale — moderating that volume of comments simply isn’t possible for humans alone.

Moreover, not every individual has the same threshold for viewing toxic comments. For many people, it’s water off a duck’s back. For others, reading garbage comments is a thoroughly depressing experience that makes them want to switch off completely. That’s why the general idea behind Tune is sound — it transfers that power from the publisher to the reader.
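Conceptually, the dial amounts to a per-reader toxicity threshold applied to each comment. Here is a minimal sketch in Python, assuming each comment carries a toxicity score between 0 and 1 (as Perspective-style models produce); the dial positions, threshold values, and data model are illustrative assumptions, not Tune’s actual internals:

```python
# Sketch of dial-style comment filtering. The dial positions and
# thresholds below are illustrative assumptions, not Tune's real ones.

DIAL_SETTINGS = {
    "hide_all": 0.0,   # lowest volume: hide nearly everything
    "quiet": 0.3,
    "medium": 0.6,
    "loud": 0.85,
    "show_all": 1.01,  # full volume: show everything, even score 1.0
}

def tune_filter(comments, dial):
    """Keep only comments whose toxicity score falls below the
    threshold implied by the chosen dial position."""
    threshold = DIAL_SETTINGS[dial]
    return [c for c in comments if c["toxicity"] < threshold]

comments = [
    {"text": "Great article, thanks!", "toxicity": 0.02},
    {"text": "I disagree with the premise.", "toxicity": 0.15},
    {"text": "You are an idiot.", "toxicity": 0.92},
]

print([c["text"] for c in tune_filter(comments, "medium")])
# keeps the first two comments; the insult is filtered out
```

The point of putting the threshold in the reader’s hands, rather than hard-coding it server-side, is exactly the power transfer described above: two readers of the same thread can see very different slices of it.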

Fresh perspective

Perspective kicked off more than two years ago in English, starting with the New York Times, and it later expanded to the Guardian, the Economist, Wikipedia, El País, and more. Though Tune is a different product from the Perspective API, they share the same underlying intelligence. And while Tune is far from a finished product, Jigsaw said it hopes to encourage other developers to build on its idea — as such, Tune is an open source project available to anyone on GitHub.
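For developers curious about the shared model, Perspective exposes its toxicity scoring through a `comments:analyze` endpoint. The sketch below builds the request payload that endpoint expects, in Python; the API key and response handling are elided, and the exact request shape should be checked against Perspective’s current documentation before use:

```python
import json

# Perspective's comment-analysis endpoint (an API key is required
# to actually call it; the key and HTTP call are omitted here).
PERSPECTIVE_URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
)

def build_analyze_request(text):
    """Build the JSON body for a Perspective TOXICITY request,
    following the documented request structure."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }

payload = build_analyze_request("You are an idiot.")
print(json.dumps(payload, indent=2))
# A successful response carries a score between 0 and 1 under
# attributeScores.TOXICITY.summaryScore.value.
```

A client like Tune can then compare that summary score against whatever threshold the reader has dialed in.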

“Tune builds on the same machine learning models that power Perspective to let people set the ‘volume’ of conversations on a number of popular platforms, including YouTube, Facebook, Twitter, Reddit, and Disqus,” noted Jigsaw product manager CJ Adams, in a blog post. “We hope Tune inspires developers to find new ways to put more control into the hands of readers to adjust the level of toxicity they see across the internet.”

A Pew Research Center report back in 2017 found that 40 percent of internet users in the U.S. had experienced online harassment. More pertinent to Jigsaw’s Tune extension, however, the report found that roughly a quarter of Americans had decided not to post an online comment for fear of harassment, while more than 10 percent had stopped using a website altogether.

Naturally, Alphabet does not want fear of harassment to permeate its online services, though it’s also eager to point out that its own YouTube isn’t the only platform plagued by abuse. This is why it’s making Tune available for use on the other major social networks.

“Most of us spend more time reading online comments than writing or moderating them,” Adams added. “As we read, a single toxic post can make us give up on a discussion completely and miss out on reading valuable thoughts buried underneath the shouting. Toxicity also has a chilling effect on conversations, making people less likely to join discussions online if they fear their contribution will be drowned out by louder, meaner voices.”