As we gear up for what will surely be a combative presidential term, and as Twitter’s most toxic users succeed in poisoning the service, more pressure should be placed on the company to ensure that updated policies and tools are in place to protect users.
Now is the time for Twitter to prove it has what it takes to curb abuse.
Company chief executive Jack Dorsey has publicly come out against harassment on Twitter, saying in July that “abuse is not part of civil discourse. It shuts down conversation and prevents us from understanding one another. No one deserves to be a target of abuse online and it doesn’t have a place on Twitter.” His statement came amid outcry over public figures like Leslie Jones abandoning the service after being attacked by users without any satisfactory resolution. Ultimately, Twitter terminated the offending accounts and banned Breitbart News editor Milo Yiannopoulos.
With Donald Trump’s rise to the presidency comes a reinvigorated spirit among trolls and those harboring hate for women, people of color, Muslims, Jews, and anyone not conforming to their vision of what the U.S. should be. USA Today reported earlier this week that white supremacists are ganging up on supporters of former Secretary of State Hillary Clinton, urging them to commit suicide. This is only one case in the sea of toxicity that has emerged in just the past couple of days.
Conversations will continue on Twitter, but the platform must provide defenses not only to its users but also to developers, who can filter out hurtful and demeaning tweets so that ideas and knowledge are shared. Dissenting opinions, debate, and dialogue of all sorts have their place on these platforms, but constant abuse and harassment with little remedy can not only diminish usage but also dissuade others from participating in conversations.
As more racism, sexism, harassment, and abuse comes out, it’s imperative that social media companies enact policies, create better tools, and empower third-party developers to protect users so a free-flowing exchange of information can occur. If Twitter wants to be a place for news and real-time conversation — the modern town square — it needs to do more to moderate its most vicious users.
In an SEC filing last month, Twitter promised it would implement “meaningful” safety updates in November, but it has so far failed to provide specifics, instead punting the decision until after the election. As Twitter’s user base stagnates and investors grow increasingly skeptical of its core business, what will Twitter do next? Under what will likely become a polarizing presidential term, it’s time for Twitter to put up or shut up.
On Wednesday, Dorsey tweeted that he believes “all people are created equal,” but what Twitter has shown, at least publicly, is that trolls and abusers have more rights and protections than ordinary users — the users who still struggle to find a reason to use the service in the first place. The company may very well be working on things behind the scenes, but it’s not doing a good enough job convincing its users and everyone else that it can stand up to bullies and offer the service most people want.
To be fair, Twitter offers users some ways to fight back. There’s a report feature that lets you notify Twitter of harassment, and the company’s quality filter setting is now available to everyone (previously it was only available to those with verified accounts).
And Twitter has engaged with groups such as Samaritans, GLAAD, and Women, Action, and the Media to educate itself on ways to protect its users. But what’s the next logical step to empower people to defend themselves?
With Facebook, people have to use their real names, but on Twitter you can be anyone. With Twitter scaling beyond just its traditional service to now include live video, how will well-baked safety mechanisms factor in? Twitter has to prove it is ready for this task — that it’s not about just making it easier to tweet, but also that it’s a reasonably safe place to do so.