Twitter is providing support to two groups of academics looking to study the prevalence of some of the social platform’s most problematic content — as well as how Twitter can make people more open to different viewpoints.
In March, Twitter put out a request for proposals, asking academics what kind of metrics the company should use to determine how healthy the discourse on Twitter is. Twitter looked for researchers who were willing to produce “peer-reviewed, publicly available, open-access research articles and open source software whenever possible.” The company said it received 230 proposals, and organized a review committee consisting of individuals from a variety of departments within Twitter to judge the proposals.
The company announced today that it settled on two proposals to support. Both groups of researchers will receive access to public data and funding from Twitter, though a company spokesperson declined to say how much funding.
One, led by a researcher at Leiden University in the Netherlands, will develop metrics to measure the prevalence of echo chambers and uncivil discourse on Twitter. With regard to uncivil discourse, the researchers aim to create algorithms that can distinguish between "incivility" and "intolerance" in Twitter conversations — they define the latter as "hate speech, racism, and xenophobia," while incivility is dialogue that "breaks the norms of politeness."
The second study, led by a pair of researchers at The University of Oxford and the University of Amsterdam, will look at how interacting with people on Twitter who represent a wide variety of backgrounds and perspectives can potentially decrease prejudice and discrimination among users.
Previous research has shown that different political groups have very little interaction with one another on Twitter. Of course, some users may have become polarized in the first place because social platforms like Twitter and Facebook use algorithms that steer users toward the content and accounts they are most likely to engage with — often, ones that share their political and social views.
Ultimately, the success of these studies will depend on whether Twitter actually listens to the researchers' feedback and uses it to make meaningful changes to the platform. The company has spent the past few months highlighting changes that it believes will lead to a "healthier conversation" on Twitter, such as acquiring an anti-spam startup called Smyte, which developed review tools to catch spam and hateful content, and limiting the visibility of tweets from trolls and bullies.