An algorithm Twitter uses to decide how photos are cropped in people’s timelines appears to automatically favor the faces of white people over those of people with darker skin. The apparent bias was discovered in recent days by Twitter users posting photos on the social media platform. A Twitter spokesperson said the company plans to reevaluate the algorithm and make the results available for others to review or replicate.
— Marco Rogers (@polotek) September 19, 2020
Twitter scrapped its face detection algorithm in 2017 in favor of a saliency detection algorithm, which is designed to predict the most important part of an image. A Twitter spokesperson said today that no race or gender bias was found in an evaluation of the algorithm before it was deployed “but it’s clear we have more analysis to do.”
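Twitter has not published its cropping pipeline, but the basic idea of saliency-based cropping can be sketched as follows: a model scores every pixel for visual importance, and the crop window is centered on the highest-scoring region. This is a minimal illustration with a hypothetical `saliency_crop` function and a precomputed saliency map standing in for a trained model's output; it is not Twitter's actual implementation.

```python
import numpy as np

def saliency_crop(image, saliency, crop_h, crop_w):
    """Crop `image` to (crop_h, crop_w), centered on the most salient
    pixel and clamped to the image bounds.

    `saliency` is a 2-D array of per-pixel importance scores; in a real
    system it would come from a trained saliency-prediction model.
    """
    h, w = saliency.shape
    # Find the coordinates of the saliency peak.
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    # Clamp the crop window so it stays fully inside the image.
    top = min(max(y - crop_h // 2, 0), h - crop_h)
    left = min(max(x - crop_w // 2, 0), w - crop_w)
    return image[top:top + crop_h, left:left + crop_w]

# Toy example: a 10x10 "image" whose saliency peak is near the right edge.
img = np.arange(100).reshape(10, 10)
sal = np.zeros((10, 10))
sal[2, 7] = 1.0  # pretend the model flagged this pixel as most important
crop = saliency_crop(img, sal, 4, 4)
```

The fairness question raised in the article lives upstream of this logic: whatever the model scores as “most important” determines who stays in the frame, so any bias in the saliency scores propagates directly into the crop.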
Twitter engineer Zehan Wang tweeted that bias was detected in 2017 before the algorithm was deployed but not at “significant” levels. A Twitter spokesperson declined to clarify the discrepancy between these accounts of the initial bias assessment and said the company is still gathering details about the assessment that took place before the algorithm’s release.
I wonder if Twitter does this to fictional characters too.
Lenny Carl pic.twitter.com/fmJMWkkYEf
— Jordan Simonovski (@_jsimonovski) September 20, 2020
On Saturday, algorithmic bias researcher Vinay Prabhu, whose recent work led MIT to scrap its 80 Million Tiny Images dataset, created a methodology for assessing the algorithm and was planning to share results via the recently created Twitter account Cropping Bias. However, following conversations with colleagues and hearing public reaction to the idea, Prabhu told VentureBeat he’s reconsidering whether to go forward with the assessment and questions the ethics of using saliency algorithms.
“Unbiased algorithmic saliency cropping is a pipe dream, and an ill-posed one at that. The very way in which the cropping problem is framed, its fate is sealed, and there is no woke ‘unbiased’ algorithm implemented downstream that could fix it,” Prabhu said in a Medium post.
Prabhu said he’s also reconsidering the assessment because he’s concerned some people may use experimentation results to claim an absence of racial bias. That’s what he said happened with initial assessment results.
“At the end of the day, if I do this extensive experimentation … what if it only serves to embolden apologists and people who are coming up with pseudo intellectual excuses and appropriating the 40:52 ratio as proof of the fact that it’s not racist? What if it further emboldens that argument? That would be exactly contrary to what I aspire to do. That’s my worst fear,” he said.
Twitter chief design officer Dantley Davis said in a tweet this weekend that Twitter should stop cropping images altogether. VentureBeat asked a Twitter spokesperson about potentially getting rid of image cropping in Twitter timelines, about ethical questions surrounding the use of saliency algorithms, and about what datasets were used to train the saliency algorithm. The spokesperson declined to answer those questions but said Twitter employees are aware people want more control over image cropping and are considering a number of options.
Updated 10:09 a.m. September 21 to include responses from Twitter and Vinay Prabhu.