Twitter data confirms image cropping algorithm was racially biased
Twitter has quantified the extent to which its image cropping algorithm was racially biased, admitting that white individuals were prioritised over Black individuals when images were algorithmically cropped on the platform.
In research [PDF] conducted by Twitter, the company tested its image cropping algorithm for race-based and gender biases and considered whether its model aligned with the goal of enabling people to make their own choices on the platform.
Looking at its image cropping algorithm, which uses a saliency approach, Twitter found that in comparisons of Black and white individuals, there was a 4% difference from demographic parity in favour of white individuals. In comparisons of Black and white women, there was a 7% difference in favour of white women; for their male counterparts, there was a 2% difference from demographic parity in favour of white men.
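The "difference from demographic parity" figures can be read as the gap between how often one group was favoured and an even 50/50 split. A minimal sketch of that calculation, using hypothetical pairwise crop outcomes (this is an illustration of the metric, not Twitter's actual evaluation code or data):

```python
def parity_difference(outcomes, group):
    """Return the percentage-point deviation from demographic parity (50%)
    for how often `group` was favoured in pairwise crop comparisons."""
    favoured = sum(1 for winner in outcomes if winner == group)
    rate = 100.0 * favoured / len(outcomes)
    return rate - 50.0

# Hypothetical outcomes: which individual the crop favoured in each paired image.
outcomes = ["white"] * 54 + ["black"] * 46
print(parity_difference(outcomes, "white"))  # 4.0 → a 4% difference from parity
```

Under this reading, a 4% difference means the crop favoured white individuals in roughly 54% of paired comparisons rather than an even 50%.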
Twitter started using a saliency approach to crop images in 2018.
Saliency algorithms work by estimating what a person might want to see first within a picture so that systems can determine how to crop an image to an easily-viewable size. These types of models are trained on how the human eye looks at a picture as a method of prioritising what’s likely to be most important to the most people, which Twitter said can be flawed as, in an image with multiple people, there is often no “ideal” solution.
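The cropping step described above can be sketched as: score every pixel for saliency, find the highest-scoring point, then centre a fixed-size crop window on it, clamped to the image bounds. A minimal illustration, where the function name and window logic are assumptions for clarity rather than Twitter's actual model:

```python
def crop_box(saliency, crop_h, crop_w):
    """Given a 2D saliency map (list of rows of scores), return the
    (top, left) corner of a crop_h x crop_w window centred on the
    most salient point, clamped to stay inside the image."""
    h, w = len(saliency), len(saliency[0])
    # Locate the pixel the model predicts viewers will look at first.
    _, y, x = max((saliency[r][c], r, c) for r in range(h) for c in range(w))
    top = min(max(y - crop_h // 2, 0), h - crop_h)
    left = min(max(x - crop_w // 2, 0), w - crop_w)
    return top, left

# A 4x6 map whose saliency peak is at row 1, column 4; crop a 2x2 window.
smap = [
    [0.1, 0.1, 0.1, 0.1, 0.2, 0.1],
    [0.1, 0.1, 0.1, 0.1, 0.9, 0.2],
    [0.1, 0.1, 0.1, 0.1, 0.2, 0.1],
    [0.1, 0.1, 0.1, 0.1, 0.1, 0.1],
]
print(crop_box(smap, 2, 2))  # (0, 3)
```

The flaw Twitter describes follows directly from this design: when several faces each produce a salient region, the window still centres on a single peak, so whoever scores highest wins the crop and the others can be cut out entirely.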
The platform also looked at how the algorithm made decisions for image cropping when it came to gender. In comparisons of men and women, there was an 8% difference from demographic parity in favour of women.
It also tested for the “male gaze” by randomly selecting 100 male- and female-presenting images that had more than one area in the image identified by the algorithm as salient and observed how Twitter’s model chose to crop the image. In that testing, Twitter found no evidence of objectification bias, which meant certain body parts of women were not prioritised more when the algorithm was determining what to crop.
Twitter’s findings come in response to public outcry last year that criticised the platform’s image preview cropping tool, which appeared to automatically crop to white individuals when both Black and white individuals were present in an image.
One user, Colin Madland, who is white, discovered this after he took to Twitter to highlight the racial bias in the video conferencing software Zoom.
Since confirming the racial bias, Twitter has concluded that how an image is cropped is a decision best made by people, not algorithms.
“Even if the saliency algorithm were adjusted to reflect perfect equality across race and gender subgroups, we’re concerned by the representational harm of the automated algorithm when people aren’t allowed to represent themselves as they wish on the platform,” Twitter software engineering director Rumman Chowdhury said on Wednesday.
“Saliency also holds other potential harms beyond the scope of this analysis, including insensitivities to cultural nuances.”
According to Twitter, these findings were what led to its decision to roll out improvements at the start of this month that changed how images are viewed and posted. Now, when users tweet a photo uploaded with their iOS or Android device, it appears in the timeline in its entirety.
Users can also preview what an image will look like in the tweet composer before posting, rather than have it cropped at the whim of Twitter's algorithm.
These changes and findings are part of Twitter's commitment, made after the outcry, to continually test its algorithms for bias and to give users more choice in how images appear on its platform.
“While our analyses to date haven’t shown racial or gender bias, we recognize that the way we automatically crop photos means there is a potential for harm,” Twitter CTO Parag Agrawal and CDO Dantley Davis wrote in an October blog post.