Twitter Seeks Transparency In Its Latest Safety Efforts | Social Media

“You don’t want to see a Tweet you’ve reported, but you do want to know we’ve done something about it. And all those Tweets that break our rules? You should know we’ve done something about them too.”

On October 17th, Twitter announced the changes to its reporting measures with the above tweet. The “report” function, designed to cull offensive content and harassment, has repeatedly come under fire from the platform’s users. This change aims to address two challenges inherent in prior iterations of the process: one, a user who reports a tweet is rarely informed of a claim’s outcome; two, the objected-to content remained visible while Twitter made its ruling.

In a compromise, Twitter addressed the latter with a shield to hide the post—unless the user requests to see it. “Before, Twitter had experimented with both showing or hiding the tweet you reported, but users told the company they sometimes needed to refer back to the tweet – like when they’re trying to report it to law enforcement, for example,” TechCrunch reported. “Now, Twitter says it will hide the tweet behind an informational notice, but allow you to tap the notice to view the tweet again.”

As for the former, updates on the timeline will distinguish between a post removed by Twitter for violating the site’s guidelines and a post removed by a user. If the post is a violation, “[Twitter] will display a notice that states the tweet is unavailable because it violated the Twitter Rules. This will also include a link to those rules and an article that provides details on how Twitter enforces its rules.”

These measures are the most significant Twitter has taken to date in cracking down on its harassment culture. Even after a wave of reforms was announced in December 2017, hate speech, violence, and user targeting persist, much of it allegedly still falling within the company’s user guidelines. While these latest changes are a step in the right direction, many have argued that other features (Moments, circular avatars, character limits, and live programming) have wrongfully taken precedence over the vital need to make the space safer for users.

Even as these changes take hold, the burden remains largely on the victim of objectionable content to report it to Twitter. Twitter’s CEO Jack Dorsey claimed naiveté more than once as the issue grew, but former Reddit CEO Ellen Pao has another theory: there’s no incentive for him to care. “[S]ocial companies and the leaders who run them are rewarded for focusing on reach and engagement, not for positive impact or for protecting subsets of users from harm,” Pao wrote for WIRED earlier this month. “If they don’t need to monitor their platforms, they don’t need to come up with real policies—and avoid paying for all the people and tools required to implement them.”

Now that voices of hate and harassment are hurting Twitter’s bottom line, it will be interesting to watch how these changes are received and what additional measures Twitter takes to satisfy the concerns of critics and its most vulnerable users.

