Twitter’s new ‘crisis misinformation’ policy rolls out
Twitter is rolling out a new ‘crisis misinformation’ policy designed to tackle “situations of armed conflict, public health emergencies, and large-scale natural disasters,” Yoel Roth, Twitter’s Head of Safety & Integrity, wrote in a blog post.
The new policy announcement comes even as Twitter is engaged in an acquisition deal with Tesla boss Elon Musk, who has made his views on ‘content moderation’ known via various tweets and posts. Musk has also insisted that the deal with Twitter cannot go ahead until the platform confirms the number of bots or fake users; Twitter pegs the number at 5 per cent, a claim Musk disputes.
Twitter defines “crises as situations in which there is a widespread threat to life, physical safety, health, or basic subsistence.” It adds that in order to determine whether a claim is misleading, it relies on “verification from multiple credible, publicly available sources, including evidence from conflict monitoring groups, humanitarian organizations, open-source investigators, journalists, and more.”
This will be a global policy which will “help to ensure viral misinformation isn’t amplified or recommended” by the platform during crises, adds the blog. The post notes that as soon as Twitter has evidence “that a claim may be misleading, we won’t amplify or recommend” this content across the platform.
This means the content will not be surfaced in the Home timeline, Search, or the Explore section of the app or website. Twitter will also “prioritise adding warning notices to highly visible tweets and tweets from high profile accounts, such as state-affiliated media accounts, verified and official government accounts,” which contain such false information.
Tweets that violate the crisis misinformation policy will be placed behind a warning notice which reads, “This Tweet violated the Twitter Rules on sharing false or misleading info that might bring harm to crisis-affected populations. However, to preserve this content for accountability purposes, Twitter has determined this Tweet should remain available.”
To be clear, Twitter will not be taking down the information which might be misleading, just limiting its reach.
According to the blog post, examples of content that may receive a warning notice include:
- False coverage or event reporting, or information that mischaracterizes conditions on the ground as a conflict evolves;
- False allegations regarding use of force, incursions on territorial sovereignty, or around the use of weapons;
- Demonstrably false or misleading allegations of war crimes or mass atrocities against specific populations;
- False information regarding international community response, sanctions, defensive actions, or humanitarian operations.

Strong commentary, efforts to debunk or fact-check, and personal anecdotes or first-person accounts do not fall within the scope of the policy.
So what happens when Twitter adds a warning notice to a piece of misinformation? Well, users will still be able to see it after clicking through the warning notice. But the content won’t be “amplified or recommended across the service.” Further, Twitter will disable the option to like, retweet or share that particular piece of content.
“We’ve found that not amplifying or recommending certain content, adding context through labels, and in severe cases, disabling engagement with the Tweets, are effective ways to mitigate harm, while still preserving speech and records of critical global events,” adds the blog post.
The first iteration of this policy is focused on international armed conflict, starting with the war in Ukraine, and Twitter plans to “update and expand the policy to include additional forms of crisis.” “The policy will supplement our existing work deployed during other global crises, such as in Afghanistan, Ethiopia, and India,” the company said.