Facebook begins limiting Messenger forwarding in Sri Lanka, demoting violating content in Myanmar
In a blog post this evening, Facebook pulled back the curtain on a few of its efforts to curb hate speech and other content that runs afoul of its community guidelines. The post arrives roughly a month after the social network published its latest Community Standards Enforcement Report, in which it revealed that its automated tools now proactively detect 96.8% of certain categories of prohibited content before humans spot it.
The Menlo Park tech giant says it’s taking additional steps to address virality and reduce the spread of messages that can “amplify” and “exacerbate” conflict. To this end, following Facebook-owned WhatsApp’s decision earlier this year to limit forwarded messages globally, Facebook says it has imposed a similar restriction on Messenger in Sri Lanka: a message can now be forwarded to no more than five chat threads at a time.
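For a sense of how a cap like this could be enforced, here is a minimal Python sketch of a per-message forwarding counter. Facebook has not published Messenger’s implementation, so every name here (ForwardLimiter, MAX_FORWARD_THREADS, try_forward) is a hypothetical illustration of the five-thread threshold described above.

```python
# Hypothetical sketch only: Messenger's internals are unpublished, so these
# names and this logic are illustrative assumptions, not Facebook's code.

MAX_FORWARD_THREADS = 5  # the per-message cap described in the post


class ForwardLimiter:
    """Tracks the distinct chat threads each message has been forwarded to."""

    def __init__(self, max_threads: int = MAX_FORWARD_THREADS):
        self.max_threads = max_threads
        self._forwards: dict[str, set[str]] = {}  # message_id -> thread ids

    def try_forward(self, message_id: str, thread_id: str) -> bool:
        """Record the forward and return True if under the cap, else False."""
        threads = self._forwards.setdefault(message_id, set())
        if thread_id in threads:      # re-sending to the same thread is free
            return True
        if len(threads) >= self.max_threads:
            return False              # cap reached: block the forward
        threads.add(thread_id)
        return True


limiter = ForwardLimiter()
assert all(limiter.try_forward("msg-1", f"thread-{i}") for i in range(5))
assert not limiter.try_forward("msg-1", "thread-5")  # sixth thread is blocked
```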
Last February, WhatsApp took steps to tackle misinformation ahead of national elections in India, one of Facebook’s largest markets with over 200 million users. The app has been blamed for inciting violence that has cost dozens of lives, for contributing to ethnic violence, and for spreading hateful and racist messages about prominent political figures.
Meanwhile, in Myanmar, Facebook says it has started reducing the distribution of content shared by users who have “demonstrated a pattern of posting content that violates [its] Community Standards.” If the policy proves successful in mitigating harm, Facebook says it might introduce it in other countries.
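One simple way to picture this kind of demotion is as a multiplier applied to a post’s distribution score based on its author’s violation history. The formula and names below are illustrative assumptions, not Facebook’s unpublished ranking system.

```python
# Illustrative only: the decay formula and names here are assumptions, not
# Facebook's actual ranking logic, which has not been published.

def demotion_multiplier(violation_count: int, decay: float = 0.5) -> float:
    """Shrink toward zero as an author accumulates Community Standards strikes."""
    return decay ** violation_count


def ranked_score(base_relevance: float, author_violations: int) -> float:
    """Demote a post's distribution score based on its author's history."""
    return base_relevance * demotion_multiplier(author_violations)


# A clean account keeps its score; each recorded strike halves distribution.
assert ranked_score(1.0, 0) == 1.0
assert ranked_score(1.0, 2) == 0.25
```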
Facebook reiterated that it’s increasingly using AI to detect abusive speech, for instance by adding memes and graphics that violate its policies to a photo bank so that similar posts can be deleted automatically. The company also says it’s identifying clusters of words (i.e., graphs) that might be used in hateful and offensive ways, and tracking how those clusters vary over time and geography to stay ahead of local trends.
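Photo banks of this kind are typically built on perceptual hashing, which maps visually similar images to similar bit strings so near-duplicates of banked content can be caught. The Python sketch below uses a simple difference hash to show the principle; it is an assumption for illustration, not the hashing scheme Facebook actually uses, which is more sophisticated.

```python
# Illustrative photo-bank lookup using a simple difference hash ("dHash").
# Production systems use far more robust perceptual hashing; this only shows
# the principle of matching near-duplicates of known violating images.

def dhash(pixels: list[list[int]]) -> int:
    """Hash a grayscale pixel grid: each bit marks whether a pixel is brighter
    than its right-hand neighbor, so similar images yield similar hashes."""
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def hamming(a: int, b: int) -> int:
    """Count differing bits between two hashes."""
    return bin(a ^ b).count("1")


def matches_photo_bank(image_hash: int, bank: set[int], max_distance: int = 4) -> bool:
    """Flag an image whose hash lands within a few bits of any banked hash."""
    return any(hamming(image_hash, banked) <= max_distance for banked in bank)
```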
Additionally, Facebook says it’s leveraging AI to recognize posts that might contain graphic violence, as well as potentially violent or dehumanizing comments, in order to limit their spread. In May, Facebook claimed that AI now proactively identifies 65% of the more than four million hate speech posts removed each quarter, up from 24% just over a year ago and 59% in Q4 2018.