Facebook now gives users who flag fake news a credibility score


Facebook is now giving users a credibility score between 0 and 1 to help determine whether or not they're accurately reporting instances of fake news, the Washington Post reports.

Three years ago, Facebook first gave users the ability to report posts in their News Feed for containing false or misleading news. The hope was that users could help Facebook more quickly stop hoaxes from spreading.

However, Facebook quickly discovered that people weren't always reporting fake news because it was actually fake. Often, they were just flagging posts they disagreed with. The company also started employing third-party fact-checkers last year to serve as the final arbiters of whether a post should be labeled fake news, but that still doesn't stop people from incorrectly flagging articles in the first place.

“If someone previously gave us feedback that an article was false and the article was confirmed false by a fact-checker, then we might weight that person's future false news feedback more than someone who indiscriminately provides false news feedback on lots of articles, including ones that end up being rated as true,” Facebook product manager Tessa Lyons, who confirmed the existence of the score, told the Washington Post in an email.

The Post reports that the score is just one criterion Facebook uses to determine whether a story should be reviewed further by its fact-checkers. But the company isn't giving away many more details about what goes into the algorithm, in order to stop people from gaming the system. Facebook also did not tell the Post when it started assigning credibility scores, or whether all users have one.
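Facebook hasn't disclosed how the score is computed, but the mechanism Lyons describes can be sketched in broad strokes. Below is a minimal, purely hypothetical Python illustration, assuming a user's score is the ratio of their flags later confirmed false by fact-checkers to their total flags, and that flags are weighted by this score when prioritizing stories for review. All names, defaults, and numbers here are invented for illustration and are not Facebook's actual algorithm.

```python
# Hypothetical sketch of reporter-reliability weighting.
# Assumption: credibility = confirmed flags / total flags, neutral 0.5 default.

from dataclasses import dataclass

@dataclass
class Reporter:
    confirmed_flags: int = 0  # flags later confirmed false by fact-checkers
    total_flags: int = 0      # all fake-news flags the user has submitted

    def credibility(self) -> float:
        """Score between 0 and 1; defaults to a neutral 0.5 with no history."""
        if self.total_flags == 0:
            return 0.5
        return self.confirmed_flags / self.total_flags

def review_priority(flaggers: list[Reporter]) -> float:
    """Weight each flag by the reporter's credibility, so a story flagged by
    a few reliable reporters can outrank one mass-flagged by unreliable ones."""
    return sum(r.credibility() for r in flaggers)

# Example: two historically accurate reporters outweigh five indiscriminate ones.
accurate = Reporter(confirmed_flags=9, total_flags=10)        # 0.9 each
indiscriminate = Reporter(confirmed_flags=1, total_flags=20)  # 0.05 each
print(review_priority([accurate, accurate]))    # 1.8
print(review_priority([indiscriminate] * 5))    # 0.25
```

In a scheme like this, indiscriminately flagging lots of articles drags a user's weight down, which matches Lyons' description of discounting people who flag "lots of articles, including ones that end up being rated as true."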

This so-called reputation score highlights another way Facebook is trying to deal with bad actors who repeatedly flout its terms of service or target individuals they disagree with, in this case by falsely reporting their posts as hoaxes.

Earlier this year, Twitter also announced that it would take into account more behavioral signals when determining which search results and replies to show to users, such as how often a person gets blocked by people they reply to. The idea is that by placing more weight on these signals, Twitter can limit the ability of trolls and bad actors to degrade other users' experience on the platform.

It's understandable why Facebook would want to take into account a user's past behavior to determine how credible they are likely to be going forward, but the news raises questions about how transparent social platforms need to be when changing algorithms that affect which users they are more likely to listen to, or whose posts will be shown more prominently. It's unclear when Facebook was planning on revealing to users, if ever, that they were being judged on how accurately they have been flagging posts.

VentureBeat reached out to Facebook for further comment on the news, and will update this story if we hear back.
