The social network developed the ratings system in an effort to combat misinformation on its platform, Tessa Lyons, the product manager in charge of fighting misinformation at Facebook, said in an interview published Tuesday in The Washington Post.
Facebook relies on users to flag false content, but Lyons told the Post that some users falsely report content as untrue simply because they don't agree with it. The tech giant now takes a user's trustworthiness rating into consideration when battling misinformation.
A low trustworthiness score doesn’t entirely determine a person’s credibility, Lyons said, and users don’t get a single unified score. Instead, the score is reportedly one measurement among thousands of new behavioral clues Facebook considers.
The social network also reportedly monitors which users are more likely to flag others’ content as concerning and which users are considered more credible by others.
It's unclear whether all users receive a score or which factors Facebook considers. The company didn't immediately respond to a request for comment.
This comes as Facebook has been under fire over Russian interference in the 2016 US presidential election. The tech giant has been battling fake news and misinformation on its site as the national debate over the limits of free speech plays out on social media and in Silicon Valley.