Facebook’s AI matches people in need with those willing to assist

Facebook says it has deployed a feature in its Community Help hub to make it easier for users to help each other during the pandemic. As of this week, AI will detect when a public post on News Feed is about needing or offering help and will surface a suggestion to share it on Community Help. Once a post is moved or published directly to the hub, an algorithm will recommend matches between people.

For example, if someone posts an offer to deliver groceries, they’ll see recommendations within Community Help to connect with people who recently posted about needing this type of assistance. Similarly, if someone requests masks, AI will surface suggested neighbors who recently posted an offer to make face coverings.

Building this Community Help feature, which Facebook says is available in all countries in English and 17 other languages, posed a difficult engineering challenge: the system needs to make recommendations even when the semantic structures of posts are very different. (For example, consider "Does anyone have masks for kids?" and "We can donate face coverings of any size.") The feature also needs to go beyond existing candidate-matching logic to incorporate general statements like "I can lend a hand to anyone!"

Facebook says it built and deployed the matching algorithm using XLM-R, its natural language understanding model, which produces a score ranking how closely a request for help matches offers in a community. XLM-R, which has 550 million parameters (variables internal to the model that fine-tune its predictions), was trained on 2.5 terabytes of webpages and supports roughly a hundred different human languages.
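The article doesn't detail how the score is computed, but the general pattern is to encode each post into a vector and rank offers by similarity to a request. The sketch below is purely illustrative: `embed()` is a hypothetical stand-in for an encoder like XLM-R, implemented here as a trivial bag-of-words counter so the example runs without any model, and `rank_offers()` shows the ranking step.

```python
# Illustrative sketch only. embed() is a hypothetical stand-in for an
# encoder such as XLM-R; a bag-of-words Counter replaces real embeddings
# so the example is self-contained.
import math
from collections import Counter

def embed(text):
    """Hypothetical encoder: bag-of-words counts over lowercase tokens."""
    return Counter(text.lower().replace("?", "").replace("!", "").split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_offers(request, offers):
    """Return (score, offer) pairs sorted best-first for a request."""
    req_vec = embed(request)
    scored = [(cosine(req_vec, embed(o)), o) for o in offers]
    return sorted(scored, key=lambda pair: pair[0], reverse=True)

request = "Does anyone have masks for kids?"
offers = [
    "We can donate face coverings of any size",
    "Offering free grocery delivery this weekend",
    "I have spare masks anyone can have",
]
for score, offer in rank_offers(request, offers):
    print(f"{score:.2f}  {offer}")
```

Note that the toy bag-of-words encoder ranks the third offer first only because it shares surface words with the request; the point of using a model like XLM-R is precisely to match posts like the first offer, which is semantically relevant but shares no vocabulary.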

The system feeds posts' scores into a set of models trained on PyText, an open source framework for natural language processing. People needing or offering help receive matches in three places: through an overlay that suggests matches after a post publishes, via a notification that provides updates on matches in the system, and on the Community Help page.

When asked how Facebook is mitigating potential bias in the model against users' requests and preventing explicit or otherwise inappropriate requests from making their way onto Community Help, a spokesperson said via email that Facebook engineers ran offline experiments to evaluate how fairly the model was performing and "confirmed there was very little deviation in precision." The spokesperson added that "integrity classifiers" proactively flag posts to a reviewer if they are detected as possibly policy-violating.

Beyond XLM-R, Facebook says it's employing a specialized technique based on XLM pretraining to detect requests for assistance and intent to offer help in public posts. Available in more than a dozen languages, it's what surfaces suggestions to publish requests on Community Help in order to reach more people. Facebook claims 50% of posts in the hub come from this model.
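The role of this detector is to assign each public post an intent label that drives the "share to Community Help" suggestion. The toy classifier below is a hypothetical stand-in for that model: a real system would use a fine-tuned pretrained language model, whereas this keyword heuristic (with made-up cue lists and labels) only shows the interface.

```python
# Illustrative sketch only: a keyword heuristic standing in for a
# pretrained intent classifier. The cue lists and label names are
# assumptions, not Facebook's actual implementation.
REQUEST_CUES = ("need", "needing", "looking for", "does anyone have",
                "can someone")
OFFER_CUES = ("offering", "can donate", "can deliver", "happy to help",
              "i can lend")

def classify_intent(post):
    """Label a post 'request', 'offer', or 'other' (hypothetical labels)."""
    text = post.lower()
    if any(cue in text for cue in REQUEST_CUES):
        return "request"
    if any(cue in text for cue in OFFER_CUES):
        return "offer"
    return "other"

posts = [
    "Does anyone have masks for kids?",   # -> request
    "I can lend a hand to anyone!",       # -> offer
    "Beautiful sunset tonight",           # -> other
]
for p in posts:
    print(classify_intent(p), "-", p)
```

Posts labeled `request` or `offer` would then trigger the suggestion to share on Community Help; `other` posts would be left alone.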

“Just as people are drawing strength from neighbors to cope with COVID-19, they are also leaning on each other to navigate remote learning brought on because of the pandemic,” Facebook wrote in a blog post. “We hope these efforts will make it easier for people to help others in their community.”

Facebook first launched Community Help in 2017 to give users a way to offer assistance and search for help in the wake of a crisis. Within Facebook’s COVID-19 Information Center, the feature facilitates connections among users in countries including the U.S., Canada, France, U.K., and Australia. It also recommends charities including the UNF/WHO COVID-19 Solidarity Response Fund Facebook Fundraiser and the Combat Coronavirus with the U.S. Centers for Disease Control and Prevention Foundation Facebook Fundraiser.
