AI-powered tool aims to help reduce bias and racially charged language on websites
The tool, Content Moderator, flags content for review, and nothing is deleted or removed without approval from site administrators, according to UserWay.
UserWay’s customers already use its AI-powered accessibility widget, a compliance-as-a-service (CaaS) technology that helps brands provide an accessible digital experience meeting strict governmental and ADA regulations, the company said.
“Focusing on digital racism and bias is long past due, and our team is eager to contribute to the conversation positively,” UserWay founder and CEO Allon Mason said in a statement.
In June, Google announced that it would be reevaluating what it considers acceptable language, Mason noted. So far, Google has changed terms including “blacklist” to “blocked list,” “whitelist” to “allowed list,” and “master-slave” to “primary/secondary,” among others, he said.
“That was the spark that triggered us to build this tool. At the time, we were enhancing our AI-powered capabilities that supply [alternate] text descriptions of images for screen readers,” Mason said. “We realized that if word choices can make our customers’ digital content inaccessible even without intending to, UserWay should help.”
The goal of the Content Moderator isn’t to censor or silence, he added, but to make web teams aware of problematic language in user-generated content or in content they may have overlooked.
Discriminatory language on websites is pervasive
Before launching Content Moderator, UserWay ran its rule engine across more than 500,000 websites. The findings were concerning, the company said.
Some 22% of the sites scanned contained some form of biased, racially charged, or offensive language, UserWay said. Of those:
- 52% were sites with instances of racial bias
- 24% were sites with instances of gender bias
- 12% were sites with instances of age bias
- 5% were sites with racial slurs
- 3% were sites with disability bias
Words that the tool most often flagged for gender bias included “chairman,” “fireman,” “mankind,” “forefather,” and “man-made,” UserWay said.
Many of these terms have only recently been understood to be divisive and prejudicial. It is an enormous task for most site owners to keep track of the latest consensus around culturally sensitive terms, the company noted. The tool aims to make this task simple, centralized, and scalable, UserWay said.
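As a rough illustration of how such a centralized term dictionary could drive flagging (this is a hypothetical sketch, not UserWay's actual implementation; the term map and suggestions below are examples drawn from the article):

```python
import re

# Hypothetical term-to-suggestion map for illustration only; a real tool
# would maintain a much larger, frequently updated dictionary.
TERM_SUGGESTIONS = {
    "chairman": "chair",
    "fireman": "firefighter",
    "mankind": "humankind",
    "forefather": "ancestor",
    "man-made": "artificial",
}

def flag_terms(text):
    """Return (matched term, suggestion, character offset) for each hit."""
    flags = []
    for term, suggestion in TERM_SUGGESTIONS.items():
        # \b word boundaries keep "chairman" from matching inside "chairmanship"
        pattern = r"\b" + re.escape(term) + r"\b"
        for match in re.finditer(pattern, text, re.IGNORECASE):
            flags.append((match.group(0), suggestion, match.start()))
    return sorted(flags, key=lambda f: f[2])

print(flag_terms("The chairman praised mankind's man-made wonders."))
```

Keeping the dictionary separate from the scanning logic is what makes the approach scalable: updating the consensus around a term means editing one entry, not every site that uses the tool.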
How Content Moderator works
Historically, content moderation software using AI to detect racial bias and divisive speech has been site-specific, expensive, and available only within large social media platforms, the company maintained. A website owner can drop in the UserWay widget and will be alerted to divisive or offensive language as it appears, in real time. The widget works in three steps:
- Scan: Content Moderator scans all the content on a website, both static and dynamic.
- Flag: The tool then flags words and phrases that may inadvertently promote stereotypes or prejudice, including text that could be considered racist, sexist, anti-Semitic, homophobic, xenophobic, violent, intolerant, or otherwise offensive.
- Review: Site administrators review the suggestions and choose the ones they would like to accept. They can also edit the suggestions to flow with the site’s content or recommend alternative replacements that are then fed back into UserWay’s AI.
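The three steps above can be sketched conceptually as follows. This is a simplified, hypothetical model, not UserWay's code; the rule dictionary and class names are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Flag:
    phrase: str          # the flagged word or phrase found on the site
    suggestion: str      # the tool's proposed replacement
    accepted: bool = False

@dataclass
class Moderator:
    # Hypothetical rule dictionary; the real product's dictionary is far
    # larger and updated as language consensus shifts.
    rules: dict = field(default_factory=lambda: {
        "blacklist": "blocked list",
        "whitelist": "allowed list",
    })
    feedback: list = field(default_factory=list)

    def scan(self, content):
        """Steps 1-2: scan page text and flag rule matches with suggestions."""
        lowered = content.lower()
        return [Flag(term, alt) for term, alt in self.rules.items()
                if term in lowered]

    def review(self, flag, accept, edited=None):
        """Step 3: an administrator accepts, edits, or ignores each flag.
        Edited suggestions are recorded so they can be fed back into the
        suggestion engine."""
        flag.accepted = accept
        if edited:
            flag.suggestion = edited
            self.feedback.append((flag.phrase, edited))

mod = Moderator()
flags = mod.scan("Add the domain to your blacklist.")
mod.review(flags[0], accept=True, edited="deny list")
```

The review step is the part the article emphasizes: nothing changes without administrator approval, and administrator edits become training signal for future suggestions.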
More inclusive speech is needed now
In the past few weeks, many legacy brands such as Aunt Jemima, Uncle Ben’s, and Eskimo Pie, among others, have yielded to mounting pressure from consumers to rid products of racial and ethnic stereotypes. Technology companies have likewise been reevaluating the usage of racialized words like “blacklist” and “whitelist” in favor of more inclusive language.
But brand integrity isn’t the sole issue. Civil rights advocates, led by the Anti-Defamation League (ADL), have increased pressure to ensure websites are carefully moderated. In addition, recent calls for repeal of Section 230 of the US Communications Decency Act may expose online publishers to future legal action for defamation based on opinions or reviews created by platform users, according to UserWay.
In tandem with UserWay’s Accessibility Widget, Content Moderator helps organizations mitigate the legal risk of both ADA- and ADL-related violations, the company said.
“We all know a list of words that are mocking (to put it mildly) of a variety of racial groups, or a variety of religious groups, or other political or gender persuasions,” UserWay quoted Israel W. Charny, Israeli psychologist, genocide scholar, and executive director of the Institute on the Holocaust and Genocide in Jerusalem, as saying. “UserWay’s … tool flags these words and allows you to change them, an act of voluntary editing with cultural sensitivity. Giving options for improvement reduces the onus of the coerciveness that some people are feeling.”
In the same way that HTML code is remediated, Content Moderator can help users pinpoint and update word choices on their site, Mason said.
“While Google and Apple are approaching the issue as a simple search-and-replace, UserWay looks deeper into the problem of bias,” he said.
The tool looks to detect verbalization patterns that consistently and routinely marginalize and disempower specific cohorts, he said. Its dictionary is frequently updated to align with cultural and social changes.
A content owner can choose to agree, modify, or ignore the Content Moderator’s suggestions, Mason added.
“We intend to empower users by making them aware of the content that exists on their site, especially legacy and user-generated text that may not reflect their brand values,” he said. “More importantly, we hope that by removing blatantly and subtly offensive content, we can help these sites become barrier-free and inviting for all users.”