How to Avoid Bias in Your AI Implementation
In most circles, the word “bias” has clearly negative connotations. In the media, it means news slanted one way or another. In science, it means preconceived notions that led to inaccurate conclusions. In artificial intelligence, the biases of the people who program the software, and of the data it learns from, can lead to unsatisfactory results.
In data terms, bias is any deviation from reality in how data is collected, analyzed, or interpreted. Intentionally or not, most people are somewhat biased in how they view the world, and that colors how they interpret data. As AI takes on more crucial roles in everything from employment to criminal justice, a biased system can have significant real-world consequences.
Before humans can trust machines to learn and interpret the world around them, we must eliminate bias in the data that AI systems learn from. Here’s how you can avoid such bias when implementing your own AI solution.
1. Start with a highly diverse team.
Any AI system’s deep learning model is limited by the collective experience of the team behind it. If that team is homogeneous, the system will make judgments and predictions based on a narrow, inaccurate model of the world. For Adam Kalai, co-author of the paper “Man is to computer programmer as woman is to homemaker? Debiasing word embeddings,” eliminating bias in AI is like raising a baby: for better or worse, the baby (or AI system) will think the way you teach it to think. It also takes a village. So put together a highly diverse team to head up your AI effort. You’ll be more likely to identify nuanced biases earlier and more precisely.
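The paper Kalai co-authored proposes one concrete remedy for this kind of learned bias: identify a “gender direction” in word-embedding space and remove that component from words that should be gender-neutral. Here is a minimal NumPy sketch of that neutralization step; the toy vectors below are placeholders, and in practice you would load real pre-trained embeddings.

```python
# A minimal sketch of the "hard debiasing" idea from the cited paper:
# find a gender direction in embedding space, then remove that component
# from words that should be gender-neutral. The toy vectors are
# placeholders; in practice you would load real embeddings (e.g. word2vec).
import numpy as np

# Hypothetical pre-trained embeddings: word -> vector.
embeddings = {
    "he":         np.array([ 0.9, 0.1, 0.3]),
    "she":        np.array([-0.9, 0.1, 0.3]),
    "programmer": np.array([ 0.4, 0.8, 0.2]),  # a gender-neutral occupation
}

# 1. Estimate the gender direction from a definitional word pair.
gender_dir = embeddings["he"] - embeddings["she"]
gender_dir /= np.linalg.norm(gender_dir)

def neutralize(vec: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Remove the component of `vec` that lies along `direction`."""
    return vec - np.dot(vec, direction) * direction

# 2. Project the bias component out of words that shouldn't carry gender.
debiased = neutralize(embeddings["programmer"], gender_dir)
print("before:", np.dot(embeddings["programmer"], gender_dir))  # nonzero
print("after: ", np.dot(debiased, gender_dir))                  # ~0
```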
To reduce hiring bias when assembling your team, examine the language of your job ads and remove coded wording. The word “ninja,” for example, may seem to make your job ad more compelling, but it can deter women from applying because society reads the word as masculine. Another tactic is to shorten the list of hard requirements and present more of them as preferred qualifications. That likewise encourages more female candidates to apply; it isn’t that they lack the credentials, but that they tend not to apply unless they meet every listed requirement. Finally, create standard interview questions and a post-interview debriefing process so that all interviewers at your company work within the same framework when assessing candidates. A simple audit like the one sketched below can help with the first step.
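One way to operationalize that job-ad audit is to scan the text against a list of coded terms. The word list in this sketch is a small illustrative sample, not a vetted research lexicon.

```python
# A simple illustration of auditing job-ad language: flag words that
# research on gender-coded language suggests may deter some applicants.
# The word list here is an illustrative sample, not a vetted lexicon.
import re

MASCULINE_CODED = {"ninja", "rockstar", "dominant", "aggressive", "fearless"}

def flag_coded_words(ad_text: str) -> list[str]:
    """Return the coded words found in a job ad, in order of appearance."""
    words = re.findall(r"[a-z]+", ad_text.lower())
    return [w for w in words if w in MASCULINE_CODED]

ad = "We're hiring a JavaScript ninja to join our fearless dev team."
print(flag_coded_words(ad))  # ['ninja', 'fearless']
```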
2. Have your diverse team teach your chatbots.
Like humans, bots make smarter choices when they have more data and experience to draw from. “Collect enough data for your chatbot to make good decisions. Automated agents should constantly learn and adapt, but they can only do that if they’re being fed the right data,” says Fang Cheng, CEO and co-founder of Linc Global. Chatbots learn by studying previous conversations, so your team should feed your bot data that teaches it to respond the way you want it to. Swedish bank SEB, for instance, has taught its virtual assistant Aida to detect a frustrated tone in a caller’s voice, at which point the bot knows to pass the caller along to a human representative.
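In code, that escalation pattern might look like the following minimal sketch. The cue list and scoring function are crude stand-ins; a production bot would use a trained sentiment or emotion model (and, for voice, an acoustic one).

```python
# A minimal sketch of the escalation pattern described above: score each
# incoming message for frustration and hand off to a human past a
# threshold. `frustration_score` is a stand-in for a trained model.

FRUSTRATION_CUES = {"ridiculous", "useless", "angry", "third time", "cancel"}
HANDOFF_THRESHOLD = 2

def frustration_score(message: str) -> int:
    """Crude proxy: count the frustration cues present in the message."""
    text = message.lower()
    return sum(cue in text for cue in FRUSTRATION_CUES)

def route(message: str) -> str:
    """Decide whether the chatbot keeps the conversation or escalates."""
    if frustration_score(message) >= HANDOFF_THRESHOLD:
        return "human_agent"  # escalate before the customer churns
    return "chatbot"

print(route("This is ridiculous, it's the third time I'm calling!"))
# -> human_agent
```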
To accomplish something similar without falling prey to bias, you may need to build data sets that give your bot examples from multiple demographics. Put a process in place to detect issues: whether you use an automated platform or manually review customer conversations, search for patterns in customer chats. Do customers opt for a human representative, or sound more frustrated, when calling about a specific issue? Do certain customer personas get stuck more often? Your chatbot might be mishandling a certain type of concern, or misunderstanding concerns from a certain type of customer. Once you identify a common thread in frustrated customer inquiries, you can feed your AI the information it needs to correct course.
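One way to run that pattern search is to aggregate the escalation rate by issue type and customer segment and look for outliers. The records below are hypothetical placeholders for real chat logs.

```python
# Aggregate escalation rates by (issue, segment) to surface the patterns
# described above. The records are hypothetical stand-ins for chat logs.
from collections import defaultdict

chats = [
    {"issue": "billing",  "segment": "new_customer", "escalated": True},
    {"issue": "billing",  "segment": "new_customer", "escalated": True},
    {"issue": "billing",  "segment": "long_term",    "escalated": False},
    {"issue": "shipping", "segment": "new_customer", "escalated": False},
    {"issue": "shipping", "segment": "long_term",    "escalated": False},
]

totals = defaultdict(lambda: [0, 0])  # (issue, segment) -> [escalations, chats]
for chat in chats:
    key = (chat["issue"], chat["segment"])
    totals[key][0] += chat["escalated"]
    totals[key][1] += 1

for (issue, segment), (esc, n) in sorted(totals.items()):
    print(f"{issue:<9} {segment:<13} escalation rate: {esc / n:.0%}")

# A segment/issue pair with a much higher rate than its peers suggests the
# bot is mishandling that type of concern -- a cue for new training data.
```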
3. Show the world how your AI thinks.
Transparency is perhaps just as important as diversity when it comes to building an AI system people can trust. Few laws currently govern the rights of consumers who are subject to an AI algorithm’s decisions. The least companies can do is be completely transparent with consumers about why decisions were made. Despite common industry fears, that doesn’t mean disclosing the code behind your AI.
Simply provide the criteria that the system used to reach its decisions. For instance, if the system denies a credit application, have it explain which factors went into that denial and what the consumer can do to improve his or her chances of qualifying the next time. IBM has launched a software service that looks for bias in AI systems and determines why automated decisions were made. Tools like this can aid in your transparency efforts.
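For a simple scoring model, producing that kind of factor-level explanation is straightforward. The sketch below assumes a hypothetical linear (logistic-regression-style) credit model, where each factor’s contribution to the score is just its weight times its value; the feature names and weights are illustrative, not drawn from any real lender.

```python
# A minimal sketch of factor-level explanations for a linear credit model:
# each feature's contribution to the score is weight * value, so the most
# negative contributions explain a denial. Names and weights are
# hypothetical, for illustration only.

WEIGHTS = {                          # learned coefficients (illustrative)
    "credit_utilization": -2.0,      # high utilization lowers the score
    "years_of_history": 0.8,
    "recent_missed_payments": -1.5,
}
BIAS = 1.0
THRESHOLD = 0.0  # score below this -> application denied

def explain(applicant: dict) -> None:
    """Print the decision and each factor's contribution to it."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = BIAS + sum(contributions.values())
    decision = "approved" if score >= THRESHOLD else "denied"
    print(f"decision: {decision} (score {score:.2f})")
    # Report factors from most harmful to most helpful.
    for feature, c in sorted(contributions.items(), key=lambda kv: kv[1]):
        print(f"  {feature}: {c:+.2f}")

explain({"credit_utilization": 0.9, "years_of_history": 1.0,
         "recent_missed_payments": 1.0})
```

Surfacing the most negative contributions also tells the consumer exactly what to work on before reapplying, which is the kind of actionable transparency described above.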
The potential for bias to taint a company’s AI program is a real concern. Fortunately, there are ways to expand the diversity of your AI’s source data and weed out significant biases. By eliminating bias, you’ll help your company and society truly realize the benefits AI has to offer.