AI lacks intelligence without different voices

Original illustration by Dylan Agar for Verge Digital.

AI’s future is often discussed in idealistic terms, but the current landscape presents more complexity. x.ai’s Non-Binary series explores the social implications of AI from some of journalism’s most vital voices. First up, an examination of racial bias in AI from David Dennis, Jr.

I was at Disney World sometime in the ’90s when I first saw one of those automatic faucets in a public restroom, and it felt like I was in the future. I mean, the idea of putting your hand under a faucet and water automatically coming out seemed like the Jetsons coming to life. Flying cars were sure to follow. Then the faucets became more commonplace. And then the urban legends started.

We just laughed it off at first: those automatic faucets are racist. We’d all noticed it, too. For some reason, it seemed like those faucets just didn’t work for black people. The same with the soap dispensers and the hand dryers. It was always a joke in the black community, but one that we never quite investigated, because, realistically, artificial intelligence can’t be racist, can it? Two decades later, the power of social media stepped in.

Last August, Chukwuemeka Afigbo posted a video on Facebook showing a black hand under a soap dispenser and no soap coming out, followed immediately by a white hand under the dispenser. Voila. It works. The video seemed at once unbelievable and entirely plausible. Sensory technology, even something as rudimentary as those automated bathroom machines, was discriminating based on skin color, and something as harmless as a soap dispenser became a lightning rod for discussing the ways artificial intelligence omits people of color from the conversation. This problem runs deeper than not being able to activate a soap dispenser. The future implications are terrifying for people of color, precisely because our present racial biases are being coded into the technology of the future.

Companies aren’t creating racist AI, of course. Nobody is programming tech to hate black people. The problem with the dispensers, for instance, is that they rely on infrared sensors: the emitted light reflects off of light skin and bounces back, triggering the soap’s release, but it is absorbed by dark skin and never returns to the sensor. Thus, light skin works, dark skin does not.
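
If you want to see that failure mode in code, here is a minimal sketch, assuming a simple reflectance-threshold design; the function name, threshold, and reflectance readings are hypothetical and only illustrate the idea, not any manufacturer’s actual firmware.

```python
# Hypothetical sketch of a reflectance-threshold sensor, not real firmware.
ACTIVATION_THRESHOLD = 0.4  # fraction of emitted infrared that must bounce back

def dispense_soap(ir_reflectance: float) -> bool:
    """Dispense only if enough infrared light returns to the sensor."""
    return ir_reflectance >= ACTIVATION_THRESHOLD

# Made-up readings: lighter skin reflects more infrared, darker skin absorbs more.
print(dispense_soap(0.7))   # lighter skin -> True, soap comes out
print(dispense_soap(0.25))  # darker skin  -> False, nothing happens
```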

Bathroom appliances are primitive in the grand scheme of artificial intelligence. That these kinds of racial erasures have persisted makes it clear that much of the AI currently celebrated as world-changing has largely been built to change the world for white people, leaving the rest of us behind.

Facial recognition ranks as one of the most popular and widely adopted types of AI; it’s also one of the developments in tech with the most obvious racial biases. MIT Media Lab researcher Joy Buolamwini produced a report that cracked open the idea of racial bias in facial recognition. She tested the facial recognition technology of companies like Microsoft and IBM and found that they accurately detected white faces 99 percent of the time but correctly detected only 20 to 34 percent of black faces. When Buolamwini investigated why, she discovered that the photos used to train the systems were overwhelmingly white and male.
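
For readers who want to see what an audit like Buolamwini’s actually measures, here is a minimal per-group accuracy check; the records and numbers below are invented for illustration, and a real audit would run a vendor’s system over a balanced benchmark of faces.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# Invented results shaped to mirror the kind of gap the study reported.
sample = ([("lighter", "match", "match")] * 99 + [("lighter", "miss", "match")]
          + [("darker", "match", "match")] * 30 + [("darker", "miss", "match")] * 70)
print(accuracy_by_group(sample))  # {'lighter': 0.99, 'darker': 0.3}
```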

Such a discovery signals a problem that’s much bigger than being able to get a handful of soap.

To wit, across the world, facial recognition is being used to find missing persons. Most famously, a Chinese company called Baobeihuiji, which translates to “Baby Back Home,” helped a family find a son who had been kidnapped and missing for 27 years. This is a truly remarkable breakthrough, but what does it do for people of a darker complexion (who make up the majority of the world, mind you) who go missing and whose faces aren’t recognizable to the tech? How can this tech help dark-skinned families when, according to Buolamwini’s study, only about one in five faces is going to be identified correctly?

Law enforcement has also taken to facial recognition to catch identity thieves who use fake driver’s licenses, leading to 4,000 arrests in New York in 2017. The same technology has been used in Boston and other major cities across the country to match license pictures to people on wanted lists or suspects in crimes. This is a recipe for more mistaken identity. You can probably see where I’m going here.

There’s enough information out there that I don’t need to convince anyone how dangerous it is for people of color to get pulled over by police. A refresher: black people make up 13 percent of the American population yet account for 31 percent of people killed by police, and 39 percent of people killed by police while not attacking or posing a direct threat. There are also the videos: Philando Castile shot during a traffic stop; Stephon Clark shot 20 times in his backyard after police were called about someone suspected of breaking into cars; and countless others. The fact remains that it is a threat to black lives to come in contact with police, especially if that interaction happens because said black person is suspected of a crime.

That’s why facial recognition and mistaken identity are so scary. Imagine a person of color getting pulled over for a routine traffic stop, only for facial recognition to wrongly identify that person as a suspect in a crime. It’s a downright deadly scenario.

There are some hopeful signs that artificial intelligence is also being used to help combat mass incarceration. Dan Hunter, dean of Swinburne University’s law school, is creating a model for reimagining what incarceration means, using artificial intelligence, surveillance, and home monitoring to allow convicted criminals to serve their time at home, freeing the world of debilitating prison systems. In America, AI is being used to predict recidivism, but the results have been mixed.

Microsoft has developed a program that boasts an ability to predict which people will commit a crime within months of being released from jail. The algorithm appears relatively simple: a matter of inputting the person’s crime, education, gang affiliations, and so on, and letting the program do its predicting. The point of the program is to match people with the rehabilitative programs best suited to their particular recidivism risks.
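
As a rough illustration of how that kind of input-to-score pipeline works, here is a toy sketch; the feature names and weights are entirely hypothetical and are not Microsoft’s actual model.

```python
# Toy recidivism-risk sketch with made-up weights; not any vendor's real model.
def recidivism_risk(prior_offenses: int, years_of_education: int,
                    gang_affiliated: bool) -> float:
    """Combine a handful of inputs into a single risk score between 0 and 1."""
    score = 0.2 * prior_offenses - 0.02 * years_of_education
    if gang_affiliated:
        score += 0.3
    return max(0.0, min(1.0, score))

# Hypothetical person: two prior offenses, 12 years of schooling, no affiliation.
print(recidivism_risk(prior_offenses=2, years_of_education=12, gang_affiliated=False))  # roughly 0.16
```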

A word of caution: the development of these programs raises the question that has always dogged AI, namely whether biases or prejudices will be imprinted into their DNA. The people developing these technologies carry implicit racial biases of their own, which means what they build will be imperfect and susceptible to the same stereotyping and profiling we see in law enforcement today.

For example, the UK’s Durham Constabulary was using an artificial intelligence system called HART to predict crime; it had to be retooled after the force found that the algorithm was unfairly targeting poor people. In 2016, ProPublica published a study revealing that predictive AI programs were unfairly targeting black people. After all, the questions about upbringing, education, and geographic location used in these algorithms can all be cultural dog whistles that carry inherent biases. The two most alarming findings from the study were that black defendants were falsely flagged as future criminals at twice the rate of white defendants, and that white defendants were more likely to be mislabeled as low risk.
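
The disparity ProPublica described is a difference in false positive rates, that is, the share of people who did not go on to reoffend but were still flagged as high risk, computed separately for each group. A minimal sketch, with invented counts shaped to mirror the roughly two-to-one gap:

```python
def false_positive_rate(flagged_high_risk: int, did_not_reoffend: int) -> float:
    """Share of non-reoffenders who were wrongly flagged as likely future criminals."""
    return flagged_high_risk / did_not_reoffend

# Invented counts for illustration, not ProPublica's raw data.
black_fpr = false_positive_rate(flagged_high_risk=45, did_not_reoffend=100)
white_fpr = false_positive_rate(flagged_high_risk=23, did_not_reoffend=100)
print(black_fpr, white_fpr, black_fpr / white_fpr)  # 0.45 0.23 -> roughly a 2x disparity
```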

It shouldn’t be a surprise that AI is full of racial inequities that favor white males, because a program is only as progressive as its programmers. And therein lies the fundamental problem with many AIs. The programming sector is overwhelmingly white and male with debilitating barriers to entry for anyone else. Silicon Valley, for instance, is white. Very white. And the racial gap is even bigger than the gender divide.

White women are 97 percent more likely to be executives in Silicon Valley than black men. The number of black women working in tech decreased by 13 percent over the past decade, and the proportion of black women managers decreased by more than 20 percent.

So what does all this mean? It means that the companies developing these artificial intelligence programs do not receive nearly enough creative or logistical input from black voices. It means there aren’t people with diverse experiences in the room to speak up about restroom dispensers that don’t react to black skin, or about coded language used in predictive artificial intelligence. It would make sense to have input from the communities most disproportionately impacted by mass incarceration. The solution, however, starts very early in a future coder’s life.

“Not being properly prepped for learning these fields during K-12 puts [black girls] at a disadvantage when they reach college,” said Kimberly Bryant, the founder of Black Girls Code, in a 2016 interview with Popular Science. The nonprofit teaches young girls of color to code, with the hope of putting a million of them into tech workplaces by 2040. “If you do go into engineering in college, you’ve never seen code, and you have to learn Java. So we’re losing girls all along the pipeline. Losing them before they graduate high school, losing them the first couple of years of college, and absolutely losing the ones left who aren’t even offered jobs.”

But the importance of black voices goes beyond just these aspects of AI.

Look at Apple’s dilemma with emojis that didn’t accurately reflect users’ skin color, a situation that arose because the company didn’t consider representation important enough to include in the first place. Or chatbots and other voice-recognition technologies that cater to a more European vernacular.

The fact is, artificial intelligence will only be as intelligent as the minds that contribute to its development. And it’s impossible to create true intelligence from a homogenous cultural pool of information. The only way to even dream of AI that positively transforms the world is an inclusive AI.

Otherwise, we’re just waiting for a handful of soap that will never come.

David Dennis, Jr. is a writer and adjunct professor of Journalism at Morehouse College. David’s writing has appeared in The Guardian, The Smoking Section, ESPN’s The Undefeated, Uproxx, Playboy, The Atlantic, Complex.com and wherever people argue about things on the Internet.


