Body Camera Maker Weighs Adding Facial Recognition Technology
Axon, formerly known as Taser International, sparked controversy late last month when it announced the creation of an ethics board to examine the implications of coupling artificial intelligence with its line of police products.
A coalition of civil rights groups, including the NAACP, ACLU and the Electronic Frontier Foundation, immediately responded to the announcement with a public letter urging Axon to exercise caution in deploying AI technologies. The coalition wrote that American law enforcement has “a documented history of racial discrimination,” claiming, “because Axon’s products are marketed and sold to law enforcement, they sometimes make these problems worse.”
Axon leads the body camera industry, relying on the recognition of its Taser brand to secure contracts with police forces from Atlanta to Albuquerque. Less than two weeks ago, it acquired its largest competitor, Vievu.
But as cameras get cheaper and the market gets more competitive, Axon is turning to a new revenue stream: software. Axon bundles its cameras with a suite of cloud storage and data management products that it licenses out to police forces on a subscription basis. Real-time facial recognition capability could help the company market those products, generating profitable and recurring revenue.
Critics say widespread adoption of face recognition makes it easier for police to violate citizens’ constitutional rights, including by targeting lawful protesters at large events. And the civil rights coalition argues that Axon’s ethics board isn’t representative, writing in its letter that “an ethics process that does not center the voices of those who live in the most heavily policed communities will have no legitimacy.”
NPR’s Scott Simon spoke with Axon CEO Rick Smith about the promise — and the dangers — of the new technology.
This interview has been edited for length and clarity.
On the risk of misidentifying innocent people
We agree philosophically with the issues that were raised. But it’s counterproductive to say that a technology is unethical and should never be developed. What we need to do is take a look at how this technology could evolve. What are the risks? Today, an individual officer might have to make life-or-death decisions based only on their own perceptions and prejudices. Do we think that computers getting information to those officers that could help them make better decisions would move the world in the right direction? I think the answer is unequivocally, yes, that could happen.
On claims it disproportionately misidentifies minorities
I think that has to do with the types of training data sets that have been used historically. Certainly that is one of the issues we would take a very hard look at before we developed anything to be deployed in the field.
On the potential for misuse
Well, for example, there are police forces around the world that use batons and guns in very abusive ways. And yet ultimately, we know that our police, in order to do their job, need to have those same types of tools. We understand that these technologies could be used in ways that we don’t want to see happening in our society. However, it’s too blunt to say that because there is a risk of misuse, we should just write them off. We need to dig a layer deeper and understand what the benefits and risks really are.
On the technology’s potential benefits
You could imagine many benefits. I think that over the coming decades we’ll see biometrics, including facial recognition technology, that, properly deployed with the right oversight, could ultimately reduce prejudice in policing and help catch dangerous people we all agree we don’t want out in our communities, and do it in a way that, at the same time, respects police transparency and the privacy rights of the average citizen.