Facial recognition surveillance must be banned, says Fight for the Future
Evan Greer, deputy director of Fight for the Future, compared facial recognition to nuclear or biological weapons: while we can’t go back in time to ban the development of those technologies, we still have time to stop facial recognition before we get to the point where we’re living in what the campaign calls a nation with “automated and ubiquitous monitoring of an entire population.”
This surveillance technology poses such a profound threat to the future of human society and basic liberty that its dangers far outweigh any potential benefits. We don’t need to regulate it, we need to ban it entirely.
This is the latest campaign from the group that led a targeted internet blackout in 2015, in which thousands of sites blocked visitors coming from Congressional IP addresses and redirected them to a Patriot Act protest page. Then, in 2017, Fight for the Future launched a last-ditch attempt to save net neutrality with its Break the Internet campaign.
Its latest call to action, BanFacialRecognition.com, offers visitors a form that connects them to their Congressional and local lawmakers in order to ask them to ban this “unreliable, biased” technology, which the group calls “a threat to basic rights and safety.”
Fight for the Future charges Silicon Valley lobbyists with “disingenuously calling for light ‘regulation’” of facial recognition so they can continue to profit from the rapid spread of this “surveillance dragnet,” thereby ducking the real debate: namely, should this technology even exist?
Industry-friendly and government-friendly oversight will not fix the dangers inherent in law enforcement’s use of facial recognition: we need an all-out ban.
The campaign includes a laundry list of the criticisms that stick to facial recognition technology like so many civil rights burrs. One of its many problems is a high error rate. For example, as the Independent reported last year, freedom of information requests showed that the facial recognition software used by the UK’s biggest police force – London’s Metropolitan Police – generated false positives in more than 98% of its alerts. The UK’s biometrics commissioner, Professor Paul Wiles, told the news outlet that the technology is “not yet fit for use”.
As we reported in 2017, the Met’s use of facial recognition fell flat on its face two years in a row. Its “top-of-the-line” automated facial recognition (AFR) system, which it trialled at London’s Notting Hill Carnival, couldn’t even tell the difference between a young woman and a balding man. One man was wrongfully detained after being erroneously flagged as wanted on a warrant for a rioting offence.