AI on the rise in policing to predict crime and uncover lies
Tom Cruise, we’re still playing catch-up. Cyber policing isn’t exactly at the level of Minority Report – yet – but it’s getting there.
One item of interest from the Pre-Crime Department: despite criticism that it leads to over-policing of already heavily surveilled minority communities, an artificial intelligence (AI) platform that promises to predict where crime will occur is being trialled in police departments around the US.
The tool comes from a company called PredPol, which claims its software can algorithmically predict crime. As its training manuals show, the notion is based on the broken-windows approach to policing: a strategy of issuing citations for petty crime that doesn’t actually reduce crime but has been shown to damage the relationship between police and communities.
As Motherboard reported in June, PredPol says its software can predict which crimes will happen in areas as small as 500×500 feet, based on historical crime data. That data is fed into an algorithm that spits out predictions of where similar crimes will occur next.
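PredPol’s actual model is proprietary, but the general idea of bucketing historical incidents into a grid and flagging the busiest cells can be sketched in a few lines. The coordinates and cell logic below are invented for illustration – a crude stand-in, not the company’s algorithm:

```python
from collections import Counter

CELL_FT = 500  # grid cell size in feet, matching the reported 500x500 ft areas

def cell_for(x_ft, y_ft):
    """Map an incident's coordinates (in feet) to its grid cell."""
    return (int(x_ft // CELL_FT), int(y_ft // CELL_FT))

def hot_cells(incidents, top_n=3):
    """Rank grid cells by historical incident count -- a naive
    hot-spot tally, far simpler than any real predictive model."""
    counts = Counter(cell_for(x, y) for x, y in incidents)
    return [cell for cell, _ in counts.most_common(top_n)]

# Hypothetical historical incident coordinates (feet from a city origin)
history = [(120, 80), (450, 300), (510, 90), (130, 95), (140, 60)]
print(hot_cells(history, top_n=2))  # -> [(0, 0), (1, 0)]
```

A real system would weight recent incidents more heavily and model how crime clusters in time as well as space; this sketch only shows the spatial bucketing step.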
Digital rights advocates say that predictive policing is inherently biased because the data used to make crime predictions is based on years of biased policing strategies that over-criminalize certain neighborhoods. Jake Ader, a contributor to the digital rights group Lucy Parsons Labs, filed a freedom of information request to find out how police in Elgin, Illinois, are using the tool. That’s how he found out that its training manual relies on the much-criticized broken-windows policing strategy.
He told Motherboard that the use of predictive policing is spreading through police departments – including in New York City, Los Angeles and other, smaller cities.
Fast-forward a few months to this week, and the BBC reports that PredPol is embedding itself in police departments across the US, with more than 50 departments now using it, as well as a handful of forces in the UK. Kent Constabulary, for one, claims that street violence fell by 6% following a four-month trial.
Steve Clark, deputy chief of Santa Cruz Police Department in California:
We found that the model was just incredibly accurate at predicting the times and locations where these crimes were likely to occur.
Those working in this space – besides PredPol, which stands for Predictive Policing, other companies include Palantir, CrimeScan and ShotSpotter Missions – say that the AI version of predictive policing beats traditional hot spot analysis, which involves reacting to whatever happened in an area previously, as opposed to anticipating what’s likely to happen in the future.
PredPol co-founder and anthropology professor Jeff Brantingham says that AI and machine learning can spot patterns that are too subtle for humans to pick up on:
Machine learning provides a suite of approaches to identifying statistical patterns in data that are not easily described by standard mathematical models, or are beyond the natural perceptual abilities of the human expert.
Maybe so. Still, studies haven’t shown predictive policing delivering results to brag about. John Hollywood, an analyst at policy research institution Rand Corporation, says that recent advances in analytical techniques have produced only “small, incremental” improvements in crime prediction. We’re talking about results that are 10-25% more accurate than traditional hot-spot mapping, he says:
Current technologies are not much more accurate than traditional methods.
It is enough to help improve deployment decisions, but is far from the popular hype of a computer telling officers where they can go to pick up criminals in the act.
But wait, there’s more: besides raising the suspicion of digital rights advocates and failing to impress Rand analysts, PredPol also managed to expose login pages for 17 US police departments on Tuesday morning, something it seems they failed to predict.
Police are using yet another AI tool to augment their human wetware. It’s called VeriPol: software that uses text analysis and machine learning to identify fake police reports. Computer scientists at Cardiff University and the Charles III University of Madrid claim that VeriPol can identify false robbery reports “with over 80% accuracy.”
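VeriPol’s model isn’t public, but the general approach – scoring a report’s wording against labeled examples – can be sketched with a toy bag-of-words classifier. Everything below, including the tiny “corpus,” is invented purely for illustration:

```python
import math
from collections import Counter

def train(reports):
    """Build per-word log-likelihood ratios from labeled reports
    (True = false report), with add-one smoothing."""
    counts = {True: Counter(), False: Counter()}
    for text, is_false in reports:
        counts[is_false].update(text.lower().split())
    vocab = set(counts[True]) | set(counts[False])
    model = {}
    for w in vocab:
        p_t = (counts[True][w] + 1) / (sum(counts[True].values()) + len(vocab))
        p_f = (counts[False][w] + 1) / (sum(counts[False].values()) + len(vocab))
        model[w] = math.log(p_t / p_f)
    return model

def looks_false(model, text):
    """A positive total score means the wording leans toward the
    'false report' class; unseen words contribute nothing."""
    return sum(model.get(w, 0.0) for w in text.lower().split()) > 0

# Toy labeled corpus -- entirely made up for this sketch
reports = [
    ("my phone was stolen from behind by unknown attacker", True),
    ("backpack stolen from behind suddenly by unknown man", True),
    ("man grabbed my wallet on main street at noon", False),
    ("she took my bag near the station and ran east", False),
]
model = train(reports)
print(looks_false(model, "phone stolen from behind by unknown person"))  # -> True
```

The published VeriPol work reports that vague, attacker-less phrasing is a telltale of false reports; a production system would use far richer features and a proper classifier, but the scoring idea is the same.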