AI is making progress, but it’s unlikely to succeed anytime soon in one key area
It will take time, but at some point every application will have its share of “AI Inside.” Today, however, we’re far from that point, and false advertising of AI capabilities isn’t helping, something Arvind Narayanan, Associate Professor of Computer Science at Princeton, has called out as “snake oil” in a recent presentation. It’s not that there aren’t real, useful ways to employ AI today, he stresses, but rather that “Much of what’s being sold as ‘AI’ today is snake oil: it does not and cannot work.”
To help separate legitimate AI claims from hype, where does Narayanan believe we’re making real progress, and which claims deserve myth-busting?
Getting real about AI
As with any new technology, aspirations to embrace it always outpace actual production usage, and AI is no different. Even so, according to a Gartner study released earlier in 2019, 59% of enterprises surveyed are using AI today, with an average of four AI/ML projects deployed each. Gartner estimates that this average will nearly triple to 10 projects in 2020, double to 20 by 2021, and hit 35 by 2022. “We see a substantial acceleration in AI adoption this year,” said Jim Hare, research vice president at Gartner.
According to that same research, organizations tend to use AI/ML in the areas of customer experience (supporting decision making and making recommendations to employees, like offering up near real-time data to customer service representatives) and task automation (e.g., invoicing and contract validation in finance). These are reasonable ways to use AI, according to Narayanan.
Less reasonable are survey responses that suggest 54% of the general populace believes that AI will be able to perform “almost all tasks that are economically relevant today better than the median human (today) at each task.” As Narayanan pointed out, “AI experts have a more modest estimate that Artificial General Intelligence or Strong AI is about 50 years away, but history tells us that even experts tend to be wildly optimistic about AI predictions.”
According to Narayanan, there are two key areas where AI performs well today, the first being “Perception,” a category under which he includes:
- Content identification (Shazam, reverse image search)
- Facial recognition
- Medical diagnosis from scans
- Speech to text
- Deepfakes
In the area of Perception, Narayanan said, “AI is already at or beyond human accuracy” in the areas identified above (e.g., content identification) and “is continuing to get better rapidly.” The reason it keeps getting better, he stressed, is simple:
The fundamental reason for progress is that there is no uncertainty or ambiguity in these tasks — given two images of faces, there’s ground truth about whether or not they represent the same person. So, given enough data and compute, AI will learn the patterns that distinguish one face from another. There have been some notable failures of face recognition, but I’m comfortable predicting that it will continue to get much more accurate….
Of course, he noted, it’s that very accuracy that means we must be careful about how it’s used.
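To make that point concrete, here is a minimal sketch of a perception task framed as supervised learning: face verification reduced to binary classification over pairs of embeddings. The 128-dimensional “embeddings” and the data below are synthetic stand-ins, not output from any real face-recognition system; the point is only that unambiguous ground-truth labels let a model learn the patterns that distinguish same from different.

```python
# Sketch: face verification as supervised binary classification.
# The embeddings are synthetic stand-ins for features a real
# face-recognition model would extract; data is illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_pair(same: bool) -> np.ndarray:
    """Return element-wise distances between two synthetic face embeddings."""
    a = rng.normal(size=128)
    # Same person: second embedding is a noisy copy; different: independent draw.
    b = a + rng.normal(scale=0.3, size=128) if same else rng.normal(size=128)
    return np.abs(a - b)

labels = rng.integers(0, 2, size=2000)  # ground truth: same person or not
features = np.stack([make_pair(bool(y)) for y in labels])

X_train, X_test, y_train, y_test = train_test_split(features, labels, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"verification accuracy: {clf.score(X_test, y_test):.2%}")
```

Because the label is a matter of fact rather than opinion, more data and compute translate directly into better accuracy, which is exactly the dynamic Narayanan describes.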
The second area where Narayanan indicates AI performs well, though not as well as Perception, is Automating Judgment, which includes:
- Spam detection
- Detection of copyrighted material
- Automated essay grading
- Hate speech detection
- Content recommendation
As he explained, “Humans have some heuristic in our minds, such as what is spam and not spam, and given enough examples, the machine tries to learn it. AI will never be perfect at these tasks because they involve judgment and reasonable people can disagree about the correct decision.” AI will continue to improve in such areas, though we’ll need to figure out the proper procedures for correcting machine-driven decisions that diverge too far from human judgment.
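The spam example maps directly onto a few lines of code. Below is a minimal sketch, using a made-up toy corpus, of a classifier learning the human spam/ham heuristic from labeled examples.

```python
# Sketch: "automating judgment." A classifier learns the human
# spam/not-spam heuristic from labeled examples. The tiny corpus is
# invented for illustration; real systems train on far more data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "win a free prize now", "cheap meds limited offer",
    "claim your reward today", "lunch at noon tomorrow?",
    "meeting notes attached", "can you review my draft",
]
labels = ["spam", "spam", "spam", "ham", "ham", "ham"]  # human judgment calls

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["free reward offer", "see you at the meeting"]))
```

Note that the labels themselves encode judgment calls on which reasonable people can disagree, which is why, as Narayanan says, such systems will never be perfect.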
In these two areas, AI is imperfect but helpful, and getting better. But in the area of Predicting Social Outcomes, Narayanan argues, AI’s role is “fundamentally dubious.”
Putting AI back in its place
In such areas, where ethical concerns get bundled up with accuracy, AI is not only a poor predictor today, but unlikely to get better anytime soon. Examples include:
- Predicting criminal recidivism
- Predicting job performance
- Predictive policing
- Predicting terrorist risk
- Predicting at-risk kids
Nor is this simply a matter of throwing more data at the problem. Using an example of predicting child outcomes based on 13,000 family characteristics, Narayanan complained that “‘AI’ [was] hardly better than [a] simple linear formula” that used just four characteristics. Manual scoring, he continued, works better for predicting outcomes.
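Narayanan’s comparison can be illustrated with synthetic data (the numbers below are invented, not from the study he cites): a complex model trained on hundreds of features versus a plain linear regression on just four. When the outcome is dominated by noise, the simple formula holds its own.

```python
# Illustrative sketch: on noisy social-outcome data, a four-feature
# linear model can match a far more complex model trained on hundreds
# of features. Synthetic data; not results from the study cited above.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 500))               # hundreds of mostly irrelevant features
signal = X[:, :4] @ np.array([0.5, 0.4, 0.3, 0.2])
y = signal + rng.normal(scale=1.0, size=n)  # outcome dominated by noise

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

complex_model = RandomForestRegressor(n_estimators=100, random_state=1).fit(X_tr, y_tr)
simple_model = LinearRegression().fit(X_tr[:, :4], y_tr)  # just four characteristics

print(f"complex model R^2:    {r2_score(y_te, complex_model.predict(X_te)):.2f}")
print(f"4-feature linear R^2: {r2_score(y_te, simple_model.predict(X_te[:, :4])):.2f}")
```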
Furthermore, when we rely on pseudo-AI for predicting social outcomes, we run into the problem of explainability (or, rather, the inability to explain the prediction): “Instead of points on a driver’s license, imagine a system in which every time you get pulled over, the police officer enters your data into a computer. Most times you get to go free, but at some point the black box system tells you you’re no longer allowed to drive.” With no explanation why, we might enter a new, even more destructive era of road rage.
Again, this isn’t to suggest that AI isn’t a potent force for good in society. It is, and AI will eventually find its way into almost every application. This is a very good thing. It only becomes bad, in Narayanan’s thinking, when we misapply AI to predict social outcomes without the backstop of even being able to explain to the employee, would-be terrorist, etc. why they’re being fired, arrested, or worse.