Google Brain Built a Translator so AI Can Explain Itself
Show Your Work
The tool, called Testing with Concept Activation Vectors, or TCAV for short, can be plugged into machine learning algorithms to suss out how heavily they weighted different factors or types of data before churning out results, Quanta Magazine reports.
With TCAV, people using a facial recognition algorithm would be able to determine how much it factored in race when, say, matching people against a database of known criminals or evaluating their job applications. This way, people would have the choice to question, reject, and maybe even fix a neural network’s conclusions rather than blindly trusting the machine to be objective and fair.
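At its core, the TCAV idea is to train a simple linear classifier that separates a network's internal activations for examples of a human concept (say, "stripes" or a particular demographic attribute) from activations for random examples; the classifier's weight vector is the concept activation vector, and the fraction of inputs whose class gradient points along that vector gives a sensitivity score. The sketch below illustrates that recipe with synthetic numbers — the activations and gradients are faked stand-ins for what a real network would produce, and the simple logistic regression is one of several classifiers the technique could use:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for activations from an intermediate network layer:
# examples showing the concept vs. random counterexamples (synthetic data).
concept_acts = rng.normal(loc=1.0, scale=1.0, size=(50, 8))
random_acts = rng.normal(loc=0.0, scale=1.0, size=(50, 8))

# Train a linear classifier (plain logistic regression via gradient descent)
# to separate the two sets; its normalized weight vector is the
# Concept Activation Vector (CAV).
X = np.vstack([concept_acts, random_acts])
y = np.concatenate([np.ones(50), np.zeros(50)])

w, b, lr = np.zeros(8), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

cav = w / np.linalg.norm(w)

# Conceptual sensitivity: the directional derivative of a class score along
# the CAV. These gradients are also faked here, purely for illustration.
logit_grad = rng.normal(size=(20, 8))   # d(class logit)/d(activations) per input
sensitivities = logit_grad @ cav
tcav_score = np.mean(sensitivities > 0)  # fraction of inputs nudged toward the class
print(f"TCAV score: {tcav_score:.2f}")
```

A score near 1.0 would suggest the concept consistently pushes the model toward that class; with the random gradients used here, it hovers near chance. In a real setting the activations and gradients would come from the trained model under inspection.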
Google Brain scientist Been Kim told Quanta that she doesn’t need a tool that can totally explain AI’s decision-making process. Rather, it’s good enough for now to have something that can flag potential issues and give humans insight into where something may have gone wrong.
She likened the concept to reading the warning labels on a chainsaw before cutting down a tree.
“Now, I don’t fully understand how the chain saw works,” Kim told Quanta. “But the manual says, ‘These are the things you need to be careful of, so as to not cut your finger.’ So, given this manual, I’d much rather use the chainsaw than a handsaw, which is easier to understand but would make me spend five hours cutting down the tree.”