A learning bias found in kids could help make A.I. technology better
The theory behind machine learning tools like neural networks is that they function, and more specifically learn, in a way similar to the human brain. Just as we discover the world through trial and error, so too does modern artificial intelligence. In practice, however, things are a bit different. There are aspects of childhood learning that machines can’t replicate, and these are among the things that, in many domains, make humans the superior learners.
Researchers at New York University are working to change that. Kanishk Gandhi and Brenden Lake have explored how something called “mutual exclusivity bias,” which is present in kids, could help make A.I. better at learning tasks like understanding language.
“When children endeavor to learn a new word, they rely on inductive biases to narrow the space of possible meanings,” Gandhi, a graduate student in New York University’s Human & Machine Learning Lab, told Digital Trends. “Mutual exclusivity (ME) is a belief that children have that if an object has one name, it cannot have another. Mutual exclusivity helps us in understanding the meaning of a novel word in ambiguous contexts. For example, [if] children are told to ‘show me the dax’ when presented with a familiar and an unfamiliar object, they tend to pick the unfamiliar one.”
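In code, that decision rule might look something like the following sketch. This is a hypothetical illustration of the idea, not the researchers’ implementation; the lexicon and object names are made up for the example.

```python
# A minimal sketch (not the authors' code) of the mutual exclusivity rule
# described above: a learner that already knows names for some objects
# assumes a novel word refers to an object it cannot yet name.

known_lexicon = {"ball": "ball_object", "cup": "cup_object"}  # hypothetical known words

def pick_referent(novel_word, candidate_objects, lexicon):
    """Return the candidate object that no known word already names."""
    named_objects = set(lexicon.values())
    unnamed = [obj for obj in candidate_objects if obj not in named_objects]
    # Mutual exclusivity: prefer an object that has no name yet.
    return unnamed[0] if unnamed else candidate_objects[0]

# "Show me the dax" with a familiar cup and an unfamiliar object present:
print(pick_referent("dax", ["cup_object", "novel_object"], known_lexicon))
# -> "novel_object"
```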
The researchers wanted to explore two questions with their work. One was whether deep learning algorithms trained using common learning paradigms would reason with mutual exclusivity. The other was whether reasoning by mutual exclusivity would help learning algorithms in tasks that are commonly tackled using deep learning.
To carry out these investigations, the researchers first trained 400 neural networks to associate pairs of words with their meanings. The neural nets were then tested on 10 words they had never seen before. The networks predicted that the new words likely corresponded to known meanings rather than unknown ones, suggesting that A.I. does not have a mutual exclusivity bias. Next, the researchers analyzed datasets that help A.I. translate between languages, which helped show that a mutual exclusivity bias would be beneficial to machines.
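To get a feel for what such a test involves, here is a rough sketch of that kind of probe, written in PyTorch. It is an illustration under simplified assumptions rather than the authors’ actual experiment: the network sizes, training setup, and one-hot “words” and “meanings” are all invented for the example.

```python
# Train a network to map one-hot "words" to "meanings", then ask how much
# probability a never-seen word places on never-seen meanings.
import torch
import torch.nn as nn

n_words, n_meanings, n_trained = 20, 20, 10  # 10 word-meaning pairs held out

model = nn.Sequential(nn.Linear(n_words, 32), nn.ReLU(), nn.Linear(32, n_meanings))
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Train only on the first n_trained word-meaning pairs (word i -> meaning i).
x_train = torch.eye(n_words)[:n_trained]
y_train = torch.arange(n_trained)
for _ in range(500):
    optimizer.zero_grad()
    loss_fn(model(x_train), y_train).backward()
    optimizer.step()

# Probe: present the held-out "novel" words and see where the probability goes.
with torch.no_grad():
    probs = torch.softmax(model(torch.eye(n_words)[n_trained:]), dim=1)
    mass_on_novel_meanings = probs[:, n_trained:].sum(dim=1).mean()
print(f"Probability mass on unseen meanings: {mass_on_novel_meanings:.2f}")
```

A learner with a mutual exclusivity bias would put most of that probability mass on the unseen meanings; a standard network trained this way typically does not.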
“Our results show that these characteristics are poorly matched to the structure of common machine learning tasks,” Gandhi continued. “ME can be used as a cue for generalization in common translation and classification tasks, especially in the early stages of training. We believe that exhibiting the bias would help learning algorithms to learn in faster and more adaptable ways.”
As Gandhi and Lake write in a paper describing their work: “Strong inductive biases allow children to learn in fast and adaptable ways … There is a compelling case for designing neural networks that reason by mutual exclusivity, which remains an open challenge.”