IBM’s New AI Does Something Amazing: It Learns From “Memories”

Continual Learning

When an AI algorithm learns a new skill, say a video game like StarCraft II, it can get good enough to topple the best human pros.

But that’s only true if everyone plays by the rules. Change the parameters of the game, and the AI will find itself totally unable to adapt. AI that excels at the game Pong can’t handle even the slightest shift in distance between the two paddles.

Now, new IBM research set to be presented at an AI conference in May could change that. The tech company says it’s built an algorithm that can learn on the fly, leveraging something resembling a virtual memory to adapt to a changing environment without needing to be trained from scratch.

When it was playing a game of Flappy Bird, for instance, the algorithm could continue to play even as the distance between pipes and obstacles kept changing, according to the research paper recently shared online by the IBM Watson AI lab. It's a remarkable example of flexible reasoning by a next-generation AI, and perhaps a sign of things to come.

Clean Slate

The end goal of projects like this is to build artificial general intelligence, the sort of human-like or superhuman AI from science fiction. This new research isn't there yet, but it does enable AI to learn in a more humanlike fashion by mimicking the brain's flexibility and its ability to update its knowledge base over time.

IBM scientist Matt Riemer describes how his team's research tackles the problem of forgetful AI in an unpublished blog post, reviewed by Futurism, that hasn't yet undergone IBM's internal review. Typically, an algorithm falls prey to what's called "catastrophic forgetting": as soon as it's trained on a new task, it wipes the slate clean, losing all of its previous training.
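
To see what catastrophic forgetting looks like in miniature, here is a toy sketch (an illustrative example, not IBM's algorithm): a one-parameter model is trained with plain gradient descent on task A, then naively retrained on task B, and its performance on task A collapses.

```python
# Toy illustration of "catastrophic forgetting" (hypothetical example,
# not IBM's actual method): a one-parameter linear model y = w * x is
# trained on task A, then on task B. Because naive retraining simply
# overwrites the weight, fitting task B destroys the task A solution.

def sgd_fit(w, data, lr=0.1, epochs=50):
    """Plain stochastic gradient descent on squared error for y = w * x."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)**2
            w -= lr * grad
    return w

def mse(w, data):
    """Mean squared error of the model on a dataset."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

task_a = [(x, 2 * x) for x in (-1.0, -0.5, 0.5, 1.0)]   # task A: y = 2x
task_b = [(x, -3 * x) for x in (-1.0, -0.5, 0.5, 1.0)]  # task B: y = -3x

w = 0.0
w = sgd_fit(w, task_a)
err_a_before = mse(w, task_a)   # near zero: task A is learned

w = sgd_fit(w, task_b)          # naive retraining on task B...
err_a_after = mse(w, task_a)    # ...wipes out the task A solution

print(f"task A error after training on A: {err_a_before:.6f}")
print(f"task A error after training on B: {err_a_after:.6f}")
```

The single weight can only hold one task at a time, which is the whole problem in a nutshell: without some mechanism for retaining old knowledge, learning the new task means unlearning the old one.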

Other scientists have tackled catastrophic forgetting. One team from Google's DeepMind even built an algorithm with a semblance of an imagination that allowed it to better store "memories." This new research tackles a similar problem as the DeepMind research, but from a different angle, IBM spokesperson Fiona Doherty told Futurism.

Gradual Transfer

But Riemer wrote that keeping an algorithm from forgetting isn’t as good as making AI that can adapt and learn new things.

“However, approaches that only consider stabilizing continual learning by reducing ‘forgetting’ are only looking at half the picture, so it is easy to construct domains where they fail,” Riemer wrote in the blog.

The main difference is that IBM's team found a way to train AI so that when it encounters changes to its environment, whether it's the distance between pipes in Flappy Bird or anything else, its existing knowledge and training transfers to the new task instead of interfering with it.
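
One common way continual-learning researchers make the transfer-versus-interference distinction concrete (an illustrative framing, not necessarily IBM's exact formulation) is via the alignment of loss gradients: if the gradients from two examples point in the same direction, learning one helps the other (transfer); if they point in opposite directions, learning one hurts the other (interference). A minimal sketch:

```python
# Illustrative sketch (not IBM's published method): quantify "transfer"
# vs. "interference" between two training examples as the dot product
# of their loss gradients. Positive dot product -> updates reinforce
# each other (transfer); negative -> updates conflict (interference).

def gradient(w, x, y):
    """Gradient of squared error (w . x - y)**2 w.r.t. each weight."""
    pred = sum(wi * xi for wi, xi in zip(w, x))
    return [2 * (pred - y) * xi for xi in x]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

w = [0.5, -0.2]  # arbitrary weights for a two-input linear model

# Two examples that pull the weights the same way: transfer.
g1 = gradient(w, [1.0, 0.0], 1.0)
g2 = gradient(w, [2.0, 0.0], 2.0)
print("aligned examples, dot(g1, g2) =", dot(g1, g2))      # positive

# Two examples that pull the weights in opposite directions: interference.
g3 = gradient(w, [1.0, 0.0], 1.0)
g4 = gradient(w, [1.0, 0.0], -1.0)
print("conflicting examples, dot(g3, g4) =", dot(g3, g4))  # negative
```

Training the model so that gradients across tasks tend to align, rather than conflict, is the intuition behind turning interference into transfer.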

Ultimately, Riemer writes, the goal is to create AI that can go do its own thing, learning and adapting without needing a human to supervise and hold its hand along the way.
