Hacking the Brain With Adversarial Images

[Image: a cat (left) and an adversarially perturbed version of the same photo (right). Image: Google]

In the image above, there's a picture of a cat on the left. On the right, can you tell whether it's a picture of the same cat, or a picture of a similar-looking dog? The difference between the two pictures is that the one on the right has been tweaked slightly by an algorithm to make it difficult for a type of computer model called a convolutional neural network (CNN) to tell what it really is. In this case, the CNN thinks it's looking at a dog rather than a cat, but what's remarkable is that most people think the same thing.

This is an example of what's called an adversarial image: an image specifically designed to fool neural networks into making an incorrect determination about what they're looking at. Researchers at Google Brain decided to figure out whether the same techniques that fool artificial neural networks can also fool the biological neural networks inside our heads, by developing adversarial images capable of making both computers and humans think that they're looking at something they aren't.
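The Google Brain work itself uses a more elaborate attack (perturbations constrained to transfer across an ensemble of models and remain visible to humans), but the basic recipe for an adversarial image is simple. The sketch below shows the classic fast gradient sign method (FGSM), a standard way to build such perturbations; it assumes PyTorch and a pretrained ResNet-18, which are my illustrative choices rather than the models used in the study, and it skips the usual ImageNet input normalization for brevity.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# A pretrained CNN as the "victim" model (illustrative choice, not the study's).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),  # pixels scaled to [0, 1]
])

def fgsm_perturb(image_path: str, epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarially perturbed copy of the image at image_path."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    x.requires_grad_(True)

    # Forward pass: what does the network currently see?
    logits = model(x)
    predicted_label = logits.argmax(dim=1)

    # Backward pass: gradient of the loss with respect to the input pixels.
    loss = F.cross_entropy(logits, predicted_label)
    loss.backward()

    # FGSM step: nudge every pixel a tiny amount in the direction
    # that most increases the loss, then clamp back to valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

With a small epsilon the perturbation is nearly invisible to people yet can flip the model's prediction; the cat-versus-dog image above uses larger, human-visible changes precisely because the goal was to fool humans too.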
