AI-generated ‘skeleton keys’ fool fingerprint scanners | Cyber Security
We’ve had fake videos, fake faces, and now, researchers have developed a method for AI systems to create their own fingerprints.
Not only that, but the machines have worked out how to create prints that fool fingerprint readers more than one time in five. The research could present problems for fingerprint-based biometric systems that rely on unique patterns to grant user access.
The research team, working at New York University Tandon and Michigan State University, exploited the fact that fingerprint readers don’t scan a whole finger at once. Instead, they scan parts of fingerprints and match those against what’s in the database. Previous research found that some of these partial prints contain features common to many other partial prints, giving them the potential to act as a kind of skeleton key for fingerprint readers. Such prints are called MasterPrints.
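To see why partial matching creates this weakness, consider a toy model. The sketch below is purely illustrative, not the paper's method: each "partial print" is a random feature vector, each user enrolls several of them, and the reader grants access if a probe resembles any one stored template. A single probe can then unlock many accounts.

```python
import random

random.seed(0)

def random_print(n=64):
    # Toy "partial fingerprint" represented as a feature bit-vector.
    return [random.randint(0, 1) for _ in range(n)]

def similarity(a, b):
    # Fraction of matching features between two partial prints.
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Each enrolled user stores several partial templates (hypothetical values).
database = {user: [random_print() for _ in range(5)] for user in range(100)}

def reader_accepts(probe, templates, threshold=0.65):
    # A small sensor matches the probe against ANY stored partial print,
    # so an attacker only needs to resemble one template per user.
    return any(similarity(probe, t) >= threshold for t in templates)

# Count how many accounts one arbitrary probe would unlock.
probe = random_print()
unlocked = sum(reader_accepts(probe, tpls) for tpls in database.values())
```

The more partial templates each user enrolls, the more chances a single MasterPrint-style probe gets, which is exactly the asymmetry the researchers targeted.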
The researchers set out to train a neural network to create its own MasterPrints that could be used to fool fingerprint readers into granting access. They succeeded, with a system that they call Latent Variable Evolution (LVE), and published the results in a paper.
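The core of LVE is searching the latent space of a trained generator for inputs whose outputs fool a fingerprint matcher. The sketch below shows that idea in miniature, with a stand-in generator and a stand-in matcher score in place of the real networks; all names and values are illustrative assumptions, not the paper's implementation.

```python
import random

random.seed(1)

DIM = 16          # latent-vector dimension (assumed)
POP = 20          # population size (assumed)
GENERATIONS = 30

def generator(z):
    # Stand-in for a trained GAN generator: maps a latent vector
    # to a synthetic "print" (here just a deterministic transform).
    return [zi * zi - zi for zi in z]

def matcher_score(print_):
    # Stand-in fitness function. A real LVE run would instead ask a
    # fingerprint matcher how many enrolled templates this print matches.
    return -sum((p - 0.25) ** 2 for p in print_)

def mutate(z, sigma=0.1):
    # Small Gaussian perturbation of a latent vector.
    return [zi + random.gauss(0, sigma) for zi in z]

# Evolve latent vectors so the generated prints maximize the matcher score.
population = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(POP)]
init_best = max(matcher_score(generator(z)) for z in population)

for _ in range(GENERATIONS):
    ranked = sorted(population, key=lambda z: matcher_score(generator(z)),
                    reverse=True)
    parents = ranked[:POP // 2]          # keep the fittest half (elitism)
    children = [mutate(random.choice(parents)) for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=lambda z: matcher_score(generator(z)))
```

Because the fittest latent vectors are carried forward unchanged each generation, the best score can only improve; the evolved vector, fed through the generator, is the candidate MasterPrint.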
They used a common AI tool for creating realistic data, called a Generative Adversarial Network (GAN). They trained this network to recognize realistic images by feeding it lots of them. They did the same with artificially generated images so that it learned the difference between the two. Then they took the statistical model that the network produced as it learned and fed it to a generator, which used that model to produce realistic images, repeating the process so that it got better at it.
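The back-and-forth between the two networks can be shown with a deliberately tiny toy GAN. Here "real" data is one-dimensional (drawn from a normal distribution around 4), the discriminator is a single logistic unit, and the generator just learns an offset; every hyperparameter is an illustrative assumption, far simpler than anything used on fingerprint images.

```python
import math
import random

random.seed(42)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy one-dimensional GAN. "Real" samples come from N(4, 1); the
# generator shifts standard normal noise by a learned offset theta.
w, b = 0.0, 0.0          # discriminator: D(x) = sigmoid(w*x + b)
theta = 0.0              # generator:     G(z) = z + theta
lr, batch, steps = 0.05, 32, 2000

for _ in range(steps):
    real = [random.gauss(4, 1) for _ in range(batch)]
    fake = [random.gauss(0, 1) + theta for _ in range(batch)]

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real = [sigmoid(w * x + b) for x in real]
    d_fake = [sigmoid(w * x + b) for x in fake]
    grad_w = (sum(-(1 - dr) * x for dr, x in zip(d_real, real))
              + sum(df * x for df, x in zip(d_fake, fake))) / batch
    grad_b = (sum(-(1 - dr) for dr in d_real)
              + sum(df for df in d_fake)) / batch
    w -= lr * grad_w
    b -= lr * grad_b

    # Generator step: shift theta so the fakes score higher under D
    # (non-saturating loss, minimizing -log D(fake)).
    d_fake = [sigmoid(w * x + b) for x in fake]
    grad_theta = sum(-(1 - df) * w for df in d_fake) / batch
    theta -= lr * grad_theta

# After training, theta drifts toward 4, so fakes resemble real data.
```

Each round, the discriminator gets slightly better at spotting fakes, and the generator uses that signal to make its fakes slightly harder to spot, which is the loop the article describes at image scale.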