Huge flaw found in how facial features are measured from images

How is it that our brains – the original face recognition program – can recognize somebody we know, even when they’re far away? As in, how do we recognize people we know despite their faces appearing to flatten out the farther away they are?

Cognitive experts say we do it by learning a face’s configuration – the specific pattern of feature-to-feature measurements. Then, even as our friends’ faces get optically distorted by being closer or further away, our brains employ a mechanism called perceptual constancy that optically “corrects” face shape… At least, it does when we’re already familiar with how far apart our friends’ features are.

But according to Dr. Eilidh Noyes, who lectures in Cognitive Psychology at the University of Huddersfield in the UK, the ease of accurately identifying people’s faces – enabled by our image-being-tweaked-in-the-wetware perceptual constancy – falls off when we don’t know somebody.

This also means that there’s a serious problem with recognition systems that use what’s called anthropometry: the measurement of facial features from images. Because the apparent distances between a face’s features vary with the camera-to-subject distance, anthropometry just isn’t a reliable method of identification, Dr. Noyes says:

People are very good at recognizing the faces of their friends and family – people who they know well – across different images. However, the science tells us that when we don’t know the person/people in the image(s), face matching is actually very difficult.
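To see why the measurements themselves are unstable, consider a simple pinhole-camera model. The sketch below is purely illustrative and not drawn from Dr. Noyes’ work: it projects a handful of made-up 3D face landmarks onto an image plane at several camera-to-subject distances. Because the nose sits closer to the camera than the eyes, the ratio of nose width to interocular distance measured in the image shifts as the camera moves.

```python
# Illustrative sketch only: hypothetical landmark coordinates (in cm) and a
# simple pinhole camera, showing that feature-to-feature measurements taken
# from an image are not invariant to camera-to-subject distance.

def project(landmark, camera_distance, focal_length=50.0):
    """Project a 3D landmark (x, y, z) onto the image plane of a pinhole
    camera on the z-axis. z is how far the landmark protrudes toward the
    camera from the face plane (e.g. the nose sticks out a little)."""
    x, y, z = landmark
    depth = camera_distance - z              # camera-to-landmark distance
    return (focal_length * x / depth, focal_length * y / depth)

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

# Made-up landmarks: eyes lie on the face plane, nostrils protrude ~1.5 cm.
landmarks = {
    "left_eye":      (-3.1,  2.0, 0.0),
    "right_eye":     ( 3.1,  2.0, 0.0),
    "left_nostril":  (-1.7, -1.5, 1.5),
    "right_nostril": ( 1.7, -1.5, 1.5),
}

for camera_distance in (25.0, 60.0, 300.0):   # selfie range up to CCTV range, in cm
    img = {name: project(p, camera_distance) for name, p in landmarks.items()}
    interocular = dist(img["left_eye"], img["right_eye"])
    nose_width = dist(img["left_nostril"], img["right_nostril"])
    # If anthropometric ratios were distance-invariant, this number would be constant.
    print(f"{camera_distance:5.0f} cm: nose width / interocular = "
          f"{nose_width / interocular:.3f}")
```

With these toy numbers, the ratio drifts by roughly 6% between arm’s length and a few metres away – small, but enough that a system matching faces purely on such measurements sees the “same” face as a different configuration at different distances.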
