Facebook doubles down on detecting and attributing deepfakes
The company's new method relies on reverse engineering, working back from a single AI-generated image to the generative model used to produce it.
“Our reverse engineering method takes image attribution a step further by helping to deduce information about a particular generative model based only on the deepfakes it produces,” said Facebook research scientists Xi Yin and Tal Hassner.
It is the first time researchers have been able to identify properties of the model used to create a deepfake without any prior knowledge of that model.
Deepfakes are video forgeries that make people appear to say things they never did, such as the forged videos of Facebook CEO Mark Zuckerberg and US House Speaker Nancy Pelosi that went viral.
Deepfakes have become so believable in recent years that it can be difficult to tell them apart from real images.
Image attribution can identify a deepfake’s generative model if it was one of a limited number of generative models seen during training.
But the vast majority of deepfakes will have been created by any of an effectively unlimited number of models not seen during training.
“During image attribution, those deepfakes are flagged as having been produced by unknown models, and nothing more is known about where they came from, or how they were produced,” said Facebook.
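The closed-set limitation described above can be illustrated with a toy sketch (not Facebook's actual system): attribution behaves like a nearest-match lookup over fingerprints of known generative models, with a distance threshold deciding when to fall back to “unknown”. All names, fingerprint values, and the threshold below are hypothetical.

```python
# Toy illustration of closed-set image attribution with an "unknown"
# fallback. Each known generative model is represented here by a made-up
# fingerprint vector; a deepfake is attributed to the nearest known model
# only if its fingerprint is close enough, otherwise it is flagged unknown.
import numpy as np

# Hypothetical fingerprints for the limited set of models seen in training.
KNOWN_FINGERPRINTS = {
    "gan_a": np.array([1.0, 0.0, 0.0]),
    "gan_b": np.array([0.0, 1.0, 0.0]),
}

def attribute(fingerprint, threshold=0.5):
    """Return the closest known model, or 'unknown' if none is near enough."""
    best_name, best_dist = "unknown", threshold
    for name, ref in KNOWN_FINGERPRINTS.items():
        dist = np.linalg.norm(fingerprint - ref)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name

print(attribute(np.array([0.9, 0.1, 0.0])))  # near gan_a -> attributed
print(attribute(np.array([0.3, 0.3, 0.9])))  # far from both -> "unknown"
```

Facebook's reverse-engineering approach aims to go beyond that “unknown” bucket by estimating properties of the unseen model itself.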
The company said that with the new method, researchers will now be able to obtain more information about the model used to produce particular deepfakes.
“Our method will be especially useful in real-world settings where the only information deepfake detectors have at their disposal is often the deepfake itself,” Facebook said.
To combat the spread of disinformation, Microsoft last year also unveiled a tool to spot deepfakes, or synthetic media: photos, videos or audio files manipulated by Artificial Intelligence (AI) that are very hard to identify as fake.