The real danger of deepfake videos is that we may question everything
FAKE videos created by artificial intelligence, known as deepfakes, are becoming incredibly convincing. They show people saying or doing things they never said or did, and recent technological leaps have made producing realistic ones easier than ever (see “AI can make high-definition fake videos from just a simple sketch”).
Although having fakes masquerade as the genuine article is a risk, it may not be the main problem. Instead, the bigger danger could be that, with such convincing fakes around, it becomes easier for someone to falsely dispute the authenticity of the real deal.
A stark illustration of this can be found in the US, where possession of computer-generated images of child sexual abuse is treated more leniently by the courts than the real thing. This has resulted in a rise in the "virtual defence": claiming that illicit images are actually computer generated.
Similarly, in politics, when people are faced with something they disagree with, an increasingly common attack is to deride it as “fake news”.
The only way to fight back is to find sources of information you can trust. If you think spotting fakes is hard, try spotting reals.
This article appeared in print under the headline “The unreal deal”