UvA and NFI launch study into the recognition of deepfakes
The University of Amsterdam (UvA) and the Netherlands Forensic Institute (NFI) have launched a study into the recognition of deepfakes, as well as hidden messages from criminals.
With deepfake technology, it is possible to impersonate someone in pictures or videos to the point where the viewer does not realize they are looking at a fake. “It is almost impossible to distinguish real from deepfake videos with the naked eye”, said Zeno Geradts, Professor of Forensic Data.
Criminals are increasingly making use of deepfakes. Last month, Dutch, British and Baltic MPs had a conversation with a deepfake imitation of Russian opposition leader Alexei Navalny. The politicians only realized weeks later that they had been deceived.
Deepfakes are also used, for example, to hide the faces of adults in child pornography or to blackmail someone by showing them manipulated images of a kidnapped child.
Current computer models recognize deepfakes in about eight out of ten cases. “You would ideally want at least 99.5 percent of deepfakes to be removed”, said Geradts.
In addition, the research institutes are analyzing how to improve the detection of hidden messages from criminals. Criminals can, for example, let their partners know when and where a drug shipment will arrive via a secret message in a video.
“An old-fashioned example of a hidden message might be that the first letter of words in a sentence collectively form a new word. Nowadays, videos with hidden messages can be digitally crafted”, Geradts explained. The scientists are now looking into how computer systems can improve their recognition of secret messages.
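To make the two kinds of hidden message concrete, here is a minimal illustrative sketch, not the researchers' actual methods: an acrostic reader for the "old-fashioned" first-letter trick, and a least-significant-bit (LSB) embedding of the sort commonly used to hide data in digital images or video frames. All function names and the example values are hypothetical.

```python
# Illustrative sketch only; these are generic steganography examples,
# not the NFI's detection techniques.

def extract_acrostic(sentence: str) -> str:
    """Old-fashioned: the first letters of the words form a hidden word."""
    return "".join(word[0] for word in sentence.split())

def hide_lsb(pixels: list[int], bits: list[int]) -> list[int]:
    """Digital: overwrite the least significant bit of each pixel value
    (e.g. in a video frame) with one bit of the secret message."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def reveal_lsb(pixels: list[int]) -> list[int]:
    """Read the hidden bits back out of the pixel values."""
    return [p & 1 for p in pixels]

# A hypothetical sentence whose first letters spell "noon":
print(extract_acrostic("new orders on network"))      # prints "noon"
print(hide_lsb([100, 101, 102, 103], [1, 0, 1, 1]))   # [101, 100, 103, 103]
```

Because changing the lowest bit alters each pixel value by at most one, the carrier image looks unchanged to a viewer, which is exactly what makes such messages hard to detect automatically.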
A third part of the research will focus on improving speech recognition, for example by combining someone’s voice with the location data of their phone.