AI to curb the chaos of deepfakes
Artificial intelligence-powered 'deepfake' detection is set to curb the chaos that manipulated images and videos leave in their wake.
In 2018, BuzzFeed published a video showing former US President Barack Obama insulting President Donald Trump. It quickly went viral — but all was not as it seemed. In fact, the video was a fake: its producer had used a software program to digitally alter Obama’s words, making him appear to be saying something he’d never actually said.
The result was troubling. In just one minute and 13 seconds, it showed us the future of deepfakes, a type of digital image and video manipulation that poses a startling threat to our everyday lives.
Imagine a video in which Vladimir Putin seemingly announces a nuclear strike or, closer to home, a stolen Instagram story that could be exploited to gain access to your personal accounts.
Deepfakes are evolving at a rapid rate, and the consequences are already being felt: synthetic identity fraud is now the fastest-growing financial crime in the US.
Technologies to identify deepfakes have long struggled to keep pace. Now, a UTS research project focused on AI-enabled deepfake detection is set to curb the chaos they leave in their wake.
AI to counter AI
“AI-enabled deepfake detection aims at automatic recognition of fake faces from real ones,” says research lead Dr Xin Yu from the UTS School of Computer Science and the Australian Artificial Intelligence Institute. Dr Yu was recently awarded a Discovery Early Career Researcher Award from the Australian Research Council.
“Our research will develop deepfake detection models that address constantly evolving deepfake techniques effectively and efficiently, assisting humans to discover and understand these counterfeits.”
Traditional deepfake detection methods tend to focus on frame-by-frame video analysis or the identification of spatial inconsistencies in still and moving images.
By contrast, Dr Yu’s work will advance the development of two emerging AI-driven techniques: 3D facial geometry modelling, and spatio-temporal and visual-audio consistency detection.
3D facial geometry modelling can identify anomalies in the faces of people depicted in videos by comparing what’s on the screen to a personalised 3D computer model of the subject’s face. Spatio-temporal and visual-audio consistency detection networks search for tiny head and facial movements that don’t align with the words being spoken in the video.
“Even if these inconsistencies appear for only one second, we can trace them,” Dr Yu says.
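To make the visual-audio side of this idea concrete, here is a minimal sketch. It is illustrative only, not the project's code: a per-frame mouth-opening signal (which a real system would extract with a face-landmark tracker) is compared against the audio loudness envelope over one-second windows, and windows where the two stop tracking each other are flagged. All data and the 0.5 threshold are invented for the example.

```python
import numpy as np

def consistency_scores(mouth_opening, audio_energy, win=25):
    """Sliding-window correlation between a per-frame mouth-opening
    signal and the audio loudness envelope. At 25 fps, win=25 means
    each window covers one second of video."""
    scores = []
    for start in range(len(mouth_opening) - win + 1):
        m = mouth_opening[start:start + win]
        a = audio_energy[start:start + win]
        if m.std() < 1e-8 or a.std() < 1e-8:
            scores.append(0.0)           # flat signal: treat as uninformative
        else:
            scores.append(float(np.corrcoef(m, a)[0, 1]))
    return np.array(scores)

# Hypothetical data: 4 seconds of video at 25 fps.
rng = np.random.default_rng(0)
t = np.arange(100)
audio = np.abs(np.sin(t / 4.0)) + 0.05 * rng.random(100)  # loudness envelope
mouth = audio + 0.05 * rng.random(100)                    # lips track the audio...
mouth[50:75] = 0.1 * rng.random(25)                       # ...except one tampered second

scores = consistency_scores(mouth, audio)
flagged = np.where(scores < 0.5)[0]
print(f"{len(flagged)} of {len(scores)} one-second windows flagged as inconsistent")
```

A production detector would learn these correspondences with a neural network rather than a hand-set correlation threshold, but the principle is the same: flag segments where sound and motion decouple, even if only for a second.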
These techniques offer a new direction in the field of deepfake detection. And, because they’re powered by AI, they can be automated to scan huge numbers of videos, detecting and flagging potential deepfakes with no need for human intervention.
Re-training the machines
Beyond the development of the techniques themselves, a third part of the project — called data-efficient model adaptation — will focus on streamlining the process of training machine learning models to keep pace with evolving deepfake technology.
Normally, these models learn to detect potential deepfakes by being exposed to thousands of images or videos. By contrast, Dr Yu wants to develop models that can learn from just a handful of inputs, thereby drastically speeding up the development of future detection systems.
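One common way to achieve this kind of data efficiency, sketched below purely as an illustration (it is not necessarily the project's method), is few-shot fine-tuning: a backbone trained on older deepfakes is frozen, and only a small classification head is adapted on the handful of new examples. The backbone choice and the random tensors standing in for real face crops are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Stand-in backbone; in a real system this would be a detector already
# trained on a large corpus of existing deepfakes.
backbone = resnet18(weights=None)
backbone.fc = nn.Identity()              # expose 512-d features
for p in backbone.parameters():
    p.requires_grad = False              # freeze: prior knowledge stays intact
backbone.eval()

head = nn.Linear(512, 1)                 # only this small head is adapted

# Hypothetical few-shot set: 8 face crops from a *new* deepfake technique
# (label 1 = fake, 0 = real). Random tensors stand in for real images.
images = torch.randn(8, 3, 224, 224)
labels = torch.tensor([1., 1., 1., 1., 0., 0., 0., 0.]).unsqueeze(1)

with torch.no_grad():
    feats = backbone(images)             # extract features once; backbone is frozen

opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(20):                   # a few steps suffice for 8 examples
    loss = loss_fn(head(feats), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"loss after few-shot adaptation: {loss.item():.3f}")
```

Freezing the backbone is what makes a handful of examples enough: only the head's 513 parameters are being fit, rather than the millions in the full network.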
“This is a very challenging task, but it’s also the most important one,” he says.
We will investigate the characteristics of new deepfakes and then train our models to identify those characteristics.
– Dr Xin Yu
“This could be done by designing new network architecture or by creating training methods that aim to establish connections between original and new training data.”
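One plausible reading of that last idea, again as an illustration rather than the project's actual design, is rehearsal-style training: every batch pairs the few available samples of a new deepfake technique with samples from the original training set, so each update relates the two distributions instead of overwriting old knowledge. The data here is invented for the example.

```python
import random

def mixed_batches(original_data, new_data, batch_size=8, new_fraction=0.5, steps=100):
    """Yield training batches that pair the few available samples of a new
    deepfake technique with samples the model was originally trained on,
    so each update connects the two distributions rather than replacing
    what the model already knows."""
    n_new = int(batch_size * new_fraction)
    n_old = batch_size - n_new
    for _ in range(steps):
        yield random.sample(new_data, n_new) + random.sample(original_data, n_old)

# Hypothetical toy data: identifiers standing in for labelled face crops.
original = [f"orig_{i}" for i in range(1000)]   # large existing training set
new = [f"new_{i}" for i in range(8)]            # only a handful of new samples

print(next(mixed_batches(original, new)))       # half new, half original
```

The same idea underpins rehearsal and replay methods in continual learning, where mixing old data into new training runs prevents a model from forgetting what it learned first.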
Stopping online fraud in its tracks
The work will address a series of major security concerns that could have serious social, financial and political consequences if left unchecked. According to the Australian Strategic Policy Institute, deepfakes can ‘enhance cyberattacks, accelerate the spread of propaganda and disinformation online and exacerbate declining trust in democratic institutions.’
As such, this research offers huge opportunities for government agencies and other organisations operating in the defence, technology, and banking and finance sectors.