Deepfake challenges "will only grow"
Most public attention surrounding deepfakes has focused on large propaganda campaigns, but the problematic new technology is far more insidious, according to a new report by artificial intelligence (AI) and foreign policy experts.
In the new report, the authors discuss deepfake videos, images and audio as well as their related security challenges. The researchers predict the technology is on the brink of being used much more widely, including in targeted military and intelligence operations.
Ultimately, the experts offer security officials and policymakers recommendations for handling the unsettling new technology. In particular, they emphasize the need for the United States and its allies to develop a code of conduct for governments’ use of deepfakes.
What the researchers say: “The ease with which deepfakes can be developed for specific individuals and targets, as well as their rapid movement — most recently through a form of AI known as stable diffusion — point toward a world in which all states and nonstate actors will have the capacity to deploy deepfakes in their security and intelligence operations,” the authors write. “Security officials and policymakers will need to prepare accordingly.”
The leader of the study and his team previously developed TREAD (Terrorism Reduction with Artificial Intelligence Deepfakes), a new algorithm that researchers can use to generate their own deepfake videos. By creating convincing deepfakes, researchers can better understand the technology within the context of security.
Using TREAD, they created sample deepfake videos of deceased Islamic State terrorist Abu Mohammed al-Adnani. While the resulting video looks and sounds like al-Adnani — with highly realistic facial expressions and audio — he is actually delivering words spoken by Syrian President Bashar al-Assad.
The researchers created the lifelike video within hours. The process was so straightforward that the authors say militaries and security agencies should assume that rivals can generate a deepfake video of any official or leader within minutes.
“Anyone with a reasonable background in machine learning can — with some systematic work and the right hardware — generate deepfake videos at scale by building models similar to TREAD,” the authors write. “The intelligence agencies of virtually any country, which certainly includes U.S. adversaries, can do so with little difficulty.”
The authors believe that state and non-state actors will leverage deepfakes to strengthen ongoing disinformation efforts. Deepfakes could help fuel conflict by legitimizing war, sowing confusion, undermining popular support, polarizing societies, discrediting leaders and more. In the short term, security and intelligence experts can counteract deepfakes by designing and training algorithms to identify potentially fake videos, images and audio. That approach, however, is unlikely to remain effective in the long term.
“The result will be a cat-and-mouse game similar to that seen with malware: When cybersecurity firms discover a new kind of malware and develop signatures to detect it, malware developers make ‘tweaks’ to evade the detector,” the authors said. “The detect-evade-detect-evade cycle plays out over time… Eventually, we may reach an endpoint where detection becomes infeasible or too computationally intensive to carry out quickly and at scale.”
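The malware analogy can be made concrete with a toy sketch. The names here (`SignatureDetector`, `evade`) are illustrative, not from the report: exact-match signatures catch a known fake, and a one-byte "tweak" defeats them — which is why real detectors rely on learned statistical features, and why those, too, can eventually be tuned against.

```python
import hashlib

def fingerprint(artifact: bytes) -> str:
    """Signature-style detection: hash the artifact's exact bytes."""
    return hashlib.sha256(artifact).hexdigest()

class SignatureDetector:
    """Flags any artifact whose fingerprint matches a known fake."""
    def __init__(self):
        self.known = set()

    def learn(self, artifact: bytes) -> None:
        self.known.add(fingerprint(artifact))

    def flags(self, artifact: bytes) -> bool:
        return fingerprint(artifact) in self.known

def evade(artifact: bytes, detector: SignatureDetector) -> bytes:
    """Attacker 'tweak': pad the artifact until it is no longer flagged."""
    tweaked = artifact
    while detector.flags(tweaked):
        tweaked += b"\x00"
    return tweaked

# One round of the detect-evade cycle:
detector = SignatureDetector()
fake = b"deepfake-frame-data"
detector.learn(fake)                 # defender ships a signature
assert detector.flags(fake)          # detection works...
tweaked = evade(fake, detector)
assert not detector.flags(tweaked)   # ...until a trivial tweak evades it
```

The same dynamic holds for feature-based detectors; the evasion step just becomes an optimization against the detector's decision boundary rather than a byte-level pad.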
For long-term strategies, the report’s authors make several recommendations:
- Educate the general public to increase digital literacy and critical reasoning
- Develop systems capable of tracking the movement of digital assets by documenting each person or organization that handles the asset
- Encourage journalists and intelligence analysts to slow down and verify information before including it in published articles
- Use information from separate sources, such as verification codes, to confirm legitimacy of digital assets
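The asset-tracking and verification recommendations above can be sketched as a hash-chained chain-of-custody log. This is a minimal illustration, not the report's design, and `CustodyLog`, `handoff`, and `verify` are hypothetical names: each record commits to the asset's content hash, the handler, and the previous record, so any later edit to the asset or the log breaks verification.

```python
import hashlib
import json

def entry_hash(body: dict) -> str:
    """Deterministic hash of a log record's contents."""
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

class CustodyLog:
    """Append-only log of who handled a digital asset (hypothetical sketch)."""
    def __init__(self, asset: bytes):
        self.asset_hash = hashlib.sha256(asset).hexdigest()
        self.records = []

    def handoff(self, handler: str) -> None:
        """Record that `handler` received the asset, chained to the prior record."""
        prev = self.records[-1]["hash"] if self.records else "genesis"
        body = {"asset": self.asset_hash, "handler": handler, "prev": prev}
        self.records.append({**body, "hash": entry_hash(body)})

    def verify(self, asset: bytes) -> bool:
        """Check the asset is unmodified and the custody chain is unbroken."""
        if hashlib.sha256(asset).hexdigest() != self.asset_hash:
            return False
        prev = "genesis"
        for r in self.records:
            body = {"asset": r["asset"], "handler": r["handler"], "prev": r["prev"]}
            if r["prev"] != prev or entry_hash(body) != r["hash"]:
                return False
            prev = r["hash"]
        return True

# Usage: track a video from source to publication.
log = CustodyLog(b"raw-video-bytes")
log.handoff("field source")
log.handoff("wire service")
assert log.verify(b"raw-video-bytes")   # content and chain intact
assert not log.verify(b"edited-video")  # edited content fails verification
```

A journalist or analyst following the "slow down and verify" recommendation could run `verify` against the bytes they actually received before publishing, and the separate-source check maps to comparing the log's `asset` hash against a hash published through an independent channel.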
Above all, the authors argue that the government should enact policies that offer robust oversight and accountability mechanisms for governing the generation and distribution of deepfake content. If the United States or its allies want to “fight fire with fire” by creating their own deepfakes, then policies first need to be agreed upon and put in place. The authors say this could include establishing a “Deepfakes Equities Process,” modelled after similar processes for cybersecurity.
So, what? This unregulatable technology is a clear danger to our democracy and, in conjunction with other sophisticated AI, perhaps to our species.
Join our tribe
Subscribe to Dr. Bob Murray’s Today’s Research, a free weekly roundup of the latest research in a wide range of scientific disciplines. Explore leadership, strategy, culture, business and social trends, and executive health.