Deepfakes and the War on Trust


OPINION — It started with a voice. In early July, foreign ministers, a U.S. Member of Congress, and a sitting U.S. governor received urgent messages that seemed to come directly from Secretary of State Marco Rubio. The voice messages and texts, sent over Signal, were convincing enough to draw a few responses, and in some cases to spark concern. Only later did cybersecurity teams confirm the truth: Rubio had never sent them. His voice, his persona, even his conversational tone had been convincingly faked by artificial intelligence, a sign that the United States has entered a new era of digital deception.

The Rubio incident is not a one-off; it is a warning shot. That same week, a foreign president, a scientist, actors, singers, a military officer, a group of high school girls, numerous senior citizens, and others were also targeted. Adversaries, whether state-sponsored or criminal, are now using hyper-realistic deepfakes to reach targets in virtually every sector of society. Unlike traditional espionage, which seeks out specific intelligence, deepfakes aim at something even more corrosive: trust itself. They work not by stealing secrets, but by deceiving targets and leaving doubt behind.

Both Russia and the People’s Republic of China have embraced this domain with growing sophistication. Moscow’s now-infamous Doppelgänger campaign began with cloned websites and manipulated news stories to undermine support for Ukraine and fracture confidence in Western institutions. Over the past year, Russian operations have expanded to deploy AI-generated videos and audio impersonations of politicians and journalists, designed to inflame political divisions or provoke missteps.

Beijing’s approach has been quieter but no less ambitious. Its Spamouflage network, also tracked as Dragonbridge, has begun using AI-generated anchors and videos to seed narratives abroad, especially around contested events like Taiwan’s elections. These are precise, sophisticated influence campaigns that blend truth and deception in ways designed to slip past casual scrutiny. The line between disinformation and social engineering is dissolving before our eyes.

Other adversaries have tested the boundaries as well. Early in Russia’s war on Ukraine, a deepfake video of President Zelensky appearing to call on his troops to surrender circulated online before it could be debunked. In 2023, Slovakia faced deepfake audio aimed at swaying public opinion during its elections. And across Europe, fabricated audio of lawmakers has been used to mislead, confuse, or embarrass. Each incident reflects the same underlying reality: the tools of deception are faster, cheaper, and more accessible than the systems we rely on to detect or prevent them.

Today, the threats from deepfakes cut across every layer of society.


On the personal level, Americans already face a surge in non-consensual intimate imagery and AI-driven extortion schemes. A convincing voice call from a child or spouse claiming to be in danger is enough to shake any family. Criminals are exploiting the instinct to trust familiar voices, and many families are unprepared for the speed and realism of these scams.

Organizations and industries are also in the crosshairs. Financial institutions have relied on voice authentication for some time, but that trust can be turned against them. A fake voice message from a CEO authorizing a transfer, or a seemingly routine instruction from a senior manager, can bypass legacy security checks. Deepfake-enhanced phishing attacks are already targeting private-sector executives, and they will not remain confined to the financial sector. Any industry that relies on identity verification, whether healthcare, energy, or supply-chain logistics, will face the same growing threat.

On the national level, the implications are profound. Deepfakes can drive wedges through an already polarized society. Imagine a synthetic video of a U.S. general announcing unauthorized troop movements, or an AI-generated call from a member of Congress confirming a fabricated scandal. Even if debunked, the damage would linger. Adversaries understand that doubt can be as powerful as persuasion, and that false narratives, repeated widely, can erode institutional credibility far faster than it can be repaired.

In this environment, where the technology is racing ahead of the response, the United States must do more to meet the challenge. Creating a convincing voice clone today requires as little as 15 seconds of audio, less than the average social media clip contains. Realistic video fakes can be generated at machine speed, with tools available free of charge or at little cost. While federal agencies and private firms are developing detection methods, those systems are locked in a constant arms race with the next generation of generative AI models.

Unlike traditional intelligence (or even criminal) threats, deepfakes do not always aim to steal secrets. They aim to exploit the most fundamental element of democracy: our ability to know what is true. That shift makes them both difficult to detect and highly effective.


Protecting the nation from deepfake-enabled attacks requires a response as dynamic and multi-layered as the threat itself. Technology is the first line of defense. Tools that can verify the origin and authenticity of digital media through watermarking, cryptographic signatures, and AI-powered detection must move from research labs into widespread use across government and industry. They need to be fast, interoperable, and capable of keeping pace with adversaries who can generate convincing fakes in seconds.
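
To make the cryptographic-signature idea concrete, here is a minimal sketch in Python, assuming a hypothetical scheme in which a publisher distributes a detached Ed25519 signature and a public key alongside each media file. It is illustrative only, not a description of any deployed provenance standard such as C2PA.

```python
# Minimal sketch: verifying a detached Ed25519 signature over a media file.
# The file layout (media file + separate signature + published public key)
# is an assumption for illustration, not a deployed standard.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_media(media_path: str, sig_path: str, pubkey_bytes: bytes) -> bool:
    """Return True only if the signature over the file's exact bytes
    checks out against the publisher's public key."""
    public_key = Ed25519PublicKey.from_public_bytes(pubkey_bytes)
    with open(media_path, "rb") as f:
        media = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()
    try:
        public_key.verify(signature, media)  # raises InvalidSignature on any mismatch
        return True
    except InvalidSignature:
        return False
```

The design point is that verification binds trust to the publisher's key rather than to how the file looks or sounds: any edit to the media, however imperceptible to a human, invalidates the signature.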

Yet technology alone is not enough. Americans must learn to navigate a new world where seeing and hearing are no longer believing. Public education campaigns and workplace training can help individuals recognize suspicious requests, verify information through alternate channels, and report suspected manipulation. Critical sectors, from finance to healthcare, should adopt verification protocols that assume deepfakes are in play and require multi-factor validation for key actions.
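
A verification protocol of that kind can be expressed as a simple policy. The sketch below, with hypothetical names and a hypothetical risk list throughout, encodes one such rule: a high-risk request is never approved on the strength of the channel it arrived on, only after independent confirmation over a different, pre-registered channel.

```python
# Minimal sketch of an out-of-band verification policy. All names and the
# list of high-risk actions are hypothetical, for illustration only.
from dataclasses import dataclass, field

HIGH_RISK_ACTIONS = {"wire_transfer", "credential_reset", "vendor_change"}

@dataclass
class Request:
    action: str                  # what is being asked for
    origin_channel: str          # channel the request arrived on, e.g. "voice"
    confirmations: set[str] = field(default_factory=set)  # channels that re-confirmed it

def approve(req: Request) -> bool:
    """Approve a high-risk request only if it was independently confirmed
    on at least one channel other than the one it arrived on."""
    if req.action not in HIGH_RISK_ACTIONS:
        return True
    return any(ch != req.origin_channel for ch in req.confirmations)

# Example: a "CEO" voice call requesting a transfer is held until the
# request is re-confirmed through a second, pre-registered channel.
transfer = Request(action="wire_transfer", origin_channel="voice")
assert not approve(transfer)                  # voice alone is not enough
transfer.confirmations.add("callback_to_number_on_file")
assert approve(transfer)                      # second channel clears it
```

In practice, the pre-registered channel matters as much as the rule itself: a callback to a number the caller supplies defeats the purpose, while a callback to a number already on file does not.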

Equally important is speed of response. When a deepfake spreads, the window to limit its damage is brief. Agencies and public figures should maintain clear, pre-verified channels for crisis communication, and rapid-response teams should be ready to debunk fakes and reassure the public. Taiwan’s “222” principle, which calls for debunking a deepfake within two hours, using two images and 200 words for ease of social media sharing, offers a model for how democracies can respond effectively in the digital age.

Finally, the United States cannot face this challenge alone. Sharing threat intelligence, building common detection frameworks, and establishing international norms for the use of synthetic media will be critical to defending trust in the democratic world.

As noted, the deepfake impersonation of Secretary Rubio was not an isolated act; it was the opening move in a long campaign to corrode the foundations of public confidence. If adversaries can make Americans doubt the voices of their leaders, the authenticity of their news, or the safety of their institutions, they can achieve strategic objectives without firing a shot.

Meeting this challenge will require more than technical solutions, though technical defenses are necessary. It will demand a cultural shift: a recognition that trust is now a strategic asset, and one under attack. By blending technology, education, policy, and international cooperation, the United States can defend that trust. And given the pace of advances in generative AI, waiting to act is the worst of all options.

The era of digital deception is here, and it will not wait for us to catch up. Voices, faces, and events can be forged in seconds, and the consequences linger long after the truth emerges. Preserving trust in our institutions, our leaders, and one another is now a matter of national security. Our response will decide whether the story of this century is told in our own words or in the fictions of those who would see us divided.

Opinions expressed are those of the author and do not represent the views or opinions of The Cipher Brief.

The Cipher Brief is committed to publishing a range of perspectives on national security issues submitted by deeply experienced national security professionals.

