
What Happens When AI-Generated Lies Are More Compelling than the Truth? - by Nicholas Carr - Behavioral Scientist

Nicholas Carr · May 18, 2025

Fake photographs have been around as long as photographs have been around. A widely circulated picture of Abraham Lincoln taken during the presidential campaign of 1860 was subtly altered by the photographer, Mathew Brady, to make the candidate appear more attractive. Brady enlarged Lincoln’s shirt collar, for instance, to hide his bony neck and bulging Adam’s apple.

In a photographic portrait made to memorialize the president after his assassination, the artist Thomas Hicks transposed Lincoln’s head onto a more muscular man’s body to make the fallen president look heroic. (The body Hicks chose, perversely enough, was that of the proslavery zealot John C. Calhoun.)

By the close of the nineteenth century, photographic negatives were routinely doctored in darkrooms, through such techniques as double exposure, splicing, and scraping and inking. Subtly altering a person’s features to obscure or exaggerate ethnic traits was particularly popular, for cosmetic and propagandistic purposes alike.

But the old fakes were time-consuming to create and required specialized expertise. The new AI-generated “deepfakes” are different. By automating their production, tools like Midjourney and OpenAI’s DALL-E make the images easy to generate—you need only enter a text prompt. They democratize counterfeiting. Even more worrisome than the efficiency of their production is the fact that the fakes conjured up by artificial intelligence lack any referents in the real world. There’s no trail behind them that leads back to a camera recording an image of something that actually exists. There’s no original that was doctored. The fakes come out of nowhere. They furnish no evidence.

Many fear that deepfakes, so convincing and so hard to trace, make it even more likely that people will be taken in by lies and propaganda on social media. A series of computer-generated videos featuring a strikingly realistic but entirely fabricated Tom Cruise fooled millions of unsuspecting viewers when it appeared on TikTok in 2021. The Cruise clips were funny. That wasn’t the case with the fake, sexually explicit images of celebrities that began flooding social media in 2024. In January, X was so overrun by pornographic, AI-generated pictures of Taylor Swift that it had to temporarily block users from searching the singer’s name.

Deepfake deceptions are perfectly suited to political dirty tricks. In an AI-generated audio clip that circulated widely on social media during 2023’s Chicago mayoral election, one of the candidates can be heard praising police brutality. During the 2024 New Hampshire presidential primaries, a computer-generated robocall used a convincing facsimile of Joe Biden’s voice to urge independents to skip the vote.

A year earlier, when Biden announced his reelection bid, the Republican National Committee released through its YouTube channel an ad that offered, as the party put it, “an AI-generated look into the country’s possible future if Joe Biden is reelected.” The ad, which featured deepfake images of boarded-up stores, marauding immigrants, and Chinese jets bombing Taiwan, would have been even easier to create, and considerably more convincing, had the committee had access to Sora, the eerily artful video-generating bot OpenAI unveiled in 2024.

“Every expert I spoke with,” reports an Atlantic writer, “said it’s a matter of when, not if, we reach a deepfake inflection point, after which forged videos and audio spreading false information will flood the internet.”

The concern is valid. But there’s a deeper worry, one that involves the enlargement not of our gullibility but of our cynicism. OpenAI CEO Sam Altman has voiced worries about the use of AI to influence elections, but he says the threat will go away once “everyone gets used to it.”

Some experts believe the opposite is true: The risks will grow as we acclimate ourselves to the presence of deepfakes. Once we take the counterfeits for granted, we may begin doubting the veracity of all the information presented to us through media. We may, in the words of the mathematics professor and deepfake authority Noah Giansiracusa, start to “doubt reality itself.” We’ll go from a world where our bias was to take everything as evidence to one where our bias is to take nothing as evidence.

“As deep fakes become widespread,” the law professors Bobby Chesney and Danielle Citron caution in a California Law Review article, “the public may have difficulty believing what their eyes or ears are telling them—even when the information is real.”

As truth decays, so too will trust. That would have profound political implications. A world of doubt and uncertainty is good for autocrats and bad for democracy, Chesney and Citron argue. “Authoritarian regimes and leaders with authoritarian tendencies benefit when objective truths lose their power.”

In George Orwell’s 1984, the functionaries in Big Brother’s Ministry of Truth spend their days rewriting historical records, discarding inconvenient old facts and making up new ones. When the truth gets hazy, tyrants get to define what’s true. The irony here is sharp. Artificial intelligence, perhaps humanity’s greatest monument to logical thinking, may trigger a revolution in perception that overthrows the shared values of reason and rationality we inherited from the Enlightenment.

In 1957, a Russian scientist-turned-folklorist named Yuri Mirolyubov published a translation of an ancient manuscript—a thousand years old, he estimated—in a Russian-language newspaper in San Francisco. Mirolyubov’s Book of Veles told stirring stories of the god Veles, a prominent deity in pre-Christian Slavic mythology. A shapeshifter, magician, and trickster, Veles would visit the mortal world in the form of a bear, sowing mischief wherever he went.

Mirolyubov claimed that the manuscript, written on thin wooden boards bound with leather straps, had been discovered by a Russian soldier in a bombed-out Ukrainian castle in 1919. The soldier had photographed the boards and given the pictures to Mirolyubov, who translated the work into modern Russian. Mirolyubov illustrated his published translation with one of the photographs, though the original boards, he said, had disappeared mysteriously during the Second World War. Though historians and linguists soon dismissed the folklorist’s Book of Veles as a hoax, its renown spread. Today, it’s revered as a holy text by certain neo-pagan and Slavic nationalist cults.

Myths are works of art. They provide a way of understanding the world that appeals not to reason but to emotion, not to the conscious mind but to the subconscious one. What is most pleasing to our sensibilities—what is most beautiful to us—is what feels most genuine, most worthy of belief. History and psychology both suggest that, in politics as in art, generative AI will succeed in fulfilling the highest aspiration of its creators: to make the virtual feel more authentic than the real.

Though it may have been pushed into the cultural background by the Enlightenment’s stress on objective truth, mythology’s subjective and aesthetic way of defining what’s real never went away. It has always, as the Spiritualism movement of the nineteenth and early twentieth centuries demonstrated, maintained a hold on the public mind. Now, with information’s gatekeepers overthrown and the world awash in words and images of motley provenance, it’s moving back to the fore.

“When man is overwhelmed by information,” Marshall McLuhan saw, “he resorts to myth. Myth is inclusive, time-saving, and fast.” A myth provides a ready-made context for quickly interpreting new information as it flows chaotically around us. It provides the distracted thinker with an all-encompassing framework for intuitive sensemaking.

Mythmaking, more than truth seeking, is what seems likely to define the future of media and of the public square. The reason extraordinarily strange conspiracy theories have spread so widely in recent years may have less to do with the nature of credulity than with the nature of faith. The theories make sense only when understood as myths. Believing that Washington politicians are vampiric pedophiles operating out of a neighborhood pizza joint is little different from believing that a chaos-sowing god stalks the Earth in the form of a bear.

When all the evidence presented to our senses seems unreal, strangeness itself becomes a criterion of truth. A paranoid logic takes hold. The more uncanny the story, the more appealing and convincing it can seem—as long as it fits your worldview. “Beauty is truth,” wrote John Keats, a romantic poet who understood that a rational, scientific conception of existence can never fulfill humanity’s deepest desires. Beauty, as we all know, is in the eye of the beholder.


Excerpted from Superbloom: How Technologies of Connection Tear Us Apart by Nicholas Carr. Copyright (c) 2025 by Nicholas Carr. Used with permission of the publisher, W. W. Norton & Company, Inc. All rights reserved.

