by Cal Morgan
In the early days of the internet, the dream was simple: connect the world, democratize information, and give every person a voice. But as artificial intelligence floods the digital landscape with content, a new kind of crisis is emerging—one that philosophers, technologists, and ethicists are calling truth decay.
It’s not about fake news or deepfakes—though those are symptoms. The deeper issue lies in the erosion of our collective sense of what’s real, as the internet becomes increasingly saturated with content that wasn’t observed, remembered, or felt by anyone at all.
The Echo Chamber Gets Automated
For decades, social media platforms and search engines have shaped our reality through algorithms that favor engagement. Now, generative AI systems are writing articles, creating videos, composing music, and simulating conversations—all trained on the vast corpus of existing internet data.
But that data now includes content produced by AI itself.
“It’s a feedback loop,” says Dr. Anika Lenz, a philosopher of technology at the University of Toronto. “AI trains on AI. Each generation moves further away from human experience, like a photocopy of a photocopy.”
The result? A digital ecosystem where probable truth, whatever a model’s training data makes most statistically likely to be said, displaces actual truth rooted in lived experience and empirical fact.
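Researchers who study this loop often demonstrate it with a toy statistical model rather than a full language model. The Python sketch below is one illustrative version, under invented assumptions (a simple Gaussian “world” and small sample sizes, chosen so the effect shows up quickly): a “model” repeatedly fits a distribution to its data, then generates the next generation’s training data from its own fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human" data, drawn from a wide ground-truth distribution.
samples = rng.normal(loc=0.0, scale=1.0, size=50)

for gen in range(1, 201):
    # "Training": fit a mean and a spread to whatever data currently exists.
    mu, sigma = samples.mean(), samples.std()
    # The next generation is trained only on the previous model's output.
    samples = rng.normal(loc=mu, scale=sigma, size=50)
    if gen % 40 == 0:
        print(f"generation {gen:3d}: mean = {mu:+.3f}, spread = {sigma:.3f}")
```

Run it and the spread shrinks generation after generation; the tails of the original distribution, its rare and surprising data points, are the first casualties. That narrowing is the photocopy effect Lenz describes.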
When the Map Becomes the Territory
Jean Baudrillard, the 20th-century French philosopher, warned of “hyperreality”: a condition in which simulations replace reality, and the representation becomes more real than the thing it represents.
“In hyperreality, it’s not that the truth is hidden—it’s that no one even remembers it existed,” Lenz explains.
In today’s context, that could mean a generation of students reading AI-written summaries of historical events that quietly flatten complexity and nuance, because those details weren’t statistically common enough in the training data. Or scientists citing AI-generated research reviews that confidently synthesize false conclusions.
As generative AI becomes more fluent, more convincing, and more ubiquitous, its outputs begin to overwrite the original human-authored material they were based on. The truth becomes a remix—polished, plausible, but increasingly hollow.
Truth Becomes a Performance
Consider this: a chatbot confidently explains the causes of World War I. It sounds authoritative. It uses credible-sounding citations. But none of it is sourced from original documents. It’s a performance of knowledge, not knowledge itself.
And most users won’t question it.
“There’s a danger in conflating fluency with accuracy,” says Dr. Rishi Talwar, an AI ethicist. “The better AI gets at mimicking human tone and structure, the harder it is to notice when it’s wrong.”
Talwar compares it to the uncanny valley in robotics—where a robot looks almost human, but not quite. “Except in this case, it’s an epistemological uncanny valley. The ideas look true, but they’re not grounded in anything.”
The Human Signal Drowns
As the share of human-generated content shrinks, the digital commons begins to lose the raw messiness, ambiguity, and insight that come from actual experience.
“It’s like if a library were constantly rewriting its own books, based on what readers checked out most often,” says Lenz. “Eventually, you’d get nothing but popular summaries of summaries.”
This threatens not only truth but also diversity of thought. Fringe voices, radical thinkers, and underrepresented histories, already at risk of being sidelined, may disappear altogether in a statistically optimized content regime.
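Lenz’s library analogy can be run as a toy experiment as well. In the hypothetical sketch below (the Zipf-style popularity curve and the corpus sizes are invented purely for illustration), a corpus is republished round after round in proportion to what is already popular, and the number of distinct voices steadily falls.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy commons: 1,000 voices with Zipf-like popularity, i.e. a long tail
# of rarely read "fringe" voices.
voices = np.arange(1000)
popularity = 1.0 / (voices + 1)
popularity /= popularity.sum()

corpus = rng.choice(voices, size=10_000, p=popularity)

for round_num in range(1, 6):
    # Each round republishes content in proportion to how often it already
    # appears; a voice that drops to zero can never come back.
    counts = np.bincount(corpus, minlength=1000).astype(float)
    corpus = rng.choice(voices, size=10_000, p=counts / counts.sum())
    distinct = np.count_nonzero(np.bincount(corpus, minlength=1000))
    print(f"round {round_num}: distinct voices remaining = {distinct}")
```

Nothing in this simulation censors anyone; the rare voices simply stop being sampled. That quiet, mechanical attrition is what a popularity-driven content regime does by default.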
What Comes Next?
Some researchers are calling for digital provenance tools: cryptographic signatures meant to attest that a piece of content originated with an identifiable human author. Others propose rethinking AI training pipelines entirely to preserve the originality of source material.
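The signing half of that idea is well-understood cryptography. The sketch below uses the widely available Python cryptography package and an Ed25519 key pair to show the bare mechanism; real provenance standards such as C2PA layer certificates, manifests, and identity checks on top of it, so treat this as the concept rather than a complete system.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The author holds a private key; readers hold the matching public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

article = b"I was there. This is what I saw."
signature = private_key.sign(article)  # the provenance marker

# A reader checks that the bytes are exactly what the author signed.
try:
    public_key.verify(signature, article)
    print("verified: content matches the author's signature")
except InvalidSignature:
    print("rejected: altered content or wrong key")

# Any edit, even a single word, breaks verification.
try:
    public_key.verify(signature, article.replace(b"there", b"elsewhere"))
except InvalidSignature:
    print("tampered copy rejected")
```

The limit is worth noting: a signature proves that a particular key signed particular bytes, not that a human sat behind the key. That is why provenance proposals pair signing with some form of identity attestation.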
But there’s also a growing philosophical movement urging people to revalue the human. To seek out first-person stories. To resist the slick seduction of the synthetic.
“We’re not powerless,” says Talwar. “We just need to remember what it feels like to touch reality. To have a conversation with a person. To read something written by someone who bled for it.”
As AI-generated content floods the internet, truth is no longer rooted in experience or evidence, but in what’s statistically probable. The risk isn’t just misinformation—it’s a world where reality is overwritten by its own reflection. The challenge ahead isn’t just technical; it’s existential.