“The imitator knows nothing worth mentioning of what he imitates… imitation is far removed from truth.” — Plato, The Republic, Book X
As generative artificial intelligence floods the internet with synthetic text and images, researchers are beginning to document a quieter but more troubling shift: the steady drift of machine-made culture toward sameness. New academic work suggests that even without new training data, AI systems can slide toward bland, repetitive outputs—raising questions about creativity, culture and the future of human expression online.
A Drift Toward the Familiar
The concern, according to recent research, is no longer limited to whether future AI systems will be trained on their own output. It is that AI-mediated culture is already being shaped in ways that favor the familiar, the easily describable and the statistically average. As generative tools are increasingly embedded in search engines, social platforms and creative workflows, they are not merely responding to culture but filtering it, nudging visibility toward whatever resembles what has come before.
This process, researchers argue, can happen quietly and without explicit retraining. Repeated use alone, they suggest, is enough to produce convergence: a narrowing of variation that privileges recognizable patterns over novelty. The result is a subtle form of homogenization, one that risks flattening creative expression across media, from images to text.
‘Visual Elevator Music’
In a recent study published in the journal Patterns, an international team of researchers examined what happens when a text-to-image generator is chained to an image-to-text system and left to iterate on its own outputs. Over time, the system began to converge on what the researchers described as “very generic-looking images,” which they dubbed “visual elevator music.”
Notably, this collapse occurred without any new data being introduced. “No new data was added. Nothing was learned,” the study’s authors observed. The degradation emerged purely from repetition, suggesting that autonomous feedback loops in generative systems naturally drift toward common attractors—safe, average representations that minimize deviation.
The finding points to a structural tendency within current generative models: when left to operate autonomously and repeatedly, they gravitate toward sameness rather than diversity.
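For readers who want a feel for the setup, the loop is simple to reproduce in outline. The sketch below is a minimal approximation using openly available stand-in models (Stable Diffusion for generation, BLIP for captioning; both are assumptions here, as the study's exact models, prompts and settings are not specified in this account): generate an image from a prompt, caption it, then feed the caption back in as the next prompt.

```python
# Minimal sketch of the closed generation/captioning loop described above,
# using openly available stand-in models. The specific models, seed prompt
# and settings here are assumptions, not the study's actual configuration.
import torch
from diffusers import StableDiffusionPipeline
from transformers import BlipProcessor, BlipForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"

# Text-to-image generator (stand-in).
t2i = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5"
).to(device)
# Image-to-text captioner (stand-in).
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
captioner = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base"
).to(device)

prompt = "a crowded night market lit by paper lanterns"  # arbitrary starting prompt
for step in range(20):
    image = t2i(prompt).images[0]                        # prompt -> image
    inputs = processor(images=image, return_tensors="pt").to(device)
    out = captioner.generate(**inputs, max_new_tokens=30)
    prompt = processor.decode(out[0], skip_special_tokens=True)  # image -> new prompt
    image.save(f"loop_{step:02d}.png")
    print(f"step {step}: {prompt}")
# In loops like this, captions tend to shed specifics first; the prompt and
# the images then settle toward a generic "average" scene, with no new data
# ever added and nothing ever learned.
```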
Algorithms as Cultural Gatekeepers
The implications extend beyond laboratory experiments. As AI-generated content increasingly competes with human-made work online, algorithms that rank, recommend and surface content are beginning to play a decisive role in shaping taste. Researchers warn that these systems are already floating AI-generated material to the top, reinforcing a feedback loop in which machine-produced sameness crowds out human variation.
Proponents of generative AI often argue that humans will remain the final arbiters of creative decisions. But critics counter that when platforms prioritize efficiency, engagement and predictability, those human choices are increasingly constrained. The result, they argue, is a cultural ecosystem that quietly rewards conformity.
This dynamic is particularly concerning given the sheer scale of AI output. As one researcher noted, a tidal wave of synthetic content now threatens to drown out human-authored material, making it harder for distinct voices and unconventional ideas to gain traction.
The Question of What Comes Next
Generative AI systems rely on massive quantities of training material, much of it scraped from the open internet. Scientists are still grappling with what happens as that reservoir of human-created content is exhausted and models are forced to rely more heavily on synthetic data. Previous studies have found that when AI systems begin to cannibalize their own outputs, quality degrades: neural networks can turn brittle, producing increasingly bland and sometimes mangled results.
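The underlying dynamic is easy to demonstrate with a toy model. The sketch below is an analogy rather than the cited studies' actual setup: repeatedly fit a simple distribution to data, then train the next "generation" only on samples drawn from the fit. Estimation error compounds, and the distribution narrows.

```python
# Toy analogy for training on synthetic data (an illustration, not the cited
# studies' setup): fit a Gaussian, sample a new dataset from the fit, refit,
# repeat. The sample standard deviation underestimates the true one on
# average, so variation tends to decay across generations.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=50)   # generation 0: "human" data

for generation in range(31):
    mu, sigma = data.mean(), data.std()          # "train" on the current data
    if generation % 5 == 0:
        print(f"generation {generation:2d}: std = {sigma:.3f}")
    data = rng.normal(mu, sigma, size=50)        # next generation sees only
                                                 # samples from the fitted model
```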
Beyond technical performance lies a broader cultural question. As AI systems digest and reproduce content at scale, what happens to human creativity—and to the raw material future models will depend on? Industry leaders continue to suggest that AI will be capable enough to replace creative labor. Researchers, however, are increasingly asking what such systems would ultimately be trained on, and whether creativity itself can survive a closed loop of machine-generated culture.
Some scholars argue that preserving variety may require intentional design choices: encouraging systems to deviate from norms, resist convergence, and remain anchored in human collaboration rather than autonomous repetition. Absent such interventions, the research suggests, generative AI may continue its slow drift toward mediocrity—not through malice or design, but through use alone.
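What such a design choice could look like in practice is easiest to see at the sampling stage. The sketch below illustrates one well-known knob, sampling temperature; it is a generic example of the idea, not a proposal drawn from the research above. Raising the temperature flattens a model's output distribution, making less typical choices more likely.

```python
# Sampling temperature as one example of a diversity-preserving knob (a
# generic illustration; the logits below are made up). Dividing scores by a
# higher temperature flattens the softmax, so less typical options keep
# meaningful probability instead of collapsing onto the single safest one.
import numpy as np

logits = np.array([4.0, 2.0, 1.0, 0.5])          # hypothetical option scores

def softmax_with_temperature(logits, temperature):
    z = logits / temperature
    z = z - z.max()                              # subtract max for stability
    p = np.exp(z)
    return p / p.sum()

for t in (0.5, 1.0, 2.0):
    probs = softmax_with_temperature(logits, t)
    print(f"T={t}: {np.round(probs, 3)}")
# Low T concentrates mass on the top option (convergent, "safe" output);
# higher T spreads it out, deliberately resisting collapse toward the norm.
```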
