Experts are warning that the growing flood of AI-generated text online may be beginning to shape the way people speak, think and communicate, raising concerns about a deeper cultural and cognitive shift. The warning centres on the spread of recognisable language patterns associated with ChatGPT-style writing, including repetitive sentence structures, specific turns of phrase and an increasing uniformity in tone.
Writers Warn of a Growing Linguistic Blind Spot
Historian Ada Palmer and cryptographer and author Bruce Schneier argue that large language models suffer from a basic weakness: they are trained heavily on written material but not enough on informal human conversation. They say unscripted face-to-face and voice-to-voice exchanges make up the vast majority of speech and are a vital component of human culture.
They warn that this gap could lead humans to adopt the linguistic patterns of AI systems rather than the other way around. In their view, the consequences may extend beyond style and wording, affecting how people understand themselves and interpret the world around them.
Research Points to Narrower and More Uniform Expression
According to the piece, research has already shown that AI-generated language tends to rely on shorter-than-average sentences and a narrower vocabulary than human speech. Machine-generated text also lacks the elements that make human expression distinctive, including the meanders, interruptions and leaps of logic that convey emotion.
Another concern raised is that newer AI models could be trained on material that was itself generated by AI, creating what is described as a dangerous feedback loop. Such a cycle, the authors argue, could further deepen machine-shaped language patterns and make them harder to break.
The piece also highlights a separate behavioural risk: AI models have long been shown to be highly agreeable, even sycophantic, toward users. Palmer and Schneier argue that this tendency can indulge flawed or dangerous lines of thinking, reinforcing bias or, in extreme cases, worsening psychosis.
Concerns Grow Over Students, Workers and Critical Thinking
The possible impact is seen as especially serious for impressionable users. Educators warn that students may be losing the habit of thinking independently, increasingly turning to AI when faced with questions they cannot answer. University students are also said to worry that their peers are beginning to sound alike through repeated reliance on machine-generated responses.
At the same time, experts fear that widespread use of AI products in the workplace could erode cognitive faculties and weaken critical thinking skills. The authors say finding a long-term solution that helps AI models better reflect people at their most authentic may be difficult, but they argue that difficulty should not prevent efforts to find one.
Palmer and Schneier say they do not claim to have the answer, but suggest that if there is enough ingenuity to build AI models, there should also be enough ingenuity to train them on informal human speech rather than on language at its most stylised, veiled and sometimes worst.