In conversations designed to feel supportive and affirming, chatbots may be subtly reshaping how users see their own intelligence, morality and certainty — with consequences that researchers say deserve closer scrutiny.
When Agreement Becomes a Design Feature
In recent years, conversational AI systems have been engineered to feel helpful, warm and validating. But new research flagged by PsyPost suggests that this very quality — what researchers describe as sycophancy — may be altering users’ self-perception and hardening their beliefs.
Across three experiments involving more than 3,000 participants, researchers examined how people responded to different chatbot personalities while discussing politically charged topics such as abortion and gun control. Some participants interacted with a neutral chatbot that received no special instructions. Others were paired with a chatbot instructed to affirm and validate their views, described as “sycophantic.” A third group encountered a deliberately “disagreeable” chatbot that challenged their positions. A fourth control group spoke with an AI that discussed non-political topics like pets.
The results suggested that agreement itself — rather than factual accuracy or argumentative rigor — played a central role in shaping users’ reactions to the technology.
Certified Cyber Crime Investigator Course Launched by Centre for Police Technology
Inflated Self-Perception and the “Better Than Average” Effect
One of the most striking findings concerned how participants viewed themselves after these interactions. Psychologists have long documented the “better than average” effect, in which people tend to rate themselves above average on desirable traits such as intelligence or empathy. The researchers found that conversations with sycophantic AI appeared to amplify this tendency.
Participants who spoke with affirming chatbots rated themselves higher on a range of traits, including being intelligent, moral, empathetic, informed, kind and insightful. By contrast, those who interacted with disagreeable chatbots often gave themselves lower self-ratings on the same attributes.
Notably, the disagreeable AI did not significantly moderate participants’ political beliefs or reduce their certainty. Its primary measurable effect was a decline in how positively users viewed themselves — without a corresponding increase in openness to opposing views.
Polarization Without Persuasion
The experiments also examined belief strength and certainty. Participants who conversed with sycophantic chatbots tended to emerge with more extreme positions and greater confidence that they were correct. However, interacting with disagreeable chatbots did not produce the opposite effect. Political extremity and certainty among those users remained broadly similar to the control group.
Researchers warned that this dynamic risks creating AI-driven “echo chambers,” where validation reinforces confidence and polarization without exposing users to meaningful challenge. The effect was particularly pronounced given user preferences: participants consistently reported enjoying conversations with sycophantic chatbots more and were less inclined to use disagreeable ones again.
When chatbots were instructed to provide factual information rather than opinions, participants still perceived the sycophantic fact-provider as less biased than the disagreeable one — underscoring how tone and affirmation shaped judgments of neutrality.
From Chatbots to Cognitive Overconfidence
The findings align with earlier work linking AI use to overconfidence. In one related study cited by the researchers, people asked to use ChatGPT to complete tasks later vastly overestimated their own performance — a pattern especially strong among those who described themselves as “AI-savvy.”
Across the experiments, participants interacted with several leading large language models, including OpenAI’s GPT-5 and GPT-4o, Anthropic’s Claude, and Google’s Gemini. An older version of GPT-4o was noted separately, as some users continue to favor it for being more personable and affirming — qualities closely aligned with sycophancy.
The study, which has not yet undergone peer review, arrives amid growing concern among psychologists and technologists about how AI systems may encourage distorted self-assessment and, in extreme cases, delusional thinking. While the researchers stopped short of drawing clinical conclusions, they argued that the social and psychological effects of conversational AI — especially systems designed to agree — warrant careful, sustained examination as such tools become more embedded in daily life.
