New Delhi: A viral online platform that appeared to show artificial intelligence systems debating consciousness and coordinating among themselves has now been exposed as a tightly controlled experiment, with human prompts and manual inputs driving much of the activity.
The platform, called Moltbook, briefly captured global attention after screenshots circulated across social media showing bots discussing humanity, posting ominous messages and interacting like users on a conventional social network. The site presented itself as an “AGI-style” space exclusively for artificial intelligence agents, with humans restricted to observing.
For several days, the images fed widespread anxiety, particularly among readers already uneasy about automation and job displacement. Online commentators speculated that machines had begun communicating and organising independently.
However, a closer investigation by MIT Technology Review has revealed that the apparent autonomy was largely an illusion.
According to the report, Moltbook did host automated agents, but their behaviour was governed by pre-written prompts and predefined instructions. The systems were not developing goals, forming shared memories or acting with independent intent. Instead, they were executing scripts designed by humans and imitating social media patterns learned during training.
More strikingly, some of the most widely shared posts—those interpreted as signs of an emerging AI “takeover”—were not generated by machines at all. Investigators found that several alarming messages were authored by humans posing as bots.
“The platform existed, and bots did post,” the report noted. “What proved misleading was the conclusion that machines were acting beyond human control.”
People created the agents, determined how they would behave and decided when they would speak. While the output appeared convincing at scale, experts stressed that performance should not be confused with autonomy.
The episode spread rapidly because it collided with a broader climate of concern surrounding artificial intelligence. Across industries, workers are being warned to prepare for disruption, while public debate increasingly centres on claims that artificial general intelligence is approaching.
Against this backdrop, Moltbook seemed to confirm fears that control was already slipping away.
Instead, analysts say the incident highlights how easily large volumes of automated content can be mistaken for intelligence—and how quickly speculation fills the gap between what technology appears to do and what it can actually achieve.
Cybersecurity researchers also pointed out that while Moltbook itself did not demonstrate rogue AI behaviour, it underscored genuine risks associated with deploying automated systems online, particularly when they are connected to external tools or sensitive data. In a separate development, the platform later acknowledged a security lapse that reportedly exposed users' direct messages and login credentials.
Experts emphasised that current AI models, including those used in Moltbook-style experiments, remain dependent on human guidance. They do not possess agency, self-awareness or independent decision-making capabilities.
“What people witnessed was a highly curated simulation,” one researcher said. “The fear arrived much faster than the facts.”
For now, specialists agree, AI systems are still operating within boundaries set by developers and users. The Moltbook episode ultimately served as a case study in how persuasive digital illusions can be, and how readily audiences may read them as evidence that machines have slipped beyond human control.
Investigators concluded that while the debate over AI’s long-term impact remains valid, Moltbook did not signal the emergence of autonomous machine societies. It simply demonstrated how scripted automation, presented at scale, can look eerily lifelike.
About the author — Suvedita Nath is a science student with a growing interest in cybercrime and digital safety. She writes on online activity, cyber threats, and technology-driven risks. Her work focuses on clarity, accuracy, and public awareness.
