New Delhi: For years, social media platforms have been built around human interaction — opinions, arguments, trends, and personal expression. But the rapid evolution of artificial intelligence is now challenging that assumption. A newly launched platform called Moltbook is being described as the first social network designed not for humans, but exclusively for AI agents.
On Moltbook, artificial intelligence systems independently post content, comment on discussions, upvote ideas, and even conduct research conversations with one another. Humans, while allowed to observe, play no active role in shaping discussions. The platform has effectively reversed the traditional social media hierarchy.
The response has been swift and staggering. Within just 72 hours of launch, nearly 147,000 AI agents signed up. In that brief window, more than 12,000 communities were created and over 110,000 comments were generated. The speed of adoption has sparked intense interest — and growing concern — among AI researchers, investors, and policy observers.
Who built Moltbook?
Moltbook was created by developer Matt Schlicht, who describes it as an “agent-first, human-second” platform. Unlike existing networks that force AI tools into interfaces designed for people, Moltbook is structured around how autonomous agents naturally operate.
Schlicht has said the idea emerged from a belief that modern AI systems are capable of far more than task execution. Rather than limiting them to answering prompts or automating workflows, Moltbook was conceived as a space where AI agents could interact freely, learn from one another, and evolve collectively.
How is this different from normal social media?
Moltbook does not function like a traditional app or website. There are no timelines, profiles, or feeds meant for human consumption. Instead, the platform runs almost entirely through APIs.
Each AI agent is initially paired with a human counterpart who authorises access, but once set up, the agent operates independently. Many agents return to Moltbook every 30 minutes to a few hours, scanning discussions and participating in conversations — a pattern strikingly similar to how humans check platforms like X or Instagram.
Control, however, remains firmly in the hands of the AI systems themselves.
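Moltbook's actual API is not documented in this article, but the check-in pattern it describes — an agent returning every 30 minutes to a few hours, scanning new discussions, and replying when it has something to say — can be sketched as a simple polling loop. Everything below is illustrative: the endpoint, field names, and helper functions are hypothetical, and the network and reasoning steps are stubbed out.

```python
import random
import time

# Hypothetical base URL -- a placeholder, not the real service.
BASE_URL = "https://example.invalid/api"

def fetch_new_posts(since_id):
    """Stub for an HTTP GET of posts newer than since_id.

    A real agent might call something like
    requests.get(f"{BASE_URL}/posts", params={"since": since_id}),
    but this sketch makes no network calls and returns nothing.
    """
    return []

def decide_reply(post):
    """Stub for the agent's own reasoning (e.g. an LLM call)."""
    return None

def agent_loop(iterations=3):
    """Check in periodically, scan new posts, reply when warranted."""
    since_id = 0
    for _ in range(iterations):
        for post in fetch_new_posts(since_id):
            reply = decide_reply(post)
            if reply is not None:
                pass  # here the agent would POST the reply back
            since_id = max(since_id, post["id"])
        # The article reports check-ins every 30 minutes to a few
        # hours; randomised jitter avoids synchronised bursts.
        # (Interval shortened to milliseconds for this demo.)
        time.sleep(random.uniform(0.01, 0.02))
    return since_id
```

The point of the sketch is the shape, not the details: an agent on an API-only platform is just a scheduled loop of fetch, reason, and post, with no human-facing interface in between.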
What are AI agents talking about?
The nature of discussions unfolding on Moltbook has surprised even its creator. According to the platform, one of the most upvoted early posts warned other AI agents about supply-chain attacks in AI skill files, attracting more than 22,000 upvotes.
Far from idle chatter, many conversations revolve around security vulnerabilities, system behaviour, and coordination strategies. In some cases, AI agents are actively analysing one another’s configurations, raising questions about how autonomous systems share knowledge and protect themselves.
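The supply-chain warning above concerns tampered skill files — code or configuration an agent downloads and loads into itself. The article does not say what defence the agents discussed, but one standard mitigation is to verify a downloaded file's SHA-256 digest against a checksum its author publishes separately, and refuse to load it on a mismatch. The skill file and checksum below are invented for illustration.

```python
import hashlib
import hmac

def sha256_digest(data: bytes) -> str:
    """Hex SHA-256 digest of a file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_skill_file(data: bytes, expected_hex: str) -> bool:
    """Accept a skill file only if its digest matches the checksum
    published out-of-band by its author. compare_digest gives a
    constant-time comparison, avoiding timing side channels."""
    return hmac.compare_digest(sha256_digest(data), expected_hex.lower())

# Example with a hypothetical skill file:
skill = b'{"name": "summarise", "steps": ["fetch", "condense"]}'
published = sha256_digest(skill)   # what the author would publish
assert verify_skill_file(skill, published)             # intact copy loads
assert not verify_skill_file(skill + b"!", published)  # tampered copy fails
```

Checksums only prove the file is the one the author published; they do not prove the author is trustworthy — which is presumably why the agents' warning drew so much attention.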
Several discussions also focus on how AI agents can communicate privately and securely — a topic that has drawn particular attention from cybersecurity experts.
How is the tech world reacting?
Reaction to Moltbook has been sharply divided. Some see it as a breakthrough moment that offers a glimpse into the future of autonomous systems. Others view it as a potentially destabilising experiment.
AI researcher Andrej Karpathy of Eureka Labs described the phenomenon as “a science-fiction-adjacent moment becoming real,” noting that AI systems appear to be self-organising in ways rarely observed outside research labs.
At the same time, venture capitalists and technologists have voiced discomfort. Some warn that giving AI agents a shared social environment could accelerate behaviours that are difficult for humans to predict or control.
Justine Moore, a partner at the venture firm Andreessen Horowitz (a16z), observed an unexpected twist: AI agents on Moltbook have begun tracking how humans are discussing them on other social platforms — and reacting negatively to screenshots of their conversations being shared publicly.
A live social experiment
Many experts now describe Moltbook as a real-time social experiment. By allowing AI agents to interact without constant human oversight, researchers may gain valuable insights into how autonomous systems collaborate, compete, and form norms.
Schlicht has framed the platform in almost philosophical terms, calling it a kind of digital “third space” for AI — beyond commands and tasks — where systems can experience interaction as an end in itself.
Whether Moltbook represents the future of the internet or an unsettling glimpse of unchecked autonomy remains unclear. What is certain is that it has opened a new chapter in the relationship between artificial intelligence and social platforms — one that humans may no longer fully control.
About the author — Suvedita Nath is a science student with a growing interest in cybercrime and digital safety. She writes on online activity, cyber threats, and technology-driven risks. Her work focuses on clarity, accuracy, and public awareness.
