A War on Bots? OpenAI Weighs Biometric Social Network for Real-Human Verification

The420.in Staff

Artificial intelligence major OpenAI is quietly exploring a radical rethink of social media — a platform designed not for maximum reach, but for verified humanity. The company is working on an early-stage social network concept that would aim to eliminate automated and fake accounts by relying on biometric proof of personhood, according to people familiar with the project.

The idea is straightforward but controversial: a social network where every account is tied to a real individual, verified at the point of entry, rather than through phone numbers, emails or behavioural signals that can be gamed by bots. Internally, the project is being discussed as a “real humans only” platform, with humans as participants rather than targets for automated engagement.

From Face ID to iris scans

Sources say the OpenAI team has considered requiring users to verify themselves using Apple’s Face ID or an iris-scanning device known as the World Orb. The Orb captures a person’s iris pattern to generate a unique digital identifier that can confirm a user is human without relying on usernames or passwords.
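
In rough terms, the flow reduces a biometric template to an opaque identifier that a platform can check without ever handling a username, a password or the raw scan. The short Python sketch below is purely illustrative: the function names and the salted hash are assumptions made for this article, not the actual World or OpenAI design, which is understood to rely on far more sophisticated cryptography and to keep biometric data off the platform's servers.

import hashlib

def iris_to_identifier(iris_template: bytes, salt: bytes) -> str:
    # Illustrative only: collapse a biometric template into an opaque,
    # fixed-length identifier. A real proof-of-personhood system would not
    # use a plain salted hash, and the raw template would never leave the device.
    return hashlib.sha256(salt + iris_template).hexdigest()

def is_verified_human(identifier: str, issued_identifiers: set[str]) -> bool:
    # The platform only ever sees the identifier, never the scan itself.
    return identifier in issued_identifiers

# At enrolment, the scanning device issues an identifier...
issued = {iris_to_identifier(b"example-iris-template", salt=b"demo-salt")}

# ...which the social network later checks at sign-in, with no password involved.
print(is_verified_human(iris_to_identifier(b"example-iris-template", b"demo-salt"), issued))  # True
print(is_verified_human(iris_to_identifier(b"someone-else-entirely", b"demo-salt"), issued))  # False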

The World Orb is operated by Tools for Humanity, a company co-founded by Sam Altman, who also serves as its chairman. If adopted, this approach would mark the first attempt by a major social platform to use biometric verification at scale.

Such a system, proponents argue, would virtually eliminate bot networks, impersonation accounts and large-scale manipulation campaigns. Critics, however, warn that biometric data is permanent and sensitive, raising serious concerns about misuse, surveillance and long-term privacy risks.

How it would differ from today’s platforms

Most mainstream social networks — including Facebook and LinkedIn — rely on indirect identity checks such as phone numbers, email addresses, network behaviour and moderation tools. None currently mandates biometric verification as a condition for participation.

OpenAI believes this gap has allowed automated accounts to flourish. By tying each account to a single human identity, the company hopes to make large-scale bot operations economically and technically unviable.
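
The economics follow from a simple constraint: if each verified identifier can back only one account, a bot farm needs one real, enrolled human per account rather than a disposable email address or phone number. The toy Python registry below is hypothetical, not OpenAI's system, but it shows that one-account-per-person rule in miniature.

class PersonhoodRegistry:
    # Toy one-account-per-person registry, for illustration only.
    def __init__(self) -> None:
        self._accounts: dict[str, str] = {}  # personhood identifier -> account handle

    def register(self, personhood_id: str, handle: str) -> bool:
        # Reject any identifier that already backs an account.
        if personhood_id in self._accounts:
            return False
        self._accounts[personhood_id] = handle
        return True

registry = PersonhoodRegistry()
print(registry.register("person-001", "alice"))    # True: first account for this human
print(registry.register("person-001", "alice_2"))  # False: a second account needs a second human

Under that rule, a 10,000-account bot network would require 10,000 distinct enrolled people, which is the kind of barrier the company reportedly hopes will make such operations unviable.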

Privacy researchers caution that while the intent may be to protect online discourse, centralising biometric identity could create new vulnerabilities. An iris scan, unlike a password, cannot be changed if compromised — a fact that has made regulators and civil-liberty advocates wary of biometric systems.

X, bots and the “dead internet” concern

The project emerges against the backdrop of worsening bot activity on major platforms, particularly X. Automated accounts have been linked to cryptocurrency scams, misinformation campaigns, trend manipulation and the artificial amplification of extremist or divisive content.

Altman himself has repeatedly voiced frustration with the quality of online discourse. In recent posts, he argued that AI-driven accounts have made social platforms feel increasingly “fake”, echoing concerns associated with the so-called Dead Internet Theory — the idea that non-human activity now dominates large parts of the web.

AI-assisted content, human-verified voices

According to sources, OpenAI’s social network would still allow extensive use of AI tools for content creation, including images and videos. In that sense, it would resemble platforms like Instagram, which already integrates AI-generated media.

The distinction, however, would be identity clarity. While content might be synthetic, the speaker behind it would not be. Ensuring that users know who — or what — they are interacting with is central to the platform’s philosophy.

A tough competitive landscape

Any OpenAI-backed social network would enter a crowded market dominated by Meta’s Instagram and Threads, X, TikTok and newer entrants such as Bluesky. Each already commands massive user bases and entrenched creator ecosystems.

That said, OpenAI has demonstrated an unusual ability to drive consumer adoption. ChatGPT crossed 100 million users within two months of launch, while its video generation app Sora recorded one of the fastest early download curves in consumer AI.

Technology versus trust

Policy experts say OpenAI’s experiment could force a global debate on what identity should mean online. While a bot-free internet is widely seen as desirable, the methods used to achieve it may redefine privacy norms, digital freedom and platform governance.

Whether OpenAI proceeds or pivots, the concept underscores a growing reality: as AI floods the internet with synthetic voices, proving humanity may become the most valuable credential online.

About the author – Rehan Khan is a law student and legal journalist with a keen interest in cybercrime, digital fraud, and emerging technology laws. He writes on the intersection of law, cybersecurity, and online safety, focusing on developments that impact individuals and institutions in India.
