As governments push websites to verify users’ ages through facial scans and ID uploads, data breaches at platforms like Discord and Tea reveal the mounting privacy costs of online safety laws.
A Digital Gatekeeper With Hidden Costs
Across the internet, websites are increasingly asking users to prove their age — not just with a checkbox or a birthdate, but with facial scans, photo IDs, or even AI-driven video verification. The move, driven by sweeping online safety laws, is meant to protect children and restrict access to harmful content. But the process has also created a new kind of vulnerability: vast repositories of sensitive personal data that are proving irresistible to hackers.
In October 2025, Discord, the social media and gaming platform, disclosed a breach that exposed the photo IDs of around 70,000 users worldwide. The company said the data had been accessed through a third-party provider. Only a few months earlier, in July, a similar breach hit Tea, a women’s safety app that relies on selfie and photo-ID verification. Hackers reportedly leaked users’ photos along with private messages.
The incidents, though separated by context and purpose, reveal the same fault line: a system meant to ensure trust online is instead testing the limits of data privacy and security.
The Anatomy of an Age Check
Most websites now employ one of several verification methods. AI-powered systems can estimate a user’s age from a selfie, while more traditional models require users to upload scans of official documents like passports or driver’s licenses. Some platforms even ask for a verified credit card.
The verification process itself isn’t inherently flawed — but the scale and sensitivity of the data it collects make it a prime target. Cybercriminals view these databases as treasure troves of high-value personal information that can be repurposed for identity theft, fraud, and even deepfake creation.
Discord, for instance, has said on its support site that it “does not permanently store personal identity documents or video selfies,” and that all verification data is deleted once age is confirmed. Yet, the breach raises uncomfortable questions about whether such assurances hold up in practice, especially when third-party services are involved.
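That “verify, then delete” promise can be pictured with a short sketch. The Python below is a hypothetical illustration, not any platform’s actual code: estimate_age_from_selfie stands in for a vendor’s AI model, and the only record kept is a hashed user reference plus a pass/fail result, with the raw selfie never written to storage.

```python
import hashlib
from dataclasses import dataclass
from datetime import datetime, timezone

MINIMUM_AGE = 18  # threshold required by the relevant regulation


def estimate_age_from_selfie(image_bytes: bytes) -> float:
    """Stand-in for a vendor's AI age-estimation model (hypothetical)."""
    # A real deployment would pass the image to a facial-analysis service.
    return 21.0  # dummy value so the sketch runs end to end


@dataclass(frozen=True)
class AgeAttestation:
    """The only record retained: a pass/fail result, never the image."""
    user_ref: str           # hashed reference, not the raw account ID
    over_minimum_age: bool
    checked_at: str


def verify_and_discard(user_id: str, image_bytes: bytes) -> AgeAttestation:
    """Run the age check, keep only the outcome, and drop the selfie."""
    try:
        estimated_age = estimate_age_from_selfie(image_bytes)
        attestation = AgeAttestation(
            user_ref=hashlib.sha256(user_id.encode()).hexdigest(),
            over_minimum_age=estimated_age >= MINIMUM_AGE,
            checked_at=datetime.now(timezone.utc).isoformat(),
        )
    finally:
        # The biometric payload is never written to disk or logged.
        del image_bytes
    return attestation


if __name__ == "__main__":
    result = verify_and_discard("user-12345", b"<selfie bytes>")
    print(result.over_minimum_age)  # True or False; no image is retained
```

Even under this pattern, the weakness the breaches expose remains: if a third-party service handling the request caches, logs, or backs up the image before deletion, the assurance collapses.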
When Safety Tools Become Security Risks
The recent hacks underscore how complex and fragile the modern data chain has become. Even the most trusted verification vendors can be the weak link. Similar breaches at the UK Ministry of Defence, the Co-op supermarket chain, and M&S have shown how third-party suppliers can become the entry point for cybercriminals.
The stakes are higher today because of advances in AI. Leaked selfies and ID scans can be weaponized through deepfake technology, allowing criminals to create convincing synthetic identities. These can then be used to open bank accounts, apply for loans, or manipulate facial recognition systems.
Such incidents also expose the limits of existing regulation. Discord and Tea both introduced stricter age verification measures to comply with new laws — including the UK’s Online Safety Act, France’s Security and Regulation of the Digital Space law, and the EU’s Digital Services Act. These frameworks reject self-declared age fields as inadequate, requiring verifiable checks instead. But they stop short of prescribing detailed cybersecurity standards, leaving enforcement murky.
The Regulation Paradox
In a recent press release, the UK’s Department for Science, Innovation and Technology acknowledged the growing risks tied to these checks. It advised companies to confirm users’ ages “without collecting or storing personal data, unless absolutely necessary.” This guidance mirrors the principles of the EU’s General Data Protection Regulation (GDPR), which emphasizes minimal data retention.
Yet, as the Discord and Tea breaches demonstrate, regulatory ideals are difficult to uphold when data flows across borders. Many verification providers are based outside the UK or EU, beyond the direct reach of domestic regulators such as the Information Commissioner’s Office (ICO) or Ofcom.
Experts warn that the implementation of age verification urgently needs review — not just in terms of compliance, but enforcement. Without stronger oversight and accountability, the very tools designed to make the internet safer could instead be eroding the foundation of digital trust.
