Ghibli Glam or Privacy Scam? How AI is Stealing Your Face Without You Knowing

By Titiksha Srivastav - Assistant Editor

Social media platforms are currently flooded with Ghibli-style AI-generated images, created with the GPT-4o image generation built into OpenAI’s ChatGPT. From Facebook and Instagram to X, users are enthusiastically sharing their AI-transformed pictures.

However, in the excitement of creating artistic versions of themselves, many are unknowingly handing over their facial data to AI companies. This also extends to photos of their families, including young children, raising serious concerns about privacy and biometric data security.

Your Face is Being Collected Every Day

This trend is not limited to Ghibli-style images. People hand facial data to AI companies every day, often without realizing it: unlocking phones, tagging pictures on social media, or accessing various digital services.


Whenever users upload images online or grant apps access to their camera, they often overlook the potential risks. AI companies scan and store facial features, creating a digital footprint that is even more valuable than passwords or credit card information. Unlike a password, which can be changed, a compromised facial identity remains exposed permanently.

One of the biggest issues is the casual approach people take toward digital security. Despite multiple warning signs, many continue to share their biometric data without concern.

A major example is the Clearview AI controversy, where the company was accused of scraping three billion images from social media, news websites, and public records without consent. These images were then compiled into a database and sold to law enforcement agencies and private firms.

Similarly, in May 2024, Australian company Outabox suffered a data breach that exposed the facial scans, driving licenses, and addresses of 1.05 million people.

This data was later leaked on a platform called ‘Have I Been Outaboxed,’ leading to identity theft, impersonation, and fraud complaints. Even facial recognition systems used in retail stores to prevent shoplifting have become prime targets for hackers.

Once stolen, this data often ends up on the dark web, enabling synthetic identity fraud and deepfake-related scams.

According to a report, the global market for facial recognition technology (FRT) is projected to reach $5.73 billion by 2025 and grow at a CAGR of 16.79%, hitting $14.55 billion by 2031.
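The two projections are internally consistent: compounding the 2025 estimate at the stated growth rate over six years reproduces the 2031 figure. A quick sanity check:

```python
# Sanity check on the cited market figures:
# $5.73B in 2025, growing at a 16.79% CAGR through 2031 (6 years).
start, cagr, years = 5.73, 0.1679, 6
projected = start * (1 + cagr) ** years
print(f"${projected:.2f}B")  # ≈ $14.54B, in line with the reported $14.55B
```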

Tech giants have been accused of using people’s images to train AI models without disclosing how that data is handled. Meanwhile, reverse image search services let anyone trace a person’s online presence from a single photograph, increasing the risk of stalking and other privacy violations.


If you want to safeguard your biometric data, the first step is to stop engaging with AI-generated image trends like the Ghibli-style photos.

Additionally, avoid uploading high-resolution images on social media and opt for PINs or passwords instead of facial recognition for unlocking devices.

In an era where AI advancements blur the lines between creativity and exploitation, safeguarding personal data—especially biometric information—has become more crucial than ever.

While the appeal of AI-generated Ghibli-style images is undeniable, users must be aware of the hidden risks. Every uploaded photo contributes to a growing database that companies can use, manipulate, or even monetize without consent.

Simple steps like avoiding high-resolution image uploads, disabling unnecessary camera access, and opting for stronger authentication methods can make a significant difference.
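One of those steps, downscaling a photo and stripping its metadata before it leaves your device, can be sketched in a few lines. This is only an illustration using the third-party Pillow library; the `prepare_for_upload` helper and the 1024-pixel limit are arbitrary choices, not a standard:

```python
from io import BytesIO

from PIL import Image  # third-party: pip install Pillow


def prepare_for_upload(path, max_side=1024):
    """Return JPEG bytes of a downscaled copy with metadata dropped.

    Copying pixel data into a fresh Image discards EXIF tags
    (GPS coordinates, device model, timestamps) the original carries.
    """
    img = Image.open(path).convert("RGB")
    img.thumbnail((max_side, max_side))    # shrink in place, keeps aspect ratio
    clean = Image.new("RGB", img.size)
    clean.putdata(list(img.getdata()))     # copy pixels only, no metadata
    buf = BytesIO()
    clean.save(buf, format="JPEG", quality=85)
    return buf.getvalue()
```

A lower-resolution, metadata-free copy gives facial recognition systems less to work with, though it is a mitigation rather than a guarantee.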

Awareness is the first step toward digital security—before participating in AI trends, we must ask ourselves: Is a fun, stylized image worth risking our privacy forever?
