Deepfake Trap: Over 1.2 Million Children Victimised Worldwide in a Year

The420.in Staff

The rapid expansion of artificial intelligence has pushed child online safety into uncharted and dangerous territory. A new global study has revealed that more than 1.2 million children worldwide fell victim to deepfake abuse over the past year, underscoring how emerging technologies are being weaponised against minors at an alarming scale.

The findings come from a joint study conducted by UNICEF and Interpol, which examined the growing misuse of AI tools to manipulate children’s images and videos into explicit or exploitative content. The report warns that deepfake abuse is no longer a fringe cybercrime but a fast-spreading global threat with long-term psychological, social and economic consequences for young victims.

According to the study, authentic photographs and videos of children are increasingly being altered using AI-powered tools to create sexually explicit material. In many cases, victims are subjected to prolonged harassment, blackmail and intimidation, often without any effective mechanism for quick redress or content removal.

Study Across 11 Countries, Over 50,000 Children Surveyed

The UNICEF–Interpol study was carried out across 11 countries between late 2023 and early 2025, covering regions in Asia, Africa, Latin America and Europe. More than 50,000 children aged 12 to 17, along with their parents, were surveyed to assess the scale, nature and impact of deepfake-related abuse.

The data shows that Asia and Africa recorded the highest number of reported cases, a trend experts link to expanding internet access, limited digital literacy and weak regulatory oversight. In several countries, the study found that one in every 25 children had been affected by some form of deepfake exploitation.

Researchers noted that the availability of low-cost or free AI image-generation tools, combined with minimal safeguards on social media and messaging platforms, has significantly lowered the barrier for perpetrators to create and circulate harmful content.

Girls Disproportionately Targeted, Nudification a Major Concern

The report highlights a stark gender imbalance among victims. Girls accounted for 64% of reported cases, with most subjected to nudification, a process in which clothed images are digitally altered to make them appear nude. Such content is often used to shame, threaten or coerce victims into silence.

Boys, who made up 36% of victims, were more frequently targeted through financial sextortion, where manipulated images or videos were used to extort money or other concessions. In many instances, perpetrators operated across borders, complicating investigations and delaying takedowns.

The study also found that deepfake material is widely circulated across mainstream social media platforms, private messaging apps and, in more severe cases, dark web networks—making identification and removal both time-consuming and technically challenging.

Weak Laws Fuel the Crisis

One of the report’s most concerning findings is that over 90% of cases emerged in countries lacking robust AI-specific or digital crime legislation. Existing legal frameworks, the study notes, are often ill-equipped to address crimes involving synthetic media, leaving victims with limited legal recourse.

The report further warns that, for more than half of online content globally, distinguishing real from fake has become increasingly difficult, eroding trust and complicating law-enforcement investigations.

Mental Health Fallout and Social Impact

Experts involved in the study stressed that the damage caused by deepfake abuse extends well beyond the digital realm. Many affected children exhibited symptoms of depression, anxiety, social withdrawal and declining academic performance. In extreme cases, families faced social stigma, prolonged legal battles and lasting reputational harm.

Call for Global Action

UNICEF and Interpol have urged governments to enact clear, enforceable laws targeting AI-driven crimes, strengthen rapid complaint-redress mechanisms and invest in digital literacy programmes for children and parents. Technology companies have also been called upon to deploy stronger deepfake-detection tools and introduce additional safeguards for child-related content.

The report warns that without swift and coordinated global action, deepfake abuse could soon emerge as one of the most severe digital threats facing children worldwide.

About the author – Ayesha Aayat is a law student and contributor covering cybercrime, online frauds, and digital safety concerns. Her writing aims to raise awareness about evolving cyber threats and legal responses.
