In a stark warning about the growing threat of artificial intelligence-driven cybercrime, a 65-year-old woman in the capital was duped of ₹2 lakh after fraudsters used a cloned version of her daughter’s voice to stage a fake kidnapping call. The caller, sobbing and claiming to be in danger, pressured her to transfer money immediately. The family later confirmed the daughter was safe at home and the call was entirely fabricated.
Investigators say the case reflects a rapidly emerging pattern in which criminals use short voice samples from social media to create near-identical voice replicas. By posing as relatives in distress, they exploit emotional panic and push victims into making instant digital payments.
Seconds of audio enough to build a fake caller
Cyber experts explain that fraudsters first scan the social media profiles of potential targets and download videos or voice clips of family members. With just 3–5 seconds of clear audio, AI tools can generate a realistic voice model. Calling from an unknown number, the scammer then spins an urgent story, typically involving an accident, arrest or kidnapping.
To heighten credibility, callers use crying sounds, background noise and phrases such as “send money now or it will be too late,” leaving victims with no time to verify. Payments are usually demanded through UPI or bank transfer, ensuring quick fund movement.
Women and the elderly most vulnerable
Cybercrime analysts note that elderly individuals and women living alone are the primary targets, as scammers rely on emotional manipulation rather than technical deception. Projections suggest AI-enabled fraud cases could rise by around 40% in 2026, driven by easy access to voice-cloning tools and widespread social media exposure.
The trend is not limited to India. In the United States, people aged 60 and above lost billions of dollars to similar scams in 2024. Several cases involved parents receiving fake kidnapping calls in their children’s voices, underscoring the global scale of the threat.
Voice recognition no longer reliable
Authorities warn that AI has made scam calls far more convincing. Recognising a familiar voice, once considered a basic safety check, can no longer be trusted on its own. Officials now recommend multi-step verification before responding to any emergency request involving money.
Cybersecurity teams stress that any urgent financial demand over the phone should be treated as suspicious until independently verified.
Safety measures families should adopt
Experts recommend a set of simple but critical precautions:
- Create a family code word: In emergencies, only the person who knows the agreed secret word should be trusted.
- Call back on the known number: Disconnect and contact the relative directly using their saved phone number.
- Ask personal questions: Use details only the real person would know to test the caller.
- Tighten social media privacy settings: Avoid posting clear voice recordings or long public videos.
- Never share banking credentials: UPI PINs, OTPs, CVV numbers and passwords must never be disclosed over calls.
Victims are advised to immediately report suspected fraud to the cybercrime helpline 1930, as early reporting increases the chances of freezing and recovering the transferred amount.
Financial loss and psychological trauma
Experts say the impact of such scams extends beyond monetary damage. Elderly victims often experience guilt, anxiety and long-term fear of phone calls. Families are being urged to maintain regular contact with senior members and raise digital awareness to reduce vulnerability.
With AI tools becoming more sophisticated and accessible, cybercriminal tactics are evolving at an unprecedented pace. Authorities emphasise that verification before payment must become a non-negotiable rule, as voice alone can no longer be considered proof of identity in emergency situations.
About the author – Ayesha Aayat is a law student and contributor covering cybercrime, online frauds, and digital safety concerns. Her writing aims to raise awareness about evolving cyber threats and legal responses.
