New Delhi | India’s head coach Gautam Gambhir has approached the Delhi High Court, taking a strong legal stand against the misuse of artificial intelligence through deepfakes and digital impersonation. In a civil suit, he has sought strict action to prevent the unauthorised use of his name, image and voice.
The development comes amid a sharp rise in AI-driven frauds involving fake videos, voice cloning and face-swapping technologies. According to his legal team, fabricated content using his identity has surged across social media platforms since late 2025, falsely portraying him as making statements he never issued.
Investigations have revealed that one such deepfake video, falsely depicting his resignation, went viral and garnered millions of views. These manipulated videos were not only used to spread misinformation but were also allegedly monetised.
A total of 16 parties have been named as defendants in the case. These include multiple social media accounts as well as e-commerce platforms such as Amazon and Flipkart. Major technology firms including Meta Platforms, Google and YouTube have also been made parties to the suit.
Government departments related to information technology and telecommunications have also been included to ensure enforcement of any court directives. This highlights that the issue goes beyond individual grievance and touches upon the broader challenge of regulating digital misuse in an AI-driven ecosystem.
Gambhir has sought ₹2.5 crore in damages and requested the court to order the immediate removal of all objectionable content, the blocking of the concerned accounts, and a permanent injunction against any future misuse of his identity. He has also urged an expedited hearing in the matter.
The case has been filed under the Copyright Act, 1957, the Trade Marks Act, 1999, and the Commercial Courts Act, 2015. It also cites key judicial precedents that recognise and protect personality rights.
Legal experts believe the case could set an important precedent in defining the contours of “digital personality rights” in the age of AI. While emerging technologies have made content creation easier, they have simultaneously amplified the risks of misuse.
In recent months, incidents involving deepfakes have risen significantly, targeting not just public figures but also ordinary citizens. Fake audio and video content is increasingly being used for misinformation, extortion, fraud and reputational damage.
Experts warn that the threat of deepfakes extends beyond individual harm, potentially undermining media credibility, public trust and even democratic processes. This has intensified calls for stronger regulations, stricter enforcement mechanisms and greater accountability from technology platforms.
All eyes are now on how the court responds to the plea. The outcome is expected to play a crucial role in shaping future legal and regulatory frameworks around AI misuse and digital identity protection in India.