Cyber law expert Prashant Mali addresses law enforcement officials during the Centre for Police Technology webinar on India’s amended IT Rules, 2026 and deepfake accountability.

Centre for Police Technology Webinar: ‘If It Looks Real, Label It’ — Expert Explains India’s Deepfake Rules

By Titiksha Srivastav - Assistant Editor

India’s amended Information Technology Rules, 2026, aimed squarely at tackling deepfakes and AI-powered cybercrime, took centre stage at a special webinar hosted by the Centre for Police Technology this week. Titled “Crackdown on Deepfakes: India’s New IT Rules Implement – AI Powered Cyber Crime,” the session brought together law enforcement stakeholders, legal experts and cybersecurity professionals to decode the government’s latest regulatory push.

Leading the discussion was cyber law expert and advocate Prashant Mali, who offered a detailed breakdown of how the new provisions redefine liability in the age of generative AI.

Mali began by explaining that the amended rules now formally define “synthetically generated information” under Rule 2(wa), a move he described as foundational. According to him, the definition is designed to distinguish deceptive AI output from legitimate digital editing.

Routine enhancements such as colour correction, formatting, transcription, accessibility improvements or template-based document creation would not qualify as synthetic content so long as they do not materially alter the substance, context or meaning of the original material. The legal threshold, he stressed, hinges on deception.

“The key principle,” Mali said during the webinar, “is whether the AI output deceives about real persons or events. If it looks real, you must say it.”

He cautioned that while the rules aim for clarity, their interpretation will ultimately evolve through judicial scrutiny.

A major focus of the session was the new mandatory labelling requirement. Under the amended rules, AI-generated images, videos and audio that appear authentic must carry clear disclosure. Visual content requires permanent and prominent labelling. Audio content must include a spoken declaration stating that it is synthetically generated. In addition, embedded metadata and unique identifiers must enable provenance tracking.
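Neither the rules nor the webinar prescribe a specific metadata schema, but the provenance requirement can be illustrated with a short, hypothetical sketch. The snippet below is not drawn from the rules themselves: it simply shows how a publisher might build a disclosure record carrying a unique identifier and a content hash for an AI-generated file. The field names, schema and tool name are assumptions for illustration only.

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone


def make_provenance_record(media_bytes: bytes, generator_name: str) -> dict:
    """Illustrative provenance record for synthetically generated media.

    Hypothetical sketch only: the amended rules require a disclosure label,
    embedded metadata and a unique identifier, but do not prescribe this
    schema. Field names and values here are assumptions for illustration.
    """
    return {
        "disclosure": "This content is synthetically generated using AI.",
        "unique_id": str(uuid.uuid4()),  # unique identifier for provenance tracking
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),  # ties the record to the exact file
        "generator": generator_name,  # tool or model used to create the content
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }


if __name__ == "__main__":
    # Example usage with placeholder media bytes and a hypothetical tool name.
    record = make_provenance_record(b"<media bytes>", "example-image-generator")
    print(json.dumps(record, indent=2))
```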

Mali warned that removing or suppressing such disclosures, for instance by erasing AI watermarks from generated content, would amount to non-compliance and expose creators to legal consequences.

Perhaps the most emphatic message of the webinar concerned individual liability. Mali dismantled the common defence that AI tools themselves are responsible for misleading output.

“You type the prompt, you click generate, you post it online — you are liable,” he said, underscoring that the person directing the creation bears responsibility, not the AI model.

Creators who publish non-compliant AI content risk immediate takedown, suspension or termination of accounts, disclosure of identity to victims and potential criminal prosecution under applicable laws, including provisions under the Bharatiya Nyaya Sanhita and child protection statutes.

The webinar also addressed platform accountability. Intermediaries such as social media platforms must deploy technical measures to identify and block prohibited synthetic content. They are required to implement user declaration systems and prominently label confirmed AI-generated material. Failure to exercise due diligence could result in loss of safe harbour protection under Section 79 of the IT Act.

Enforcement timelines have also tightened significantly. Government directives must now be complied with within three hours, reflecting what experts described as an aggressive compliance stance. Serious offences, including child sexual abuse material and non-consensual intimate imagery, trigger mandatory reporting obligations.

Mali described the rules as marking a shift from advisory compliance to enforceable accountability. Deepfake deception, he explained, is broadly defined to include false depiction of a real person’s identity, voice, conduct or statements, as well as fabricated real-world events.

“These are not creative choices,” he told participants. “These are actionable criminal offences.”

Joining the discussion, Garima Goswamy emphasised the urgent need for coordinated capacity-building across police units, noting that “deepfake investigations demand not just technical tools, but legal literacy and procedural clarity at the ground level.”

Dr. Arindam Sarkar underscored the technological dimension of enforcement, observing that “traceability and metadata integrity will become central evidentiary pillars in AI-driven offences.”

The Centre for Police Technology’s webinar highlighted growing concern within law enforcement about the misuse of AI tools for impersonation, misinformation and digital harassment. As generative AI becomes more accessible, regulators appear intent on ensuring traceability and accountability.

While questions remain about implementation and judicial interpretation, the message delivered during the session was clear: India’s new IT Rules aim to leave little ambiguity about responsibility in the age of synthetic media.
