On February 10, the Ministry of Electronics and Information Technology notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, a dense legal instrument that nonetheless pivots on a deceptively simple idea: defining what synthetic media is, in law.
For the first time, Indian internet regulation formally recognizes “synthetically generated information” as a distinct category of content—audio, visual or audio-visual material that is algorithmically created or altered to appear real. The definition is deliberately expansive, covering voice clones, AI-generated videos and manipulated images, while carving out limited exceptions for good-faith editing, accessibility tools and routine document preparation.
By embedding this definition directly into the 2021 intermediary rules, the government has ensured that synthetic content is no longer a peripheral concern but a core regulatory object, treated on par with other forms of unlawful online information. In effect, deepfakes are no longer just a technical phenomenon; they are now a legally cognizable risk.
From Passive Hosts to Active Gatekeepers
The amendments mark a decisive shift in how platforms are expected to behave. Intermediaries that enable the creation or dissemination of synthetic media are now required to deploy “reasonable and appropriate technical measures,” including automated tools, to prevent the generation or circulation of unlawful synthetic content.
The rules go further. Where synthetic content is not illegal per se, platforms must ensure it is clearly and prominently labelled: visible disclosures for video, and audio warnings prefixed to sound-based content. Crucially, such material must also carry permanent embedded metadata or provenance markers, unique identifiers that trace the content back to the platform or tool used to create it.
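The rules do not prescribe a technical format for these provenance markers; industry standards such as C2PA content credentials are one plausible implementation path. The Python sketch below is a minimal illustration of the general idea, not the mandated mechanism: it derives a unique identifier, binds it to the media bytes, and records the originating platform and tool. The function name and field layout are hypothetical.

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone

def make_provenance_record(content: bytes, platform_id: str, tool_id: str) -> dict:
    """Build an illustrative provenance record for a piece of synthetic media.

    The schema here is hypothetical: the amended rules require a permanent,
    unique identifier tracing content to its originating platform or tool,
    but do not mandate any particular format.
    """
    return {
        "record_id": str(uuid.uuid4()),                         # unique marker ID
        "content_sha256": hashlib.sha256(content).hexdigest(),  # binds record to the media bytes
        "platform": platform_id,                                # platform that hosted or generated it
        "tool": tool_id,                                        # model or editor that produced it
        "declared_synthetic": True,                             # the disclosure itself
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }

# Example: label a generated audio clip before publication.
clip = b"...synthetic audio bytes..."
record = make_provenance_record(clip, platform_id="example-platform", tool_id="voice-model-v2")
print(json.dumps(record, indent=2))
```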
Equally significant is what platforms may not do: they are prohibited from enabling the removal or suppression of these labels and metadata. The obligation is not merely to inform users, but to preserve the integrity of disclosure itself, making synthetic origin resistant to erasure.
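How a label is made "resistant to erasure" is likewise left to platforms, but one common engineering pattern is to make tampering detectable rather than physically impossible: the platform cryptographically signs the content together with its provenance record, so that stripping or editing the label breaks verification. The sketch below uses Python's standard-library hmac purely for illustration; a real deployment would more plausibly use asymmetric signatures so that third parties can verify without holding the platform's secret key.

```python
import hmac
import hashlib
import json

PLATFORM_KEY = b"demo-secret"  # illustrative only; a real system would use asymmetric keys

def seal(content: bytes, record: dict) -> str:
    """Sign the media and its provenance record together, so neither can be altered alone."""
    payload = content + json.dumps(record, sort_keys=True).encode()
    return hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()

def verify(content: bytes, record: dict, seal_hex: str) -> bool:
    """Return True only if both the media bytes and the label are intact."""
    return hmac.compare_digest(seal(content, record), seal_hex)

clip = b"...synthetic audio bytes..."
record = {"declared_synthetic": True, "platform": "example-platform"}
tag = seal(clip, record)

assert verify(clip, record, tag)         # intact: verification passes
record.pop("declared_synthetic")         # label stripped
assert not verify(clip, record, tag)     # tampering is detectable
```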
This approach signals a regulatory philosophy that treats platforms less as neutral conduits and more as infrastructural gatekeepers, responsible for shaping how authenticity is signalled in the digital public sphere.
Speed, Surveillance and the Compression of Time
Alongside new content duties, the amendments quietly compress the timelines within which intermediaries must act. In several provisions, response windows that once ran into days or dozens of hours are cut to single-digit hours. Orders from law enforcement authorities to disable access to unlawful content must now be acted upon with far greater speed, and authorised officers issuing such directions must be specifically designated in writing.
The rules also sharpen the consequences for users. Platforms are instructed to warn users, at least once every three months, that violations involving unlawful content, including synthetic media, may lead not only to account termination but also to criminal liability under a widening array of statutes, from election law to sexual offences legislation.
In cases involving victims, intermediaries may be required to identify the violating user and disclose that identity in a manner that preserves, rather than vitiates, the underlying evidence. The balance between privacy, due process and victim redress is thus recalibrated toward faster intervention and greater traceability.
Regulating Trust in a Synthetic Public Square
Read together, the amendments reflect an anxiety that runs deeper than any single technology. They are an attempt to regulate trust itself—trust in images, in voices, in digital records, and ultimately in the online public square where political speech, personal reputation and criminal evidence now routinely intersect.
By insisting on labelling, metadata and proactive detection, the government is betting that technical provenance can restore some measure of certainty in an environment increasingly shaped by generative AI. Critics are likely to question how these obligations will be implemented in practice, especially by smaller platforms, and how automated enforcement will avoid over-reach or error.
But the direction of travel is unmistakable. With these rules, India has moved beyond treating deepfakes as an episodic threat and has instead woven synthetic media into the permanent architecture of internet regulation—an acknowledgment that in the years ahead, the authenticity of what we see and hear online will no longer be assumed, but regulated.
