The Indian government has sharply reduced the time online platforms have to respond to orders involving artificial intelligence–generated content, reflecting growing concern that deepfakes and synthetic media can spread faster than existing safeguards can contain them.
Under amendments notified on February 10 to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, social media companies will now have just three hours to take down AI-generated or synthetic content once it is flagged by a court or a competent authority. The earlier window allowed up to 36 hours.
The revised rules, issued by the Ministry of Electronics and Information Technology, will come into force on February 20, 2026. Officials say the accelerated timeline is intended to prevent viral harm—particularly in cases involving impersonation, misinformation, or non-consensual imagery—where delays can render enforcement meaningless.
Defining Deepfakes and Treating Them as Information
For the first time, the rules formally define “synthetically generated information,” encompassing audio, visual, or audio-visual material created or altered using artificial intelligence in a way that makes it appear real or authentic. Importantly, the amendments clarify that such content will be treated as “information” under existing IT rules, placing it on the same legal footing as text posts or videos when determining whether an unlawful act has occurred.
The definition excludes routine edits, accessibility enhancements, and good-faith educational or design work, a carve-out meant to avoid overreach into legitimate uses of AI tools. But officials stressed that the intent is to close gaps that have allowed harmful deepfakes to circulate without clear accountability.
By bringing AI-generated content squarely within the regulatory framework, the government is signaling that novelty will no longer be a defense. Whether created by a human or a machine, content that violates the law will be judged by the same standards.
Labels, Metadata and Platform Responsibility
One of the most consequential changes is the introduction of mandatory labeling. Platforms that enable the creation or sharing of synthetic content must ensure it is “clearly and prominently” marked as AI-generated and, where technically feasible, embedded with permanent metadata or identifiers.
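What such embedding might look like is easiest to see in code. The sketch below is a minimal illustration in Python, using the Pillow imaging library to write a provenance marker into a PNG's text chunks. The rules prescribe no particular format, and field names such as "synthetic-content" are assumptions made for this example, not anything mandated.

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_synthetic(src_path: str, dst_path: str, generator: str) -> None:
    """Re-save an image with an embedded AI-provenance marker.

    Illustrative only: the field names below are hypothetical,
    not taken from the amended rules.
    """
    image = Image.open(src_path)
    metadata = PngInfo()
    # PNG tEXt chunks travel with the file itself, so the marker
    # survives ordinary copying (though not re-encoding or screenshots).
    metadata.add_text("synthetic-content", "true")
    metadata.add_text("generator", generator)
    image.save(dst_path, pnginfo=metadata)

def read_label(path: str) -> dict:
    """Return any text metadata embedded in a PNG."""
    with Image.open(path) as image:
        return dict(getattr(image, "text", {}))

Plain metadata of this kind is fragile, which is why provenance standards such as C2PA pair the marker with a cryptographic signature; the "permanent" identifiers the rules envisage would likely need similar hardening.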
Once applied, these labels or markers cannot be removed or suppressed by intermediaries. The rules also require platforms to deploy automated tools to proactively prevent the circulation of illegal AI content, including material related to child sexual abuse, false or forged documents, explosives, impersonation, or deceptive and non-consensual media.
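The amendments do not specify what these automated tools must look like. One building block widely used across the industry is hash matching against curated lists of known illegal files, and the Python sketch below illustrates that idea at its simplest. It is an assumed approach shown for illustration only; real deployments typically add perceptual hashing so that re-encoded or lightly edited copies still match, which exact digests cannot do.

import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, streamed in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def should_block(path: str, known_bad_digests: set[str]) -> bool:
    """Reject an upload whose digest matches a known-bad entry.

    The blocklist contents are hypothetical; in practice such lists
    are maintained by platforms and child-safety organizations.
    """
    return sha256_of(path) in known_bad_digests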
The obligations apply across major platforms, including X and Instagram, which have faced mounting scrutiny globally over their handling of manipulated media. Grievance redressal timelines for users have also been shortened, increasing pressure on companies to respond quickly not only to government orders but also to individual complaints.
Balancing Innovation, Speech and Harm
The amendments arrive amid a broader international debate over how governments should regulate artificial intelligence without stifling innovation or free expression. In India, officials have repeatedly pointed to the risks posed by deepfakes during elections, communal tensions, and financial frauds, arguing that existing rules were designed for a pre-AI internet.
By tightening takedown deadlines and mandating labels, the government is placing greater responsibility on intermediaries to act as gatekeepers—an approach that critics say could encourage over-censorship, while supporters argue it is necessary in a high-speed digital environment.
What is clear is that the regulatory posture has shifted. AI-generated content is no longer being treated as an edge case but as a central feature of the online ecosystem—one that, in the government’s view, requires faster enforcement, clearer identification, and fewer gray areas.
As the new rules take effect later this month, their real impact will depend on how platforms implement them in practice—and how courts interpret the balance between rapid takedowns and the rights of users in an era where reality itself can be algorithmically manufactured.
About the author — Suvedita Nath is a science student with a growing interest in cybercrime and digital safety. She writes on online activity, cyber threats, and technology-driven risks. Her work focuses on clarity, accuracy, and public awareness.
