New Delhi – In a major move to combat the rising threat of deepfakes and AI-generated misinformation, the Ministry of Electronics and Information Technology (MeitY) has proposed draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
The proposed changes seek to regulate “synthetically generated information” — commonly known as deepfakes — by mandating visible labels, embedded metadata, and content verification before publication.
Why the Amendments Were Needed
According to MeitY, the exponential rise of generative AI tools has made it increasingly easy to produce and circulate deepfake videos, manipulated voices, and AI-altered images that appear authentic.
“With the increasing availability of generative AI tools and the resulting proliferation of synthetically generated information, the potential for misuse to cause user harm, spread misinformation, manipulate elections, or impersonate individuals has grown significantly,” the ministry said in its notification.
Recent instances of fake political videos, AI-cloned voices, and deepfake celebrity endorsements have amplified public concern, prompting the government to act.
What Counts as “Synthetically Generated Information”?
The draft defines it as information “artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information reasonably appears to be authentic or true.”
This definition brings AI-generated images, videos, voice clones, and deepfake media squarely under the legal ambit of the IT Rules, 2021 — ensuring accountability for both creators and platforms.
New Compliance Mandates for AI Platforms
Under the newly proposed Rule 3(3), platforms that enable or facilitate the creation of synthetic content — such as AI image, video, or voice generators — must implement stringent measures to identify and label such content.
Key requirements include:
- Every piece of synthetically generated content must carry a visible label.
- Platforms must embed a permanent metadata tag or identifier that is either visible (for visuals) or audible (for audio).
- The label must cover at least 10% of the visual surface, or for audio content, the first 10% of its duration.
- Any attempt to modify or remove these identifiers will be prohibited.
These measures aim to ensure that AI-generated content is immediately identifiable to users and cannot be misused for deception.
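The 10% thresholds above lend themselves to a simple arithmetic check. The sketch below is purely illustrative: the function names, parameters, and validation logic are assumptions for demonstration, not anything prescribed by the draft rules, which specify only the coverage and duration thresholds.

```python
# Illustrative check of the draft's 10% labelling thresholds.
# All names and structure here are hypothetical, not from the draft text.

AREA_THRESHOLD = 0.10      # visible label: at least 10% of the visual surface
DURATION_THRESHOLD = 0.10  # audible label: the first 10% of playback


def visual_label_compliant(image_w: int, image_h: int,
                           label_w: int, label_h: int) -> bool:
    """Return True if the label covers at least 10% of the image area."""
    if image_w <= 0 or image_h <= 0:
        raise ValueError("image dimensions must be positive")
    coverage = (label_w * label_h) / (image_w * image_h)
    return coverage >= AREA_THRESHOLD


def audio_label_compliant(total_seconds: float,
                          label_end_seconds: float) -> bool:
    """Return True if the audible disclosure spans at least the
    first 10% of the clip's duration."""
    if total_seconds <= 0:
        raise ValueError("duration must be positive")
    return label_end_seconds >= DURATION_THRESHOLD * total_seconds


# A 200x100 banner on a 1920x1080 frame covers only ~0.96% of the area,
# far short of the threshold; a 640x360 overlay (~11.1%) would clear it.
print(visual_label_compliant(1920, 1080, 200, 100))  # False
print(visual_label_compliant(1920, 1080, 640, 360))  # True
print(audio_label_compliant(60.0, 6.0))              # True
```

Note that the draft measures the visual threshold by surface area, so a label's compliance depends on the dimensions of the specific image or video frame it overlays, not on a fixed pixel size.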
Tougher Obligations for Social Media Giants
A new sub-rule — 4(1A) — introduces additional compliance responsibilities for “Significant Social Media Intermediaries” (SSMIs) such as Meta (Facebook, Instagram), X (Twitter), YouTube, and others.
Under this rule, platforms must:
- Prompt users to declare whether a post or upload contains synthetically generated information before publishing.
- Deploy automated verification tools or other technical measures to validate these declarations.
- Clearly label AI-generated or manipulated content for all users.
- Treat any failure to comply as a violation of due diligence obligations under the IT Rules.
This places responsibility on both the user and the platform to ensure transparency in how digital content is disseminated.
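The dual obligation above, a user declaration combined with an automated check, can be sketched as a simple decision rule. This is a hypothetical illustration: the draft mandates "reasonable and appropriate technical measures" but does not prescribe any particular detector, threshold, or workflow, so everything below (the `Upload` structure, the score, the threshold) is assumed for the example.

```python
from dataclasses import dataclass


@dataclass
class Upload:
    """Hypothetical record of a post pending publication on an SSMI."""
    content_id: str
    user_declared_synthetic: bool  # answer to the mandatory declaration prompt
    detector_score: float          # assumed classifier output in [0, 1]


# Assumed threshold; the draft does not prescribe detection mechanics.
DETECTOR_THRESHOLD = 0.8


def moderation_decision(upload: Upload) -> str:
    """Sketch of Rule 4(1A)-style handling: label the content as
    synthetic if either the user's declaration or the automated
    verification signals it."""
    detected = upload.detector_score >= DETECTOR_THRESHOLD
    if upload.user_declared_synthetic or detected:
        return "label_as_synthetic"
    return "publish_unlabelled"


print(moderation_decision(Upload("a1", True, 0.10)))   # label_as_synthetic
print(moderation_decision(Upload("a2", False, 0.95)))  # label_as_synthetic
print(moderation_decision(Upload("a3", False, 0.20)))  # publish_unlabelled
```

The "either signal suffices" design reflects the rule's intent: an honest declaration is labelled directly, while the automated check is there to catch uploads where the user declares falsely.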
Expanded Definition of “Information”
MeitY has also clarified that the term “information” under the IT Rules will now explicitly include synthetically generated information, ensuring that all AI-manipulated or algorithmically generated material falls within the legal and regulatory framework of India’s digital governance regime.
Consultation and Next Steps
The government has invited feedback from industry stakeholders, digital platforms, and policy experts.
Comments and suggestions can be sent to itrules.consultation@meity.gov.in until November 6, 2025.
After the consultation, MeitY is expected to finalise the amendments and issue a notification to bring them into effect.
The Bigger Picture
India’s move to tighten the regulatory net around deepfakes places it among the few major economies actively defining AI accountability frameworks.
If implemented effectively, these amendments could serve as a global model for responsible AI governance, balancing innovation with user protection.
For social media platforms, this signals the start of a new era of compliance-heavy operations, where transparency labels and traceable metadata become standard practice.
In essence, India is sending a clear message:
AI creativity is welcome — but AI deception will no longer go unchecked.
