Is Traceability Enough to Contain Synthetic Misinformation?

Ministry of Electronics and Information Technology Deepfake Disclosure Rules — Ground Reality Check

The420 Web Desk

New Delhi: The Government of India has formally mandated clearer disclosure, labelling, and traceability for AI-generated or “synthetically generated” content under the amended IT Rules, effective 20 February 2026. The move by the Ministry of Electronics and Information Technology (MeitY) is being seen as one of the strongest regulatory steps globally to address the growing threat of deepfakes. However, experts caution that implementation on the ground may prove far more complex than policy intent.

Detection vs Creation: The Speed Gap

One of the biggest hurdles is the detection versus creation speed gap. Generative AI tools can now produce realistic deepfake videos within seconds, while detection systems—though improving—still rely on probabilistic models that struggle with rapidly evolving synthetic techniques. Open-source research consistently shows that detection tools require continuous retraining as new models emerge.
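The retraining problem described above can be sketched in a toy example. This is not a real detector; it only illustrates, under assumed numbers, why a fixed decision threshold learned on one generation of fakes loses recall when newer generators suppress the artifacts the detector keys on:

```python
# Toy illustration (assumed scores, not real data): a probabilistic
# detector thresholds an "artifact score". Older generators leave
# strong artifacts; newer ones suppress them, so recall collapses
# until the detector is retrained.

def detect(artifact_score: float, threshold: float = 0.5) -> bool:
    """Flag content as synthetic if its artifact score exceeds the threshold."""
    return artifact_score > threshold

# Hypothetical artifact scores for fakes from an older generator
old_gen_fakes = [0.90, 0.80, 0.75, 0.70, 0.65]
# Hypothetical scores for fakes from a newer, cleaner generator
new_gen_fakes = [0.55, 0.45, 0.40, 0.30, 0.20]

recall_old = sum(detect(s) for s in old_gen_fakes) / len(old_gen_fakes)
recall_new = sum(detect(s) for s in new_gen_fakes) / len(new_gen_fakes)

print(f"recall on older-generator fakes: {recall_old:.0%}")  # 100%
print(f"recall on newer-generator fakes: {recall_new:.0%}")  # 20%
```

The threshold itself is unchanged between the two runs; only the input distribution shifts, which is exactly the drift that forces continuous retraining.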

Platform Inequality: Big Tech vs Startups

Platform diversity presents another major challenge. Large social media intermediaries possess the financial and technical resources to deploy watermarking, provenance tagging, and AI moderation systems. However, smaller startups, SaaS providers, and open-source ecosystems may face compliance strain due to infrastructure costs and limited AI governance maturity.


Traceability Breaks Across Platforms

Traceability also weakens outside controlled ecosystems. Even if metadata markers are embedded at the point of creation, they can be stripped when content is downloaded, screen-recorded, compressed, or re-uploaded across apps. Global studies on digital watermarking indicate that cross-platform content transfer often breaks provenance chains.
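One way to see why re-uploading breaks provenance is with a minimal sketch. This assumes a simplified scheme (not C2PA or any specific standard) in which a creation-time label is bound to a hash of the exact content bytes; any transcoding or compression changes those bytes, so the binding fails even though the media looks identical:

```python
# Minimal sketch of a hash-bound provenance record (assumed scheme,
# not a real standard): re-encoding the content invalidates the record.
import hashlib

def provenance_record(content: bytes) -> dict:
    """Bind a creation-time disclosure label to the exact content bytes."""
    return {"label": "AI-generated",
            "sha256": hashlib.sha256(content).hexdigest()}

def verify(content: bytes, record: dict) -> bool:
    """Check whether the content still matches its provenance record."""
    return hashlib.sha256(content).hexdigest() == record["sha256"]

original = b"\x89PNG synthetic frame bytes"   # stand-in for a media file
record = provenance_record(original)

# Cross-platform re-upload pipelines transcode and compress content;
# appending a byte here stands in for that lossy re-encoding.
recompressed = original + b"\x00"

print(verify(original, record))      # True: chain intact
print(verify(recompressed, record))  # False: chain broken
```

Schemes that embed the marker in the pixels themselves (watermarking) rather than in metadata survive some of these transformations, but as the studies cited above note, screen recording and aggressive compression can still defeat them.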

Cross-Border and Jurisdictional Challenges

Further, regulation assumes user compliance in disclosing synthetic content. Bad actors, by definition, are unlikely to self-label manipulated media. Cross-border content flow adds another layer of complexity, as deepfakes generated outside India but consumed domestically raise jurisdictional and enforcement issues.

Operational Pressure on Intermediaries

Finally, faster takedown timelines increase operational pressure on intermediaries, requiring advanced AI moderation tools and skilled review teams.

While the rules mark a decisive governance step, experts agree that long-term effectiveness will depend on technology upgrades, economic feasibility, and widespread digital literacy. Implementation, they say, will be a marathon—not a sprint.
