Global Crackdown On Deepfakes: India Engages 30 Countries To Build Strict AI Regulatory Framework

The420 Web Desk

New Delhi: Amid rising concerns over the misuse of artificial intelligence, particularly deepfakes and synthetic misinformation, India has stepped up efforts to develop a stringent global regulatory framework. The government is in discussions with more than 30 countries on the technical and legal aspects of AI governance to establish common standards, accountability mechanisms and compliance models at the international level.

A Multilateral Push for AI Governance

Policy signals indicate that identification of AI-generated content will no longer remain optional. Proposals under discussion include mandatory watermarking and clear labelling of synthetic material, so that users can distinguish authentic content from artificially created content. The move is seen as crucial given the growing use of deepfakes in electoral manipulation, financial fraud, digital arrest scams and social polarisation.
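To make the labelling idea concrete, the short Python sketch below embeds a machine-readable disclosure flag in an image's PNG metadata using the Pillow library. The field names ("ai_generated", "generator") are illustrative assumptions, not drawn from any notified rule; production provenance schemes such as C2PA use signed, standardised manifests rather than plain text chunks.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_synthetic(img: Image.Image, dst_path: str, generator: str) -> None:
    """Attach a machine-readable synthetic-content label as PNG text chunks."""
    meta = PngInfo()
    meta.add_text("ai_generated", "true")   # disclosure flag (hypothetical field name)
    meta.add_text("generator", generator)   # which model produced the content
    img.save(dst_path, pnginfo=meta)

# Demo with a stand-in image; a real pipeline would label model output directly.
label_as_synthetic(Image.new("RGB", (64, 64)), "labelled.png", "example-model-v1")
print(Image.open("labelled.png").text)
# -> {'ai_generated': 'true', 'generator': 'example-model-v1'}
```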

According to the government’s assessment, deepfakes pose not only a technological challenge but also a serious threat to institutional trust. Misleading audio-visual content can erode trust within families and weaken confidence in social identities, the media and governance structures. As a result, a shared-responsibility framework is being proposed for AI developers, social media platforms and content creators, covering obligations on transparency, rapid takedown of harmful content and periodic risk-assessment reports.

Mandatory Watermarking and Labelling

Recent directives have already asked platforms to deploy technical systems to detect AI-generated content and clearly mark such material. Strict action is expected against synthetic content that is illegal, misleading, sexually exploitative or capable of disturbing public order. Work is also underway on traceability mechanisms to quickly identify the source of viral deepfakes and curb cross-border disinformation campaigns.
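One way a traceability mechanism could work at its simplest is a first-seen index keyed by a content fingerprint, as in the sketch below, so that later re-uploads of the same file resolve back to an origin record. This is a toy illustration under stated assumptions: a plain SHA-256 only matches bit-identical copies, and real platforms rely on perceptual hashes that survive re-encoding, cropping and compression.

```python
import hashlib
import time

# First-seen index keyed by content fingerprint. A deployed traceability
# system would use perceptual hashes (robust to re-encoding and cropping);
# SHA-256 here only matches bit-identical copies.
SEEN: dict[str, dict] = {}

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def register_upload(data: bytes, uploader: str) -> dict:
    fp = fingerprint(data)
    # Record only the first sighting; later uploads resolve to that record.
    SEEN.setdefault(fp, {"first_uploader": uploader, "first_seen": time.time()})
    return SEEN[fp]

origin = register_upload(b"<fake video bytes>", uploader="account-A")
repost = register_upload(b"<fake video bytes>", uploader="account-B")
assert repost["first_uploader"] == "account-A"  # repost traces back to origin
```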

Policymakers emphasise that the key challenge lies in ensuring responsible use without stifling innovation or free speech. The regulatory approach is therefore centred on a “trust-by-design” model, where risk-mitigation features are embedded at the platform level. This may include algorithmic accountability, user grievance redressal systems and content authenticity protocols.

Traceability and Accountability Framework

Experts believe India’s initiative could play a significant role in shaping global AI governance. The country is not only one of the world’s largest digital consumer markets but also home to a rapidly expanding AI startup ecosystem. A multilateral framework could lead to shared standards on deepfake detection, metadata tagging, data protection and content verification.

At the technical level, discussions are focusing on watermarking technologies, AI model logging, automated detection tools and platform compliance audits. The proposed framework may also introduce risk-based classification of AI systems, with enhanced oversight for high-impact models.
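A risk-based classification could, at its most basic, reduce to tiering rules like the sketch below. The tiers, criteria and thresholds here are invented for illustration only; neither the article nor the discussions it describes specify how "high-impact" models would actually be defined.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"

@dataclass
class AISystem:
    generates_synthetic_media: bool
    monthly_users_millions: float
    deployed_in_elections_or_finance: bool

def classify(system: AISystem) -> RiskTier:
    # Criteria and thresholds are invented for illustration only; the
    # article does not specify how regulators would draw these lines.
    if system.deployed_in_elections_or_finance:
        return RiskTier.HIGH
    if system.generates_synthetic_media and system.monthly_users_millions >= 10:
        return RiskTier.HIGH
    if system.generates_synthetic_media:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify(AISystem(True, 50.0, False)).value)  # "high"
```

Under such a scheme, systems in the highest tier would attract the enhanced oversight the article mentions, such as model logging and periodic compliance audits.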

With the digital economy expanding and AI adoption accelerating, it is evident that stricter compliance, transparency and accountability norms for AI platforms are on the horizon. The policy objective is to ensure that technology strengthens rather than undermines trust, maintaining a balance between innovation and user safety while safeguarding the credibility of information ecosystems.
