At the NDTV World Summit 2025 in New Delhi on October 18, Minister Ashwini Vaishnaw declared that India will “very soon bring out regulations on the deep-fakes” problem. He warned that while artificial intelligence enables harmless novelty, such as making a face look older or younger, it can also “harm society in ways humans have never seen before.” The minister framed the government’s approach as distinctly “techno-legal”: “The world of AI cannot be regulated simply by passing a law. There has to be a technical solution to it,” he said. His remarks place India’s technology-regulation strategy at a crossroads: one path built on innovation and rapid deployment of AI infrastructure, the other rooted in safeguarding rights, identities and democratic processes.
Innovation at Full Throttle, Regulation to Catch Up
During the summit, Vaishnaw revealed that India is supporting six major AI models, two of which will reportedly use around 120 billion parameters and aim to be “free from biases like Western models have.” He also noted that two semiconductor assembly units have commenced production domestically to feed the compute hunger of AI development. These announcements underline a broader governmental push: build the infrastructure, attract investment (including a $15 billion commitment by Google LLC for an AI facility in Visakhapatnam), and expand India’s role in global AI.
However, this high-speed rollout coincides with rising concerns about misuse. Deep-fakes — synthetic media that can replicate voices, create false scenarios or impersonate real people — present novel regulatory challenges. Vaishnaw emphasised that “your face and your voice should not be used in a harmful way for society.”
For many observers, India is running ahead of its own regulatory apparatus: existing legal frameworks, such as the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, are still being absorbed even as new risks like AI-enabled disinformation surge across social and digital spheres.
The Regulatory Landscape and the Deep-Fake Impetus
Until now, India has issued advisories (for example, in March 2024 on AI and deep-fakes under existing IT rules) but not full-blown regulation specific to synthetic media. Vaishnaw’s hint at “very soon” is expected to translate into a dual-track strategy: legislative amendments paired with technical mandates and enforcement tools. His comparison to the European Union (“they simply focus on regulating things. Our bias is more towards innovation”) underscores a distinct Indian stance. Legal experts say the challenge will lie not just in defining deep-fakes, but in implementing detection systems, chain-of-custody frameworks, platform responsibility, and cross-border cooperation. A law without technical infrastructure, they warn, may prove toothless. Vaishnaw’s statement that “technical solution plus regulation combined with it … is going to be more effective” reinforces this point.
As India readies its regulatory machinery, key questions remain in the spotlight: What constitutes harmful deep-fake content? What bar should platforms meet? How will rights of expression and anonymity be balanced against manipulation and fraud?
These questions are poised to shape the next chapter of India’s digital governance.
What It Means for Stakeholders Now
For technology companies, the minister’s announcement signals that compliance and technical readiness must move up the agenda. Firms building or deploying generative AI models may soon face new obligations around provenance, transparency and model accountability.
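What a provenance obligation could look like in practice has not been specified, but the underlying idea is straightforward: attach verifiable metadata to generated media at the moment of creation. The minimal Python sketch below is purely illustrative; the record fields, file names and model identifier are assumptions, and any real scheme would rely on cryptographic signing (as content-credential standards do) rather than a bare hash.

import hashlib
import json
import time
from pathlib import Path

def provenance_record(media_path: str, generator: str) -> dict:
    # Hash the raw bytes of the file so later copies can be checked
    # against the digest recorded at generation time.
    digest = hashlib.sha256(Path(media_path).read_bytes()).hexdigest()
    return {
        "sha256": digest,
        "generator": generator,           # model name/version (placeholder)
        "created_utc": int(time.time()),  # when the media was produced
        "synthetic": True,                # explicit AI-generated flag
    }

def write_sidecar(media_path: str, record: dict) -> None:
    # Store the record next to the media file as a JSON "sidecar".
    Path(media_path + ".provenance.json").write_text(json.dumps(record, indent=2))

# Hypothetical usage (file and model names are placeholders):
# write_sidecar("clip.mp4", provenance_record("clip.mp4", "example-model-v1"))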
Platforms hosting user-generated media will likely be asked to institute “red-flag” systems for synthetic content (one possible shape is sketched below), historical version tracking, or validated user identity pipelines. For civil society and media, the regulatory thrust raises concerns about over-reach: will litigation, liability and enforcement chill creative expression or debate online?
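As for the first of those possible obligations, no detection standard has been published, so any “red-flag” mechanism is necessarily speculative. The sketch below shows the simplest shape such a triage step could take, assuming a hypothetical detector that outputs a probability that a piece of media is synthetic; the thresholds and actions are placeholders, not anything the government has mandated.

from dataclasses import dataclass

@dataclass
class Decision:
    action: str    # "allow", "label_as_synthetic", or "hold_for_review"
    score: float   # detector's estimated probability the media is synthetic

def red_flag(synthetic_score: float,
             label_threshold: float = 0.5,
             review_threshold: float = 0.9) -> Decision:
    # Map a detector score to a platform action: low scores pass through,
    # mid scores get a visible "synthetic media" label, and high scores
    # are held for human review before publication.
    if synthetic_score >= review_threshold:
        return Decision("hold_for_review", synthetic_score)
    if synthetic_score >= label_threshold:
        return Decision("label_as_synthetic", synthetic_score)
    return Decision("allow", synthetic_score)

# Hypothetical usage: a detector reporting 0.72 would result in a visible label.
# print(red_flag(0.72))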
Meanwhile, for individual citizens, the message is clear: your voice and your face can become digital weapons. “It’s your right,” said the minister, “to make sure that your face and your voice should not be used in a harmful way for society.” The coming regulation will test whether India can combine the agility of its innovation ecosystem with the safeguards of a mature rule-of-law regime. As the countdown begins, one thing is certain: the era of synthetic media governance has arrived.