NEW DELHI — At 2:30 p.m. on February 17, in Level 1 Meeting Room No. 15 at Bharat Mandapam, the session titled “AI for Secure India: Combating AI-Enabled Cybercrime, Deepfakes, Darkweb Threats & Data Breaches” drew a standing-room-only audience.
The discussion was curated by the Future Crime Research Foundation (FCRF), which served as a Knowledge Partner at the India AI Impact Summit. The panel featured Prof. Triveni Singh, former IPS officer and Chief Mentor at FCRF; Rakesh Maheshwari, cyber law and data governance expert; Dr. Sapna Bansal of Shri Ram College of Commerce, University of Delhi; Tarun Wig, Co-Founder and CEO of Innefu Labs; and Senior Advocate Vivek Sood of the Supreme Court of India. The session was moderated by Navneethan M., Senior Vice President and Chief Information Security Officer.
Rather than opening with prepared statements, the session began with a direct provocation: Is AI in cybersecurity hype, or necessity?
The tone was analytical, not alarmist. Artificial intelligence, the panel suggested, is no longer an optional layer in cyber defense. It has become integral to both offense and protection. The battlefield itself has turned algorithmic.
When Attackers Move at Machine Speed
The conversation quickly examined whether AI has tilted the cyber battlefield in favor of attackers. Automated phishing campaigns now generate hyper-personalized messages at scale. New malware variants can be produced in rapid iterations. Social engineering tactics have become more sophisticated, less detectable and faster to deploy.
A central question followed: Are cyber incidents actually increasing, or are detection systems simply getting better?
The exchange underscored a paradox. AI enhances threat detection, processing terabytes of data in real time, but it also enables adversaries to automate reconnaissance, vulnerability scanning and exploitation. Breach timelines have compressed. What once unfolded over weeks can now occur within hours.
Traditional controls such as static passwords, manual audits and rule-based filters were described as increasingly inadequate against adversaries leveraging adaptive AI systems. The implication was clear: defensive architectures must evolve as dynamically as the threats they confront.
Deepfakes, Dark Web Markets and the Erosion of Trust
Midway through the session, the discussion pivoted toward synthetic media and the destabilization of digital trust.
When voice and video can be convincingly fabricated, how do institutions verify authenticity? Are deepfakes merely a financial fraud problem, or do they pose national security risks?
The panel explored the rise of AI-enabled dark web marketplaces offering Crime-as-a-Service models. Phishing kits, synthetic identity generators and deepfake production tools are now accessible to actors with limited technical sophistication. The barrier to entry for cybercrime has dropped, while the potential impact has grown.
Data governance surfaced as a parallel concern. Weak internal controls, fragmented compliance structures and delayed breach reporting can amplify the damage caused by AI-driven attacks. The discussion suggested that poor data stewardship may be as significant a vulnerability as external hackers.
Legal frameworks were also examined. As AI-generated content complicates attribution and evidentiary standards, the adequacy of existing cyber laws comes under scrutiny. Can courts effectively address crimes where authenticity itself becomes contested?
Toward Sovereign AI Security
In the session’s closing arc, the lens widened from enterprise cybersecurity to national strategy.
Should AI security be treated as critical national infrastructure?
As artificial intelligence becomes embedded in banking networks, healthcare systems, governance platforms and defense logistics, securing AI systems begins to resemble a sovereign responsibility rather than a private-sector compliance function.
The moderator steered the dialogue toward forward-looking solutions: Can AI be effectively deployed to hunt criminals? Can CISOs trust AI-driven decision-making during crisis moments? What is the single most important action India must take in the next twelve months to strengthen AI resilience?
The responses converged around one theme: security must be designed into AI ecosystems from inception. Reactive enforcement will not suffice in an environment defined by machine-speed threats.
Curated by FCRF in its capacity as Knowledge Partner, the session offered a sobering counterpoint to the broader optimism surrounding artificial intelligence. While other discussions celebrated compute power and model scale, this panel emphasized governance, accountability and systemic resilience.
Inside Room No. 15, the conversation made one point unmistakable: in the age of synthetic reality, innovation without security risks eroding the very trust upon which a digital republic depends.
And as India accelerates toward an AI-driven future, that trust may prove to be its most valuable infrastructure.
