Sanjeev Sanyal, member of the PM’s Economic Advisory Council, proposes a financial-markets-inspired model for AI regulation, with tools such as circuit breakers and explainability audits, warning that catastrophic failures could follow if the risks are ignored.
Sanjeev Sanyal has urged the adoption of a financial-markets-style regulatory framework for artificial intelligence. Comparing future AI failures to financial crashes, he warns that opaque, interconnected AI systems left unchecked could lead to cascading failures across critical infrastructure. With India set to host a global AI summit next year, his call reignites a vital and urgent debate.
A “Stock Market Approach” to AI Regulation: Sanyal’s Third Way
Speaking on a podcast, Sanjeev Sanyal — Member of the Prime Minister’s Economic Advisory Council — issued a no-holds-barred warning about the unchecked rise of artificial intelligence. Unlike the United States’ hands-off, litigation-heavy model or the European Union’s tiered, risk-based approach, Sanyal advocates a third, adaptive path.
Drawing a bold analogy with stock market oversight, he proposed creating an independent AI regulatory body, inspired by SEBI, the Securities and Exchange Board of India, with powers to impose circuit breakers, conduct explainability audits, and enforce manual overrides that halt autonomous AI activity when necessary.
“Do I need to know where a share price will go to regulate the stock market? No. So why assume we can predict AI’s trajectory?” he asked, invoking complexity theory to argue that uncertainty is baked into any dynamic AI system. Hence, trying to pre-categorize AI risks may be futile. Instead, the system should be built to respond to failure, not just to predict it.
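To make the circuit-breaker analogy concrete, here is a minimal Python sketch of how such a mechanism might wrap an AI model call: the system trips automatically after repeated anomalies and stays halted until a human operator resets it. The class, thresholds, and reset step are illustrative assumptions, not details from Sanyal's proposal.

```python
import time

class AICircuitBreaker:
    """Illustrative circuit breaker around an AI model call.

    Hypothetical sketch: trips after `max_failures` anomalies within a
    rolling `window_seconds` window, then blocks the model until a
    human operator explicitly resets it (the manual override).
    """

    def __init__(self, max_failures=3, window_seconds=60):
        self.max_failures = max_failures
        self.window_seconds = window_seconds
        self.failures = []     # timestamps of recent anomalies
        self.tripped = False   # once tripped, only a human may reset

    def record_anomaly(self):
        now = time.time()
        # Keep only anomalies inside the rolling window.
        self.failures = [t for t in self.failures
                         if now - t < self.window_seconds]
        self.failures.append(now)
        if len(self.failures) >= self.max_failures:
            self.tripped = True  # breaker trips: autonomous activity halts

    def call(self, model_fn, *args):
        if self.tripped:
            raise RuntimeError("Circuit breaker tripped: manual reset required")
        try:
            return model_fn(*args)
        except Exception:
            self.record_anomaly()
            raise

    def manual_reset(self, operator_id):
        # The manual-override step: a human re-enables the system
        # only after reviewing what went wrong.
        print(f"Reset authorised by operator {operator_id}")
        self.tripped = False
        self.failures = []
```

The design mirrors his point: the breaker makes no attempt to predict where the model will go wrong; it simply guarantees that repeated failure triggers a halt and a human review.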
The Real Risks: Cascading Failures, Black-Box Systems, and the ‘Internet of AI Things’
Sanyal’s remarks come against the backdrop of a growing sense of unease over how deeply AI systems have permeated daily life—from financial transactions to traffic flows, cybersecurity operations, and content moderation.
He cited a real-world case: a software update gone wrong last year that took down cloud services, airports, and ATMs globally. That failure involved static code; the stakes multiply when a system evolves and acts on its own.
“Now imagine a black-box AI system failing — with no human fully understanding what went wrong,” Sanyal said. In his proposed framework, AI systems in finance, power grids, transportation, and healthcare would be deliberately siloed with firewalls to prevent interlinked collapses. The vision stands in contrast to what he derided as the “Internet of AI Things”—a highly connected ecosystem where failure in one domain could spiral into widespread disruption.
His worry: as AI grows more autonomous and multimodal, the likelihood of simultaneous failure across sectors will increase unless firewalls and override protocols are hardcoded from the start.
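As a rough illustration of what such siloing could look like in software, the sketch below assigns each agent to a single sector, refuses cross-sector calls outright, and lets a regulator halt one domain without touching the others. Every name and rule here is a hypothetical assumption for illustration, not a specification from Sanyal's framework.

```python
from enum import Enum

class Sector(Enum):
    FINANCE = "finance"
    POWER = "power"
    TRANSPORT = "transport"
    HEALTHCARE = "healthcare"

class SiloedAgent:
    """Hypothetical AI agent locked to one sector, with a built-in kill switch."""

    def __init__(self, name: str, sector: Sector):
        self.name = name
        self.sector = sector
        self.halted = False

    def send(self, recipient: "SiloedAgent", message: str) -> None:
        # Firewall rule: cross-sector calls are refused outright,
        # so a failure in one domain cannot propagate to another.
        if self.halted or recipient.halted:
            raise RuntimeError("Agent halted by override protocol")
        if recipient.sector is not self.sector:
            raise PermissionError(
                f"Firewall: {self.sector.value} agent may not reach "
                f"{recipient.sector.value}")
        print(f"{self.name} -> {recipient.name}: {message}")

def emergency_halt(agents: list[SiloedAgent], sector: Sector) -> None:
    # Override protocol: a regulator halts every agent in a failing
    # sector while the other sectors keep running.
    for agent in agents:
        if agent.sector is sector:
            agent.halted = True

if __name__ == "__main__":
    agents = [SiloedAgent("credit-scorer", Sector.FINANCE),
              SiloedAgent("grid-balancer", Sector.POWER)]
    emergency_halt(agents, Sector.FINANCE)  # finance halts; power unaffected
```

The contrast with the "Internet of AI Things" is the point of the sketch: connectivity is denied by default, so a cascade has nowhere to spread.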
India’s Sovereign Stand: Demanding AI Accountability on Domestic Soil
Sanyal also pushed for regulatory sovereignty, stressing that companies headquartered abroad must disclose how their AI systems behave on Indian soil. “Even foreign companies operating in India should be required to explain how their AI systems function,” he stated, hinting at mandatory transparency obligations for global platforms and large language models (LLMs).
India’s growing role as an AI development hub makes this stance even more urgent. With a Global AI Summit planned in India next year, Sanyal expressed hope it would move discussions away from technical awe and back toward real-world risks and governance models.
“Everyone’s dazzled by LLMs. AI regulation has been completely sidelined. We must act before a catastrophic failure forces our hand,” he said. His remarks serve as both a national blueprint and an international wake-up call—urging top AI-developing nations to step up and engage in collaborative governance before global systems are tested in the worst way possible.