As investors pour trillions into artificial intelligence, echoes of past bubbles—from the dot-com crash to the subprime meltdown—loom large. Beneath Silicon Valley’s optimism lies a tangle of financial circularities, untested returns, and technological limits that could decide the fate of this era’s defining bet.
The Familiar Arc of Euphoria
Artificial intelligence, the great promise of the digital age, is retracing a well-worn path of economic exuberance. The cycle—mapped as early as 1837 by Lord Overstone—moves from confidence and prosperity to overtrading and distress. Today’s AI mania sits squarely in that arc’s upper curve.
From neural networks and expert systems to large language models (LLMs) and generative AI, the technology’s promise has swollen into a narrative of inevitability. Proponents envision the “singularity”—a moment when machine intelligence surpasses the human mind, blending creativity and computation into a seamless whole.
Yet beneath the surface of this technological faith lies an uncomfortable symmetry with the speculative booms of the past. AI has become less a field of research than a belief system, fueled by colossal investment and a moral insistence that underinvesting—not overinvesting—is the true risk.
The Numbers Behind the Hype
By some estimates, capital expenditure on AI could reach $5–7 trillion (₹441–619 lakh crore) by 2030, contributing nearly a full percentage point to U.S. GDP growth. AI-related companies now account for 80 percent of total U.S. stock returns, with startup valuations rising to $2.3 trillion (₹203 lakh crore) in 2025—nearly five times their 2020 level.
But the financial foundations are increasingly fragile. The web of transactions sustaining the AI boom distorts balance sheets and inflates earnings. Hardware makers book record profits from GPU sales, while their buyers—AI startups and data-center operators—treat those purchases as long-term investments, depreciating assets over extended periods.
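The earnings effect of those depreciation choices is easy to sketch with hypothetical numbers (the fleet cost and schedules below are illustrative, not drawn from any company's filings): stretching a GPU's assumed useful life shrinks the annual depreciation charge and flatters near-term reported earnings.

```python
# Illustrative only: how a longer assumed useful life for GPU assets
# reduces the annual depreciation expense hitting the income statement.
def annual_depreciation(cost: float, useful_life_years: int) -> float:
    """Straight-line depreciation: equal expense each year of the asset's life."""
    return cost / useful_life_years

gpu_fleet_cost = 10_000_000_000  # hypothetical $10B fleet of GPUs

# Depreciating the same fleet over 3 years vs. 6 years:
expense_3yr = annual_depreciation(gpu_fleet_cost, 3)  # aggressive schedule
expense_6yr = annual_depreciation(gpu_fleet_cost, 6)  # extended schedule

# The extended schedule defers expense, lifting near-term reported earnings.
earnings_boost = expense_3yr - expense_6yr
print(f"3-year schedule: ${expense_3yr / 1e9:.2f}B expense per year")
print(f"6-year schedule: ${expense_6yr / 1e9:.2f}B expense per year")
print(f"Near-term earnings uplift: ${earnings_boost / 1e9:.2f}B per year")
```

The total expense over the asset's life is identical either way; only its timing shifts, which is why longer schedules flatter earnings today at the cost of charges later.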
This circularity is not theoretical. Nvidia, for instance, invested $100 billion (₹8.85 lakh crore) in OpenAI in 2025, only for OpenAI to use much of that capital to purchase Nvidia’s own chips. Similar loops have emerged between GPU lessors like CoreWeave and their suppliers. The result is an ecosystem where earnings and valuations feed off one another, echoing the intricate accounting gymnastics of the dot-com era.
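A stylized version of such a loop (hypothetical amounts, assuming for simplicity that the entire investment is spent on chips) shows how a single pool of capital can end up booked as both an equity stake and sales revenue:

```python
# Stylized, hypothetical loop: a chipmaker invests in a customer,
# and the customer spends that capital on the chipmaker's hardware.
# All amounts in billions of dollars; not any company's actual books.
chipmaker = {"cash": 100.0, "revenue": 0.0, "stake_in_customer": 0.0}
customer = {"cash": 0.0, "gpu_assets": 0.0}

# Step 1: the chipmaker invests $100B in the customer.
chipmaker["cash"] -= 100.0
chipmaker["stake_in_customer"] += 100.0
customer["cash"] += 100.0

# Step 2: the customer spends the capital on the chipmaker's GPUs.
customer["cash"] -= 100.0
customer["gpu_assets"] += 100.0
chipmaker["cash"] += 100.0
chipmaker["revenue"] += 100.0

# The chipmaker's cash is back where it started, yet it now reports
# $100B of revenue plus a $100B stake whose valuation rests on that revenue.
print(chipmaker)
print(customer)
```

The round trip leaves the system with no new outside money, which is why such loops can inflate reported earnings and valuations simultaneously.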
AI investment, analysts warn, now stands at 17 times the scale of the 2000 tech bubble and four times the 2008 subprime boom, much of it financed not by equity but by $1.2 trillion (₹106 lakh crore) in debt, accounting for 14 percent of all investment-grade corporate borrowing.
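Taking those two figures at face value, the implied size of the investment-grade market follows from simple arithmetic:

```python
# Back-of-envelope check using only the figures quoted above.
ai_debt = 1.2e12  # $1.2 trillion of AI-related borrowing
share_of_ig = 0.14  # stated as 14% of investment-grade corporate borrowing

# If $1.2T is 14% of the total, the total is ai_debt / share_of_ig.
implied_ig_market = ai_debt / share_of_ig
print(f"Implied IG corporate debt outstanding: ${implied_ig_market / 1e12:.1f} trillion")
# → roughly $8.6 trillion
```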
The Economics of Uncertainty
The road to profitability remains elusive. OpenAI, despite its global visibility, generated $4.3 billion (₹38,072 crore) in revenue in the first half of 2025—but spent $4.5 billion (₹39,843 crore) on marketing and stock-based compensation alone, posting a $7.8 billion (₹69,055 crore) loss.
To justify current valuations, revenues across the sector would need to expand more than twentyfold, exceeding $1 trillion annually to offset depreciation, energy costs, and infrastructure spending. Even Microsoft—whose Office and Windows suites rank among the most successful digital products ever—generates under $100 billion (₹8.86 lakh crore) in annual revenue from them.
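The twentyfold claim implies a current sector run-rate of roughly $50 billion, a back-of-envelope inference from the figures above rather than a sourced number:

```python
# Inferring the current run-rate implied by the article's own figures.
required_revenue = 1.0e12  # "exceeding $1 trillion annually"
expansion_factor = 20  # "more than twentyfold"

implied_current = required_revenue / expansion_factor
print(f"Implied current sector revenue: ${implied_current / 1e9:.0f}B per year")
# → $50B per year
```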
Meanwhile, China’s cheaper DeepSeek-R1 model, launched in early 2025, has cast doubt on the Western model of capital-intensive AI development. Its open-source architecture threatens to erode the moat of firms that have bet heavily on proprietary systems.
And despite the hype around productivity gains, 95 percent of corporate AI pilots have failed to generate measurable revenue growth. Companies that replaced workers with automation often found themselves re-employing staff when the technology underperformed.
The Limits of Machine Intelligence
Beyond balance sheets, fundamental questions remain about the technology itself. Large language models—essentially probabilistic pattern-matching systems—excel at interpolation but fail at extrapolation. They predict rather than reason, and when trained on finite data, their marginal improvements diminish rapidly.
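Empirical scaling studies describe that flattening with power laws; the sketch below (with made-up constants, not fitted to any published model) shows how each doubling of training data buys a strictly smaller absolute improvement in loss:

```python
# Hypothetical power-law scaling curve: loss(D) = a * D**(-alpha).
# The constants a and alpha are illustrative, not fitted to any real model.
a, alpha = 10.0, 0.1

def loss(data_tokens: float) -> float:
    """Model loss as a decreasing power law of training-data size."""
    return a * data_tokens ** (-alpha)

base = 1e9  # hypothetical starting corpus of 1B tokens
prev = loss(base)
for doubling in range(1, 6):
    cur = loss(base * 2**doubling)
    print(f"doubling {doubling}: loss {cur:.4f} (improvement {prev - cur:.4f})")
    prev = cur
# Each successive doubling of data shaves off a smaller slice of loss.
```

For any power law with a positive exponent, the improvement per doubling shrinks monotonically, which is the mathematical shape behind "marginal improvements diminish rapidly."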
LLMs consume vast quantities of data, much of it scraped without authorization, raising disputes over copyright, compensation, and privacy. Even so, most models operate on incomplete or biased datasets, compounding misinformation and error.
Despite exponential increases in computing power, AI continues to stumble on simple factual reasoning. Its achievements in writing, coding, or medical diagnostics—while impressive—fall short of the “killer app” that once made email or Excel indispensable. Microsoft’s CEO, in a rare moment of candor, noted that AI has yet to deliver a comparable leap.
For now, generative AI remains an extraordinary tool—but perhaps also a costly parlor trick, dazzling in short bursts yet unproven in endurance. As history shows, such moments of conviction often precede correction. Whether this “good kind of bubble,” as Jeff Bezos once called it, will end differently this time remains to be seen.