Google DeepMind CEO Demis Hassabis has asserted that the primary threat from AI does not stem from job automation but from the potential misuse of the technology as it becomes increasingly powerful.
Speaking at a recent forum, Hassabis cautioned against underestimating the ethical and security implications of AI systems in the coming years. While much public discourse focuses on job displacement and economic disruption, he said these issues, though important, are overshadowed by the risk of AI being leveraged for malicious purposes.

Call for Robust Governance and Global Cooperation
Hassabis emphasized the urgent need for international collaboration in shaping regulatory frameworks. He highlighted risks ranging from misinformation and deepfakes to the misuse of autonomous systems in military or surveillance applications. “The future of AI must be safeguarded through proactive oversight,” he said.
He called for transparency in AI development processes, citing the importance of shared ethical standards across borders. The DeepMind CEO underscored the need for safety mechanisms to be embedded in AI design from the outset, rather than being retrofitted after deployment.
His remarks follow a growing list of warnings from tech leaders and policymakers about the darker possibilities of unregulated AI, including potential misuse by authoritarian regimes or rogue actors.

Not Just an Economic Conversation Anymore
While job displacement due to automation remains a concern, Hassabis argued that the public narrative should broaden. “This isn’t just about employment. It’s about ensuring that AI, as a dual-use technology, doesn’t become a threat to global stability,” he noted.
Experts in AI ethics echoed his concerns, advocating for democratic accountability and an international AI watchdog. With AI systems now touching sectors from finance to defence, the stakes are unprecedented.
The DeepMind CEO’s comments are likely to influence upcoming global tech policy discussions and may prompt governments to move faster in establishing international AI governance protocols.