Geoffrey Hinton warns that artificial intelligence poses dual risks: misuse by bad actors, and the possibility of autonomous systems slipping beyond human control. Recent AI-driven cyberattacks and growing concerns over regulation highlight the challenges of governing a rapidly advancing technology.

The AI Dilemma: Geoffrey Hinton Warns Of Dual Risks From Rapid AI Advancement

The420 Web Desk

For Geoffrey Hinton, often described as one of the foundational figures of modern artificial intelligence, the dangers posed by the technology fall into two distinct categories: the misuse of AI by human actors and the possibility of AI itself becoming uncontrollable.

“There’s a big distinction between two different kinds of risk,” Hinton said. “There’s the risk of bad actors misusing AI, and that’s already here.” He pointed to the growing prevalence of deepfake videos, cyberattacks, and the potential for AI-assisted viruses as examples of how the technology is already being deployed with harmful intent.

This form of risk, he suggested, is immediate and visible. Yet it is separate from what he considers the more profound concern: the possibility that AI systems themselves could evolve into independent actors, no longer governed by human oversight.

A Shift in Control and Capability

Hinton warned that once artificial intelligence reaches a level of superintelligence, it may not only surpass human cognitive abilities but also develop an intrinsic drive to survive and exert control. In such a scenario, the current assumption—that humans can direct and contain AI systems—may no longer hold.

“The current framework around AI—that humans can control the technology—will therefore no longer be relevant,” he said.

To address this, Hinton proposed a conceptual shift in how AI systems are designed and understood. He suggested that future models might need to be imbued with what he described as a “maternal instinct,” encouraging them to act with care toward humans rather than dominance.

Drawing an analogy, he described a dynamic in which a more intelligent entity could still act protectively toward a less intelligent one. “They will be the mothers, and we will be the babies,” he said, outlining what he viewed as a potentially safer relationship between humans and advanced AI.

Evidence of Emerging Threats

Concerns about AI’s misuse have already begun to materialize. In November 2025, the company Anthropic reported disrupting what it described as the first documented case of a large-scale AI-driven cyberattack carried out with minimal human intervention.

According to the company, a Chinese state-sponsored group manipulated its Claude Code system in an attempt to infiltrate approximately 30 organizations, including technology firms, financial institutions, government agencies, and chemical manufacturers.

The incident has contributed to a growing belief among cybersecurity experts that AI could soon enable largely automated cyberattacks. Some analysts have warned that countries such as Iran may be capable of leveraging such tools to target critical infrastructure, including that of the United States.

These developments, experts say, illustrate how AI is already reshaping the threat landscape, accelerating both the scale and sophistication of cyber operations.

Incentives, Regulation, and the Limits of Control

Despite mounting concerns, Hinton expressed skepticism about whether existing incentives within the technology industry are aligned with long-term safety. He argued that companies and researchers are often driven by immediate technical challenges and short-term gains rather than broader societal outcomes.

“For the owners of the companies, what’s driving the research is short-term profits,” he said, noting that developers are typically focused on solving immediate problems rather than anticipating future consequences.

Hinton has advocated for stronger regulatory frameworks, but acknowledged that governance alone may not be sufficient. Each emerging risk, he suggested, requires a distinct solution, from countering deepfakes to preventing autonomous cyber threats.

He also pointed to the need for systems that can verify the authenticity of digital content, envisioning mechanisms akin to provenance signatures for images and videos to limit the spread of manipulated media.
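The provenance idea Hinton gestures at can be illustrated with a minimal sketch. The function names, the shared secret key, and the HMAC construction below are all assumptions chosen for brevity; real provenance standards such as C2PA embed signed manifests using public-key certificates rather than a shared secret, but the tamper-evidence principle is the same: any change to the media invalidates the signature.

```python
import hashlib
import hmac

def sign_media(media: bytes, key: bytes) -> str:
    """Return a hex signature binding the signer's key to the media bytes."""
    return hmac.new(key, media, hashlib.sha256).hexdigest()

def verify_media(media: bytes, key: bytes, signature: str) -> bool:
    """Check that the media has not been altered since it was signed."""
    expected = sign_media(media, key)
    # compare_digest avoids timing side channels during comparison
    return hmac.compare_digest(expected, signature)

key = b"creator-secret-key"          # placeholder key for illustration
original = b"\x89PNG...image bytes"  # stand-in for real image data

sig = sign_media(original, key)
assert verify_media(original, key, sig)             # untouched media passes
assert not verify_media(original + b"x", key, sig)  # any edit invalidates it
```

In a deployed scheme the signature and signer identity would travel with the file as metadata, letting platforms flag media whose provenance chain is missing or broken.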

Hinton’s warnings come amid broader debates about the future of AI, including perspectives from figures like Elon Musk, who has described a future in which automation could eliminate most jobs while enabling widespread access to goods and services through mechanisms such as a universal high income.

Yet even as such visions suggest abundance, Hinton has continued to emphasize the unresolved risks. He has estimated a 10 to 20 percent chance that advanced AI could pose an existential threat to humanity, underscoring the uncertainty surrounding the technology’s long-term trajectory.

Having left his role at Google in 2023 to speak more openly about these concerns, Hinton said his central fear remains the inability to prevent harmful uses of AI by those intent on exploiting it. In his view, the trajectory of artificial intelligence will depend not only on technical progress, but on whether its development can be aligned with safeguards capable of keeping pace.
