Washington: The United States used Anthropic’s artificial-intelligence model Claude during a covert operation that led to the capture of Venezuelan leader Nicolás Maduro, according to a report by the Wall Street Journal citing people familiar with the matter.
The AI deployment reportedly took place through Anthropic’s partnership with data analytics firm Palantir Technologies, whose platforms are widely used by the United States Department of Defense and federal law-enforcement agencies.
Maduro was seized in an audacious early-January raid and flown to New York to face drug-trafficking charges, marking one of the most dramatic U.S. operations against a sitting Latin American leader in recent years. The episode has now triggered a wider debate over how far commercial AI tools are being integrated into sensitive military and intelligence missions.
Neither the Pentagon nor the White House immediately responded to requests for comment. Anthropic and Palantir also declined to issue statements on the report.
According to people cited by the Journal, Claude was made available to U.S. operators via Palantir’s government platforms, which already run on both classified and unclassified networks. While several technology companies are developing bespoke AI tools for defence clients, Anthropic is currently the only major AI firm whose model is accessible in classified environments through third-party integrations.
The development comes amid an aggressive push by the Pentagon to onboard leading AI companies — including OpenAI — onto secure government systems. Defence officials are seeking fewer commercial restrictions on how such models can be used, arguing that AI is becoming central to planning, logistics and battlefield intelligence.
Anthropic’s own usage policies formally prohibit employing Claude to support violence, design weapons or conduct surveillance. Yet defence officials maintain that government access remains governed by contractual safeguards, even as operational requirements evolve.
Industry executives say most AI deployments for the military currently focus on administrative workflows, data fusion and threat assessment rather than direct combat support. Still, the reported use of Claude in the Maduro operation suggests a rapid expansion of AI’s role in real-world security missions.
The controversy is amplified by Anthropic’s soaring valuation. The San Francisco–based firm recently closed a massive funding round that lifted its valuation to about $380 billion, underscoring the strategic importance Washington now places on private AI developers.
Security analysts note that Palantir has long served as a bridge between Silicon Valley and U.S. defence agencies, providing platforms that integrate satellite imagery, communications intercepts and field intelligence. The addition of large language models like Claude is seen as a force multiplier — enabling faster synthesis of complex datasets and real-time decision support.
However, civil-liberties advocates warn that embedding generative AI into military systems risks weakening transparency and accountability, especially when proprietary models are involved. They argue that clearer oversight frameworks are needed before such tools become standard components of covert operations.
For now, the U.S. government has offered no public clarification on precisely how Claude was used during the Maduro raid. But the episode highlights a pivotal shift: artificial intelligence is no longer confined to back-office analytics — it is moving directly into the operational core of national security.
With Washington accelerating efforts to integrate commercial AI across defence networks, the Maduro capture may be remembered as a turning point in how modern conflicts and high-stakes law-enforcement actions are planned and executed.
