Palantir CEO Says Claude AI Still Used Despite Pentagon Risk Label

Palantir CEO Alex Karp has acknowledged that the company continues to use Anthropic's Claude AI models even after the Pentagon labeled the firm a supply-chain risk, highlighting how deeply artificial intelligence tools are embedded in U.S. defense technology systems.

The420 Correspondent

Washington | When the United States Department of Defense labeled the artificial intelligence company Anthropic a potential supply-chain risk last week, the move signaled a rare and consequential step in Washington’s rapidly intensifying scrutiny of emerging technology vendors.

Yet even as the Pentagon moves to phase out the company’s flagship Claude language model from defense-related work, one of the government’s most prominent technology contractors says the transition may be far from immediate.

Speaking at Palantir’s AIPCon 9 conference in Maryland, the company’s chief executive, Alex Karp, acknowledged that Palantir’s systems remain integrated with Anthropic’s technology — a reality that underscores the complicated relationship between cutting-edge artificial intelligence platforms and national security infrastructure.

“The Department of War is planning to phase out Anthropic; currently, it’s not phased out,” Karp said in remarks reported by CNBC. “Our products are integrated with Anthropic, and in the future it will probably be integrated with other large language models.”

The comments highlight a broader dilemma facing U.S. defense agencies: how to disengage from widely used AI systems without disrupting ongoing military operations.

Pentagon Labels Anthropic a Supply-Chain Risk

The controversy began when the U.S. Department of Defense formally designated Anthropic as a supply-chain risk — a classification typically reserved for companies linked to foreign adversaries or those deemed vulnerable to security compromise.

The designation requires defense contractors and vendors working on Pentagon-related programs to certify that they are not using Anthropic’s Claude models in systems tied to military operations.

The move represents one of the most serious actions the U.S. government has taken against a major artificial intelligence developer.

Officials have not publicly detailed the specific concerns that led to the designation. But the decision reflects mounting anxiety within the national security community over the reliance on rapidly evolving AI platforms in sensitive government systems.

Despite the directive, reports indicate that Claude models remain embedded in certain defense-related systems.

According to CNBC, the AI system is still being used in support of U.S. military operations connected to Iran, underscoring how difficult it can be to remove foundational software components once they are integrated into complex digital infrastructure.

Palantir’s Systems Still Tied to Claude

Palantir, one of the Pentagon’s most influential data analytics and software contractors, has built a number of its AI-enabled products using large language models.

Karp’s remarks suggest that Anthropic’s Claude models currently form part of those systems, even as federal authorities push toward removing the technology.

Palantir’s platforms are widely used across U.S. defense and intelligence agencies for tasks ranging from battlefield data analysis to intelligence processing and operational planning.

Replacing a core AI component within such systems can involve extensive reengineering, testing and security validation — processes that can take months or even years.

Karp’s comments indicate that while Anthropic may eventually be replaced with other language models, the transition is unlikely to be immediate.

“Our products are integrated with Anthropic,” he said, noting that the systems could later incorporate other large language models as alternatives become available.

Anthropic Challenges the Government’s Decision

Anthropic has responded aggressively to the Pentagon’s designation.

The company has filed a lawsuit against the Trump administration, arguing that the supply-chain risk label is “unprecedented and unlawful.”

In court filings, the company contends that the government’s action threatens hundreds of millions of dollars in contracts, potentially damaging its business relationships with both private and public sector clients.

Anthropic is seeking a judicial stay that would temporarily halt the Pentagon’s restrictions while the case is reviewed.

The lawsuit introduces a new legal dimension to what is already becoming a defining policy debate over the role of artificial intelligence companies in national security systems.

A Difficult Transition for the Defense Department

Even officials within the Pentagon acknowledge that removing Anthropic’s technology will not be simple.

Defense Department Chief Technology Officer Emil Michael told CNBC that disentangling the AI systems from defense infrastructure will require time.

“You can’t just rip out a system that’s deeply embedded overnight,” he said.

President Donald Trump has announced that federal agencies will have six months to phase out Anthropic’s products, though internal Pentagon guidance suggests that exemptions may be granted if the systems are considered mission-critical and no viable alternatives exist.

That possibility reflects a growing recognition within the defense community that modern military operations increasingly rely on interconnected software platforms built around advanced artificial intelligence.

The situation surrounding Palantir and Anthropic illustrates how difficult it may be for governments to regulate technologies that evolve faster than the systems designed to oversee them.

As defense agencies navigate the transition, officials and contractors alike appear to be grappling with a central question: how to balance technological innovation with the security demands of national defense in an era increasingly defined by artificial intelligence.

About the author — Suvedita Nath is a science student with a growing interest in cybercrime and digital safety. She writes on online activity, cyber threats, and technology-driven risks. Her work focuses on clarity, accuracy, and public awareness.
