Who Controls the Data? DeepSeek’s Global Trust Problem

Italy Presses DeepSeek to Tame AI Hallucinations Under Tough Rules

The420 Correspondent

When Italy’s competition watchdog scrutinized the answers produced by DeepSeek’s artificial intelligence chatbot, the concern was not merely about errors. It was about something more elusive: hallucinations—confident, fluent responses that are factually wrong, misleading or entirely fabricated.

Now, in a move that underscores Italy’s emerging role as one of Europe’s most assertive AI regulators, DeepSeek has agreed to develop its first chatbot version tailored exclusively to a single country. The Italy-specific model, the company says, will be adapted to national legal and regulatory requirements—provided it can satisfy the authorities overseeing its return.
The announcement comes amid a wider push by AGCM, Italy’s antitrust and consumer protection watchdog, to impose stricter standards on how artificial intelligence systems disclose risks, handle data and present information to users.

A Watchdog Tightens Its Net Around AI

Italy has emerged as one of the European Union’s most proactive—and punitive—jurisdictions when it comes to regulating digital platforms. The AGCM has repeatedly targeted global technology companies, including Meta and Google, with investigations that have often ended in fines or compliance mandates.

Artificial intelligence has become the next frontier. Unlike earlier cases involving advertising dominance or piracy-linked streaming services, AI oversight poses subtler challenges. Chatbots do not simply index the web like traditional search engines; they synthesize information from vast data sources, often producing authoritative-sounding responses that blur the line between retrieval and invention.

In its engagement with DeepSeek, the AGCM acknowledged a basic reality of the technology: hallucinations cannot be eliminated entirely. But the regulator insisted that their risks must be clearly disclosed and actively mitigated. DeepSeek, for its part, committed to making its warnings about hallucinations “more transparent, intelligible and immediate”—language that mirrors the watchdog’s own formulation.

A National Chatbot, With Conditions

DeepSeek’s proposal is unusual in its scope. Rather than offering a single global product, the company has agreed to create a chatbot calibrated specifically to Italian law, norms and regulatory expectations. That includes changes to user interfaces, terms and conditions, and internal governance structures.

To formalize these commitments, DeepSeek will submit a detailed report to the AGCM. Failure to comply could trigger fines of up to €10 million, or about $11.7 million, according to Italian authorities.

Industry observers note that not all obligations carry the same weight. Fang Liang, a spokesperson for Concordia AI, an independent research group that tracks global AI governance, said interface changes and legal disclosures are relatively straightforward. Technical promises—such as meaningfully reducing hallucinations—are far harder to verify or guarantee.

“How you measure fewer hallucinations is the unresolved question,” Liang said, pointing out that even leading models struggle with the same phenomenon.

Hallucinations, by Design

The problem is not unique to DeepSeek. Researchers at companies like OpenAI have publicly acknowledged that current training methods often reward confident answers, even when a model lacks certainty. Admitting “I don’t know” remains a difficult behavior to instill in systems optimized for fluency.

Italian regulators appear less interested in eliminating hallucinations outright than in forcing companies to confront them openly. The AGCM has emphasized that users must immediately understand when they are interacting with probabilistic outputs rather than verified facts—especially when chatbots are used for research, legal queries or news-related searches.

That stance reflects a deeper concern: as AI tools increasingly resemble search engines, they may fall under additional layers of European law.

The Question of What Counts as a Search Engine

DeepSeek’s future in Italy may hinge on classification as much as compliance. Under the European Union’s Digital Services Act, search engines face heightened obligations around transparency, risk mitigation and systemic oversight.

Traditionally, that label applied to companies like Google or Yahoo. AI chatbots complicate the definition. By scraping, summarizing and recombining information from across the web, they can effectively function as search engines—without presenting sources in conventional ways.

The stakes are significant. DeepSeek’s chatbot was removed from Italian app stores in January last year after data-handling concerns surfaced. A return now depends not only on satisfying the AGCM’s demands on hallucinations, but also on whether regulators ultimately decide that the service falls within the DSA’s search-engine framework.

For Italy, the case offers a test run of how national regulators might shape global AI products. For DeepSeek—and other AI developers watching closely—it is a reminder that, in Europe at least, innovation is increasingly negotiated line by line with the law.

About the author — Suvedita Nath is a science student with a growing interest in cybercrime and digital safety. She writes on online activity, cyber threats, and technology-driven risks. Her work focuses on clarity, accuracy, and public awareness.