
Italian Antitrust Investigates DeepSeek for Lack of Transparency on AI Risks
The Chinese startup is under scrutiny for failing to properly inform users that its chatbot may generate incorrect or misleading content. Doubts also surround its handling of personal data.
Isabella V, 17 June 2025

The Italian Competition Authority has launched an investigation into DeepSeek for failing to adequately warn users that its chatbot may generate incorrect information (“hallucinations”). In February, the Italian Data Protection Authority had already blocked the service over shortcomings in its privacy policy.

Key points:

  • The Italian Competition Authority (AGCM) accuses DeepSeek of not providing “sufficiently clear, immediate and comprehensible” warnings about the risk of hallucinations.
  • Hallucinations occur when the system generates inaccurate, misleading or invented content.
  • In February, the Italian Data Protection Authority had ordered the chatbot to be blocked due to critical issues in its privacy policy.
  • The investigation comes amid growing European scrutiny of AI models, with regulators also active in France, Belgium, Ireland and the Netherlands.


The Italian Competition Authority (AGCM) has officially opened an investigation into DeepSeek, a Chinese startup active in the AI sector, suspected of failing to give users adequate warnings about the possibility that its system may generate incorrect statements. These are the so-called “hallucinations”: errors or fabrications that the model may present as facts, in the absence of evidence or verifiable sources. In its press release, the AGCM noted that the warnings provided by DeepSeek are not “sufficiently clear, immediate and comprehensible”, a requirement it considers essential for consumer protection.

This phase of the investigation follows a decision taken last February by the Italian Data Protection Authority, which had ordered the blocking of the chatbot service over serious gaps in its privacy policy: among them, a lack of clarity about the type of data collected, the purposes of collection, and the fact that the data are stored on servers located in China. DeepSeek had also claimed that it was not subject to European legislation, a statement that further aggravated its position with the Authority.

Supervision of AI services is also intensifying in other European Union countries, such as France, Belgium, Ireland and the Netherlands, in particular to counter risks related to privacy, disinformation and opaque algorithms. In parallel, academic analyses are emerging, such as a recent study showing that DeepSeek models omit sensitive content from their output while retaining it in their internal reasoning chain. This suggests a form of semantic censorship that serves moderation purposes but raises questions about the transparency and completeness of the information returned to users.

For the moment, the AGCM has not adopted any measures or sanctions, and DeepSeek has not issued an official response to the opening of the investigation.

The proceedings opened by the AGCM mark yet another step in Europe's regulatory path on AI, which aims to guarantee citizens information that is accurate, transparent and compliant with data protection rights.