On July 30, 2025, the Italian Data Protection Authority (“Garante”) released a statement addressing the risks of using AI to interpret medical data. In it, the Garante notes the growing trend of individuals uploading medical analyses, X-rays, and other reports to generative artificial intelligence platforms to obtain interpretations and diagnoses. It warns users of these AI services to carefully evaluate the implications of sharing health-related data with AI providers and of relying on automatically generated responses.
The Garante highlights the risks to the health data involved and the dangers inherent in using AI solutions that do not qualify as medical devices, as these systems are not subject to the regulatory checks designed to ensure their safety for medical use.
Focusing on data protection, the Garante recommends that users review AI providers’ privacy policies to understand whether their uploaded medical data will be deleted after the interpretation request or retained and used to train the AI algorithms. Furthermore, the Garante underscores the importance of qualified human oversight (e.g., by a doctor) when processing health data through AI systems. The AI Act will require human oversight for high-risk AI systems, such as those that qualify as medical devices. According to the Garante, this oversight is vital to mitigate potential health risks and must be present throughout each phase of the AI system lifecycle, from development and training to testing and validation, before these systems are placed on the market.
* * *
Covington’s Data Privacy and Cybersecurity Team regularly advises clients on the laws surrounding AI and continues to monitor developments in the field of AI.
(This blog post was written with contributions from Alberto Vogel).