During its session of September 17, 2025, the Italian Senate approved Bill No. 1146-B, entitled “Provisions and delegated powers to the Government regarding artificial intelligence.” The text was published in the Italian Official Journal as Law No. 132 of 2025 and will become applicable on October 10, 2025.
The new law (the “Italian AI Law”), comprising 28 articles organized into six chapters, outlines fundamental principles, areas of application, national strategies, protection of rights, responsibilities, and international cooperation in the field of artificial intelligence (“AI”). It does not introduce new obligations with respect to Regulation (EU) 2024/1689 (“AI Act”), but rather complementary provisions based on the matters that the AI Act delegates to Member States, tailored to the specificities of the national context.
The Senate’s final approval upheld the amendments introduced by the Chamber of Deputies in June 2025, which had eased certain technical constraints and redefined control mechanisms, while introducing clarifications and additions aimed at strengthening consistency with the European framework and expanding democratic and social guarantees. The conclusion of the legislative process therefore hands the country its first national regulatory framework on artificial intelligence: an important framework, but one that, as we will see, still needs to be filled in.
Content of the Italian AI Law
The Italian AI Law is inspired by the guiding principles of the AI Act, namely the centrality of fundamental rights, the proportionality of rules, safety, and transparency, but differs in its approach. While the European regulation adopts a cross-cutting, risk-based approach applicable to all sectors (with a few sectoral exceptions, for example for certain high-risk AI systems in the financial and insurance sectors), the Italian rules focus on specific areas considered to be of particular relevance:
- Healthcare: Article 7 expressly prohibits the use of AI systems to select or condition access to healthcare services, requires that patients be informed about the use of AI technologies, and mandates continuous performance measurement to minimize the risk of errors. At the same time, it reiterates that artificial intelligence must remain merely a support tool for prevention, diagnosis, and treatment, leaving the doctor with ultimate responsibility for clinical decisions and the burden of monitoring and verifying the outputs of the systems.
- Scientific research: Article 8 classifies the development of AI systems in the healthcare sector by non-profit private entities or IRCCS (Scientific Institutes for Research, Hospitalization and Healthcare) – as well as by public and private actors in partnership with such entities – as being of significant public interest pursuant to Article 9(2)(g) of the GDPR. This allows personal data to be processed without the consent of the data subjects, although the lawfulness of the processing remains subject to prior notification to the Data Protection Authority. Article 8 also allows, subject to notification to the data subjects, the processing of personal data – including sensitive data – for the purposes of anonymization, pseudonymization, or the generation of synthetic data for scientific research activities, also in the field of sports, in compliance with the general principles of the law and the economic rights of the organizers of competitive activities.
- Work: Article 11 requires employers to adequately inform workers about the use of AI systems. In doing so, the provision is in line with the rules already defined by Italian legislation on the remote monitoring of workers, but establishes a broader information obligation than that provided for in Article 26(7) of the AI Act, which applies only to high-risk AI systems. In addition, Article 12 provides for the establishment of a National Observatory tasked with monitoring the employment impacts of AI, developing regulatory strategies, and identifying the sectors most affected by digital transformation.
- Learned professions: Article 13 prohibits professionals from entrusting their work entirely to an AI system and requires them to provide clear and comprehensible information on the use of this technology. However, significant questions remain regarding the distinction between purely auxiliary and prevalent use, as well as the consequences of violations, which are already emerging in case law: a recent ruling by the Court of Turin, for example, classified the filing of an unfounded appeal drafted with the use of AI, without control and review by a lawyer, as “lite temeraria”, i.e., reckless litigation.
- Justice and public administration: Articles 14 and 15 establish that AI can only operate as a support tool and cannot replace the assessment and decision-making of human operators, while at the same time they promote training programs aimed at developing the digital skills necessary for the responsible use of technologies.
- Copyright: Article 25 explicitly extends copyright protection to works created with the aid of AI systems, provided that they are the result of the ‘intellectual work’ of the human author. This, however, leaves unresolved the criteria for determining when human input prevails over that of the AI system. The provision also allows the reproduction and extraction (text and data mining) of lawfully accessible materials for the purpose of training AI models and systems, in line with Articles 70-ter and 70-quater of the Italian Copyright Law.
- Procedural and criminal law: The legislator assigns exclusive jurisdiction to the courts for disputes relating to the functioning of AI systems (amendment to Article 9 of the Code of Civil Procedure), while introducing into the Criminal Code specific aggravating circumstances relating to the use of AI in the commission of crimes and creating a new criminal offense punishing the unlawful dissemination of deepfakes.
As regards the competent authorities in the field of AI, Italy has designated:
- the Italian Digital Agency (AgID), as the notifying authority with the task of accrediting and monitoring the bodies responsible for verifying the compliance of AI systems; and
- the National Cybersecurity Agency (ACN), as the supervisory authority, responsible for enforcing AI regulations and imposing sanctions in the event of violations, as well as the lead authority for the use of AI for cybersecurity.
Within this governance framework, the Bank of Italy, CONSOB, and IVASS retain a sectoral supervisory role for the credit, financial, and insurance sectors. The powers of the Italian Data Protection Authority and those of AGCOM as Digital Services Coordinator under the Digital Services Act also remain unaffected. However, the designation of AgID and ACN, both government authorities, appears to overlook the independence requirement already highlighted by the European Commission in its November 2024 opinion on the first draft of the AI Bill (C(2024) 7814). Citing that independence requirement and the interrelationship between AI and data protection, including automated decision-making, the Italian Data Protection Authority had already written to Parliament and the Government in the spring of 2024 to urge them to reconsider this choice.
Finally, Article 19 of the Italian AI Law provides for the establishment of a Coordination Committee, tasked with supporting the Government in the development and implementation of the AI strategy, coordinating public and private initiatives, and ensuring an integrated approach between ministries, research institutions, and industry stakeholders.
A first in the EU, but with a still-incomplete framework
Although Italy is the first European country to adopt a national law aligned with the AI Act, the overall regulatory framework remains far from complete. Numerous provisions of the Italian AI Law remain suspended, pending the issuance of decrees, regulations, or guidelines that will define how they are to be implemented:
- In the healthcare sector, the Ministry of Health will have to issue two decrees: one regulating trials of projects based on AI and machine learning, with the creation of “testing grounds,” and the other defining the conditions for the use of AI systems in the healthcare sector. At the same time, AGENAS may draw up guidelines on procedures for anonymizing personal data and creating synthetic data for scientific research purposes.
- In the labor sector, the establishment of the National AI Observatory remains subject to a decree by the Ministry of Labor, which will have to define its functions, tools, and methods of interaction with social stakeholders.
- Article 16 entrusts the Government with the adoption of legislative decrees to establish criteria on the data and algorithms used to train AI systems, a central and sensitive issue that will require the involvement of ministries, independent authorities, and parliamentary procedures. The issue is complex, as the training of AI systems has various legal implications, including in relation to the AI Act, intellectual property, privacy, and sector-specific regulations. Therefore, future interventions aimed at regulating this area will have to contend with a complex regulatory framework, taking care not to upset the balance.
- Article 19 delegates to the Government the preparation of the National Strategy for AI, intended to coordinate public and private initiatives between the State, businesses, universities, and research centers. We previously covered the Italian Strategy for AI in this article (available in Italian).
- Finally, Article 24 grants the Government a broad set of powers to adopt legislative decrees in various areas, including the regulation of AI use in the financial and insurance sectors, the adaptation of public administration, criteria for the adoption of AI systems, and the regulation of sanctions.
Conclusions
While the Italian AI Law is formally consistent with the AI Act, as it does not introduce any new obligations or divergent definitions, it is not by itself sufficient to ensure full alignment of the national framework with the European regulation. Effective harmonization will depend on the adoption of the implementing decrees and guidelines, a complex process that is likely to take considerable time.
In the meantime, several provisions of the AI Act are already applicable, and others will become operational in 2026, creating inevitable uncertainty for public and private operators, who must navigate a dual regulatory environment. Ultimately, in the coming months, businesses and operators will have to focus primarily on the AI Act, whose deadlines and obligations are already certain, while closely monitoring the evolution of Italian legislation to understand when and how these additional rules will become operational.
