On July 10, 2025, the AI Office published the final version of the Code of Practice for General-Purpose AI Models (the “Code”).  The Code is a voluntary compliance tool designed to help companies comply with the AI Act obligations for providers of general-purpose AI (“GPAI”) models.  The AI Office and the AI Board will now assess the Code and may approve it via an adequacy decision.  Once approved, the European Commission is expected to formally adopt the Code via an implementing act.

The Code details how providers of GPAI models may comply with their obligations under the AI Act.  It comprises three chapters, each covering different aspects of AI Act compliance: (i) transparency, (ii) copyright, and (iii) safety and security.  The first two chapters apply to all providers of GPAI models, while the third addresses obligations for providers of GPAI models with systemic risk.  By adhering to the Code, signatories agree to implement their AI practices in accordance with the commitments contained in the Code.

I. Transparency

This chapter relates to the documentation that providers of GPAI models must maintain to comply with Article 53(1), points (a) and (b), of the AI Act.  It includes three measures that, at a high level, require:

  1. Drawing up and keeping up-to-date model documentation to provide to the AI Office on request, and to make available to downstream providers (i.e., providers of AI systems that build upon the GPAI model).  Signatories may, but are not required to, provide this information via the Model Documentation Form that is included in the Code;
  2. Providing relevant information to downstream providers and to the AI Office if requested.  This commitment involves publicly sharing the GPAI model provider’s contact details so the AI Office and downstream providers can request access to information; and
  3. Ensuring the quality, integrity, and security of information. Signatories must ensure that the documented information is “controlled for quality and integrity, retained as evidence of compliance with obligations in the AI Act, and protected from unintended alterations.”

II. Copyright

This chapter addresses compliance with the requirements under Article 53(1)(c) of the AI Act, which requires providers of GPAI models to establish a policy to comply with EU copyright law, particularly in relation to copyright holders’ rights to reserve the use of their works.  However, compliance with this chapter of the Code does not guarantee conformity with EU copyright law.  In this chapter, signatories of the Code commit to five measures:

  1. Draw up, keep up-to-date and implement a copyright policy. This policy must comply with EU law on copyright and related rights for all GPAI models that they place on the EU market;
  2. Reproduce and extract only lawfully accessible copyright-protected content when crawling the World Wide Web. This involves not circumventing effective “technological measures as defined in Article 6(3) of Directive 2001/29/EC that are designed to prevent or restrict unauthorized acts in respect of works and other protected subject matter, in particular by respecting any technological denial or restriction of access imposed by subscription models or paywalls.”  Signatories also commit to “exclude from their web-crawling websites that make available to the public content and which are, at the time of web-crawling, recognised as persistently and repeatedly infringing copyright and related rights on a commercial scale by courts or public authorities in the European Union and the European Economic Area”;
  3. Identify and comply with rights reservations when crawling the World Wide Web. Signatories that use web crawlers (i.e., web scraping tools) must only “employ web-crawlers that read and follow instructions expressed in accordance with the Robot Exclusion Protocol (robots.txt) […] and any subsequent version of this Protocol for which the IETF demonstrates that it is technically feasible and implementable by AI providers and content providers, including rightsholders.” Signatories also commit to “identify and comply with other appropriate machine-readable protocols expressing rights reservations pursuant to Article 4(3) of Directive (EU) 2019/790”.  This commitment is without prejudice to the right of rightsholders to expressly reserve the use of works and other protected subject matter for the purposes of text and data mining pursuant to Article 4(3) of Directive (EU) 2019/790 in any appropriate manner.  Signatories also commit to providing information on the web crawlers used, and the measures taken to respect rights reservations, to affected copyright holders;
  4. Mitigate the risk of copyright-infringing outputs. This requires signatories to “implement appropriate and proportionate technical safeguards to prevent their models from generating outputs that reproduce training content protected by Union law on copyright and related rights in an infringing manner”, and to “prohibit copyright-infringing uses of a model in their acceptable use policy, terms and conditions, or other equivalent documents”; and
  5. Designate a point of contact and enable the lodging of complaints. Signatories must designate this point of contact “for electronic communication with affected rightsholders and provide easily accessible information about it.” They must also implement a complaint-handling mechanism through which affected rightsholders can “submit, by electronic means, sufficiently precise and adequately substantiated complaints concerning the non-compliance of Signatories with their commitments pursuant to this Chapter and provide easily accessible information about it.”
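For illustration, the Robot Exclusion Protocol referenced in the third commitment can be honored programmatically.  The sketch below, using Python’s standard-library `urllib.robotparser`, shows how a crawler might check a rights reservation expressed in a website’s robots.txt before fetching content; the crawler token “ExampleAIBot” and the sample robots.txt are hypothetical and not drawn from the Code.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt a rightsholder might publish: it excludes a
# specific AI training crawler site-wide while allowing all others.
ROBOTS_TXT = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

def may_crawl(robots_txt: str, user_agent: str, url: str) -> bool:
    """Return True if the given crawler may fetch the URL under the
    Robot Exclusion Protocol rules expressed in robots_txt."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)

# The named AI crawler is excluded; other crawlers are not.
print(may_crawl(ROBOTS_TXT, "ExampleAIBot", "https://example.com/article"))  # False
print(may_crawl(ROBOTS_TXT, "OtherBot", "https://example.com/article"))      # True
```

A Code-compliant crawler would perform a check like this against the live robots.txt of each site before reproducing or extracting its content.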

Notably, the prior draft of the Code had contained a commitment for signatories to obtain adequate information about training content that the signatory had not directly obtained by web-crawling.  That commitment has been removed from the final Code.

III. Safety and Security

This chapter addresses the obligations applicable to providers of GPAI models with systemic risk.  These are GPAI models that match or exceed the capabilities of the most advanced GPAI models, thus carrying greater risks.  At a high level, signatories of the Code that are providers of GPAI models with systemic risk commit to:

  1. Creating a “state-of-the-art Safety and Security Framework”.  This framework includes many of the commitments discussed below;
  2. Identifying the systemic risks stemming from the model;
  3. Analyzing each identified systemic risk;
  4. Specifying systemic risk acceptance criteria and determining whether the systemic risks stemming from the model are acceptable; and if the risks are not acceptable, deciding whether to continue with the development, market placement, or use of the GPAI model;
  5. Implementing appropriate safety mitigations to ensure the acceptability of systemic risks;
  6. Implementing adequate cybersecurity measures for the models and the physical infrastructure they rely on, to ensure the acceptability of systemic risks arising from unauthorized access, unauthorized release, or model theft;
  7. Creating a “Safety and Security Model Report” to inform the AI Office about the GPAI model and its systemic risk assessment and mitigation processes and measures;
  8. Defining clear responsibilities for managing systemic risks, allocating appropriate resources to individuals with responsibilities for managing systemic risk, and promoting a “healthy risk culture”;
  9. Implementing appropriate measures to ensure that information about serious incidents throughout the model lifecycle, along with adopted mitigating measures, is reported to the competent authorities; and
  10. Documenting the implementation of the commitments under this chapter of the Code, and publishing a summarized version of the Safety and Security Framework and Model Reports as necessary.

This chapter of the Code has been shortened and simplified significantly from the last draft.  The last draft had included a number of additional commitments (e.g., for non-SME signatories to share best practices for state-of-the-art model evaluation and systemic risk assessment and mitigation; and for signatories to “advance research on and implement more stringent security mitigations” in line with the RAND SL4 security goal) which have not been included in the final version.

Next Steps

EU Member States and the European Commission will now assess the adequacy of the Code.  If deemed appropriate, they will approve it via an adequacy decision.  Subsequently, the European Commission may adopt the Code via an implementing act, granting it general validity.  This means that adhering to the Code will constitute a means of demonstrating compliance with the AI Act, though without providing a presumption of conformity (i.e., providers may also demonstrate compliance with the AI Act through other means).  Furthermore, the European Commission notes in its Q&As that, until August 2, 2026, for signatories that “do not fully implement all commitments immediately after signing the Code, the AI Office will not consider them to have broken their commitments under the Code and will not reproach them for violating the AI Act.  Instead, in such cases, the AI Office will consider them to act in good faith and will be ready to collaborate to find ways to ensure full compliance.”

The Code is complemented by the European Commission’s guidelines for providers of GPAI models, which the Commission also published this month.

Finally, the European Commission has also published its template for the summary of training data, which GPAI model providers must draft and make publicly available in accordance with Article 53(1)(d) of the AI Act.  The template is complementary to the Code and will likely be adopted alongside the Commission’s guidelines on GPAI models.

*          *          *

The Covington team continues to monitor regulatory developments on AI, and we regularly advise the world’s top technology companies on their most challenging regulatory and compliance issues in the EU and other major markets.  If you have questions about AI regulation, or other tech regulatory matters, we are happy to assist with any queries.

This blog post was written with the contribution of Alberto Vogel.

Stacy Young

Stacy Young is a trainee solicitor who attended the University of Law.

Dumitha Gunawardene

Dumitha Gunawardene is an associate in the Commercial Litigation Practice Group. His practice covers a broad range of complex commercial and contractual disputes and international commercial arbitrations. Dumitha has represented clients in the English High Court as well as in arbitrations under ICC, LCIA and DIAC Rules.