Digital Omnibus Package Series: European Commission’s Proposal to Revise the EU’s AI Rules

By Covington Privacy Team on November 21, 2025

On November 19, 2025, the European Commission (“Commission”) officially presented its Digital Omnibus Package (see here and here). In our previous blog post (see here), we explained that this initiative, which represents a comprehensive update to the EU’s digital regulatory landscape, consisted of two proposed regulations: a “Digital Omnibus” that would amend, amongst other legislation, the General Data Protection Regulation (GDPR), ePrivacy Directive, NIS2 Directive and Data Act, and a “Digital Omnibus on AI” that would amend the EU AI Act.

Our previous blog post highlighted some key amendments proposed by the Digital Omnibus. This post focuses on key proposals from the Digital Omnibus on AI.

  • Shifted AI literacy obligations

In its current version, the EU AI Act imposes an obligation on all providers and deployers of AI systems to take measures to ensure “a sufficient level of AI literacy” for their staff. Acknowledging that this approach may be challenging to implement in practice, especially for smaller companies, and viewing AI literacy as a “strategic priority”, the Commission proposes to delete this obligation and instead require that Member States and the Commission encourage providers and deployers to take such measures.

  • Extended legal basis for processing sensitive data for bias mitigation

While the EU AI Act currently only permits sensitive data to be processed (under certain conditions) for bias detection and correction in relation to high-risk AI systems, the proposal extends this possibility to other actors and AI systems, as well as to AI models.

  • Simplified requirements for AI systems qualifying for the Article 6(3) derogation

Article 6(3) of the AI Act states that an AI system will not be considered high risk, even if it falls within scope of Annex III, if it “does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons.” While Article 6(4) of the existing AI Act requires providers of such systems to register themselves and these systems in an EU database of high-risk AI systems, the proposed regulation would eliminate this registration requirement.

  • Simplified rules for SMEs and SMCs

In an effort to enable a smooth transition of micro, small and medium-sized enterprises (“SMEs”) into small mid-cap enterprises (“SMCs”), the Commission proposes to expand some of the benefits granted under the EU AI Act to SMEs and thus reduce the compliance burden imposed on SMCs. For clarity, the proposal would also introduce definitions of SMEs and SMCs, which are aligned with previous Commission recommendations.

The Commission also proposes that all SMEs, and not only microenterprises as is currently the case under the EU AI Act, should benefit from a simplified way to comply with the obligation to establish a quality management system, and that both SMEs and SMCs need only implement the quality management system in a manner proportionate to the size of their organization.

  • Clarified rules on conformity assessments

The existing AI Act imposes different conformity assessment rules for AI systems that fall within scope of Annex I.A of the Act, and those that fall within scope of Annex III. The Commission proposal clarifies that, where a single AI system falls within scope of both Annex I.A and Annex III, the provider must follow the conformity assessment rules that apply by virtue of falling within scope of Annex I.A.

  • Modified rules for AI regulatory sandboxes

The proposal introduces the possibility for the Commission’s AI Office to establish an EU-level AI regulatory sandbox for certain AI systems and requires Member States to strengthen cross-border cooperation on their sandboxes.

The Commission would also be allowed to adopt implementing acts detailing arrangements “for the establishment, development, implementation, operation, governance and supervision of AI regulatory sandboxes”.

Real-world testing rules for high-risk AI systems would also be amended, inter alia to extend the testing opportunity to high-risk AI systems covered by Annex I (whereas it is currently limited to high-risk AI systems listed in Annex III).

  • Clarified Supervision and Enforcement System

The proposal includes several provisions aimed at clarifying the role of the AI Office and expanding its supervision and enforcement powers. In particular, it clarifies that the AI Office would have exclusive competence for the supervision and enforcement of Annex III AI systems that are based on general-purpose AI models, where the model and system are developed by the same provider, as well as for AI systems that constitute or are integrated into a designated very large online platform or very large online search engine within the meaning of the Digital Services Act. The Commission would also be required to undertake pre-marketing conformity assessments of any such systems that are classified as high risk and are subject to third-party conformity assessment pursuant to Article 43 of the Act. These amendments mark a significant change from the existing AI Act, which arguably gives Member State market surveillance authorities shared competence over the supervision and enforcement of such systems.

  • Updated Timelines

Importantly, the Commission proposes to amend the timeline for the application of certain provisions, in particular the obligations related to high-risk AI systems. The idea is to link the implementation timeline to the availability of harmonized EU standards or other support tools (to be confirmed by way of a Commission decision), so as to provide some flexibility. That said, in the absence of a Commission decision imposing an earlier date of application, the rules applicable to high-risk AI systems would start applying as of December 2, 2027 for systems covered by Annex III, and as of August 2, 2028 for systems covered by Annex I. Likewise, the Commission proposes to amend Article 111(2) to state that the AI Act will apply to operators of high-risk AI systems placed on the market or put into service before these dates only if, as of those dates, these systems are subject to significant changes in their design.

Another example of the updated timeline is the proposal to push back to February 2, 2027 the application of the transparency obligation in Article 50(2) of the EU AI Act, which applies to providers of AI systems (including general-purpose AI systems) generating synthetic audio, image, video or text content, to the extent their AI system has been placed on the market before August 2, 2026.

By way of reminder, the final text of the Digital Omnibus on AI is likely to evolve during negotiations with the European Parliament and the Council of the EU (“Council”). The Covington team will continue to monitor this proposal and report back on any notable amendments.

**

The Covington team regularly advises the world’s top companies on their most challenging technology regulatory, compliance, and public policy issues in the EU and other major markets. Please reach out to a member of the team if you need any assistance.
