European Data Protection Authorities Issue Joint Opinion on the Digital Omnibus on AI

By Dan Cooper & Alix Bertrand on January 28, 2026

On January 20, 2026, the European Data Protection Board (“EDPB”) and the European Data Protection Supervisor (“EDPS”) (together, the “Authorities”) adopted Joint Opinion 1/2026 on the European Commission’s proposal to amend the EU AI Act (hereafter the “Proposal”, summarized in our previous blog). Overall, the Authorities acknowledge the complexity of the AI Act and agree that targeted simplifications can support legal certainty and efficient administration. However, they warn that simplification should not result in lowering the protection of fundamental rights, including data protection rights. This blog outlines some of the Authorities’ main recommendations as expressed in their Joint Opinion.

Clear boundaries for the processing of sensitive data for bias mitigation

The Proposal envisages expanding the legal basis under the AI Act to allow the processing of special categories of personal data for bias detection and correction in AI models and systems. The Authorities accept that some bias mitigation may require sensitive personal data, but insist such processing should meet a “strict necessity” threshold (not just a “necessary” threshold as proposed). They further recommend clearly circumscribing the use of this legal basis, and limiting such processing activity to cases where it would be justified in light of the risk of adverse effects arising from the processing of such data.

Maintaining registration and training obligations for certain AI providers and deployers

Although the Authorities generally support easing administrative burdens for companies subject to the AI Act, they object to the Proposal's suggestion to remove registration requirements for AI systems that providers have determined fall outside the high-risk classification (as per Article 6(3) AI Act). The Authorities flag the risks of differing interpretations and incorrect assessments, and express concerns that such a measure would reduce visibility for competent authorities or bodies over potentially high-risk AI systems, thereby undermining effective supervision and redress.

Similarly, the Authorities take issue with the proposal to replace the AI literacy obligation on providers and deployers with an obligation on the Commission and Member States to "encourage" providers and deployers to ensure AI literacy among staff and other persons dealing with the operation and use of AI systems on their behalf. In their view, this change would undermine the effectiveness of the requirement.

Reinforcing institutional coordination for EU‑level AI regulatory sandboxes

Although broadly supportive of EU-level AI regulatory sandboxes maintained by the Commission, the Authorities recommend clarifying that competent national data protection authorities should be involved in the operation of such EU-level sandboxes, along with the EDPB, which, according to the Authorities, should also be granted observer status on the European Artificial Intelligence Board. The Authorities also call for a clear distinction to be made between the AI sandboxes for EU bodies set up by the EDPS and the EU-level AI sandboxes established by the Commission's AI Office.

Clarifying rules on supervision and enforcement

With regard to supervision and enforcement mechanisms under the AI Act, the Authorities underline the need for strong cooperation among the various authorities and bodies that may be involved, including the AI Office, national market surveillance authorities, and national data protection authorities. They further highlight the need to clarify the competence and powers of these various stakeholders. The requested clarifications concern, for instance, (i) the types of general-purpose AI systems that would trigger the AI Office's exclusive competence, and (ii) whether the role of market surveillance authorities would be limited to acting as a mere administrative point of contact.

Warning against postponing high‑risk obligations

Finally, the Authorities express unease regarding the proposed postponement of certain high-risk AI system obligations. They point out that such delays would result in more high-risk AI systems remaining out of scope of the Act's high-risk requirements (per Article 111(2) of the AI Act), as these systems would have been placed on the market prior to the entry into force of the relevant provisions and thus exempted. While acknowledging implementation pressures, they invite legislators to consider:

  • maintaining the original timelines for obligations with direct rights‑protective effects, such as transparency;
  • limiting any delays in timelines to what is strictly necessary; and
  • avoiding prolonged legal uncertainty that could undermine both compliance planning and public trust.

* * *

The Covington team regularly advises the world’s top companies on their most challenging technology regulatory, compliance, and public policy issues in the EU and other major markets. Please reach out to a member of the team if you need any assistance.

Dan Cooper

Daniel Cooper heads up the firm’s growing Data Privacy and Cybersecurity practice in London, and counsels clients in the information technology, pharmaceutical research, sports and financial services industries, among others, on European and UK data protection, data retention and freedom of information laws, as well as associated information technology and e-commerce laws and regulations. Mr. Cooper also regularly counsels clients with respect to Internet-related liabilities under European and US laws. Mr. Cooper sits on the advisory boards of a number of privacy NGOs, privacy think tanks, and related bodies.

