Navigating Simplification Without Sacrificing Safeguards: Key Takeaways

As the EU begins the complex task of making the European Artificial Intelligence Act[1] (the “AI Act”) workable in real life, the European Commission’s Proposal for a Regulation amending Regulations (EU) 2024/1689 and (EU) 2018/1139 as regards the simplification of the implementation of harmonised rules on artificial intelligence (the “Proposal”) aims to smooth the path. The European Data Protection Board (“EDPB”) and the European Data Protection Supervisor (“EDPS”) agree that simplification is needed, but their new Joint Opinion 1/2026 (the “Opinion”) makes one thing clear from the outset: convenience cannot come at the cost of fundamental rights. Their analysis offers an early glimpse of the pressure points that will define AI governance in 2026.

A push for simplification – but not at the expense of rights

The Proposal seeks to reduce some of the early operational challenges arising under the AI Act. The EDPB and EDPS recognise the value in easing administrative strain but emphasise that the AI Act already contains tailored mechanisms to balance innovation with rights protection. Their concern is that several of the proposed adjustments risk weakening this balance if they are not drafted narrowly. In their view, simplification should support implementation, not dilute transparency or accountability.

Sensitive data and bias correction

Under the current AI Act, special category data may be used for bias detection and correction only in high-risk systems and only where it is strictly necessary. The Proposal would extend this to all AI systems and models and lower the threshold to “necessary” or “necessary and proportionate”. The EDPB and EDPS warn that this risks undermining a core safeguard of the GDPR, even though they acknowledge that bias in non-high-risk systems can also cause harm. They advise reinstating strict necessity, limiting the derogation to situations involving real risks to rights and freedoms, and including examples in the recitals to guide interpretation. They also stress that data protection supervisory authorities (“DPAs”) should remain responsible for supervising any sensitive data processing carried out under this provision.

Registration duties for high-risk AI systems

The AI Act currently requires providers to register all Annex III systems in the EU high-risk database, even where a system is assessed as non-high-risk under Article 6(3) AI Act. This supports transparency and gives deployers and regulators insight into how providers classify risk. The Proposal would remove this obligation for systems self-classified as non-high-risk. The EDPB and EDPS strongly oppose this change, warning that it would reduce visibility, weaken accountability and incentivise overly optimistic self-assessment, while achieving only marginal administrative savings.

EU-level AI regulatory sandboxes

EU-level sandboxes are broadly welcomed to support innovation, but the Opinion highlights several gaps that must be addressed. Unlike the rules governing national sandboxes, the Proposal does not explicitly require DPA involvement where personal data is processed. The EDPB and EDPS warn that this creates uncertainty around supervision and could limit DPA powers in practice. They call for mandatory DPA participation, clearer rules on competence in cross-border situations and an advisory role for the EDPB to ensure consistency. They also recommend granting the EDPB observer status on the European Artificial Intelligence Board and clarifying the distinction between sandboxes overseen by the AI Office and those overseen by the EDPS for Union institutions.

Supervision and enforcement

The Proposal extends the AI Office’s exclusive competence for monitoring and supervising the compliance of AI systems to include systems integrated into very large online platforms and very large online search engines (“VLOPs”/“VLOSEs”). The EDPB and EDPS support this centralisation at the EU level but warn that the cooperation obligation may not be sufficient to guarantee the ability of national competent authorities to act if the AI Office is slow or unwilling. They further emphasise that close cooperation between the AI Office and national data protection authorities is essential when AI systems pose privacy risks. With respect to the AI Office’s scope of competence, the EDPB and EDPS recommend clearly delimiting the types of general-purpose AI models that fall under its exclusive supervision and explicitly excluding the supervision of AI systems developed or used by EU institutions.

Improved cooperation between MSAs and APFRs

The EDPB and EDPS support the Proposal’s objective of improving cooperation between fundamental rights authorities (“APFRs”) and market surveillance authorities (“MSAs”). However, they raise several important reservations to ensure that the use of MSAs as intermediaries does not undermine the efficiency of the procedure. They emphasise that the role of MSAs should be strictly limited to that of an administrative point of contact, without engaging in any assessment of the necessity and proportionality of APFRs’ requests. They further stress that MSAs must forward the requested information without undue delay and that routing requests through MSAs must not affect the independence and the existing powers of DPAs.

AI Literacy

The AI Act currently imposes an obligation on providers and deployers of AI systems to ensure that their staff possess sufficient AI literacy. The Opinion recommends maintaining this obligation, potentially complemented by a new obligation on the European Commission and the Member States to encourage providers and deployers to take measures promoting sufficient AI literacy and to offer guidance on how to implement this in practice. In any event, any new obligation should complement, not replace, the existing obligation under the AI Act.

Delay of implementation timeline

The Proposal postpones the application of the high-risk AI rules owing to the lack of harmonised standards and delays in designating national competent authorities and conformity assessment bodies. As a result, the rules for Annex III high-risk AI systems would apply from no later than 2 December 2027 instead of 2 August 2026, and the rules for Annex I high-risk AI systems from 2 August 2028 instead of 2 August 2027. The EDPB and EDPS express concern that such delays may harm fundamental rights and undermine legal certainty in this fast-evolving AI landscape. They advocate for maintaining the current timeline and, if this proves unfeasible, for minimising any delay to the extent possible.

The Opinion recognises that the Proposal faces a delicate balancing act: simplifying rules without eroding trust or core rights. The Proposal is expected to change as it is debated among the European Commission, the European Parliament and the Council of the EU during the trilogue process in 2026, and we will continue to monitor developments.


[1] Regulation (EU) 2024/1689