The European Space Agency launched the James Webb Space Telescope on Christmas Day 2021 from its facility in French Guiana. A collaboration between NASA, CSA and ESA, the JWST’s launch could not have gone better – a perfect ballet of rocketry, automation and cutting-edge science. The James Webb continues the trend, started by the beloved Hubble Space Telescope, of space telescopes producing awe-inspiring images of the Universe.
But when Hubble launched, it had a problem…
A microscopic error in its primary mirror left the telescope short-sighted. The most ambitious orbital observatory in history to that point returned blurred images. Only after NASA undertook a risky shuttle mission to install corrective lenses did Hubble deliver the breathtaking clarity it had promised all along.
The EU’s digital rulebook is having its own Hubble moment.
Over the last decade, the EU has launched an unprecedented constellation of laws: GDPR, the AI Act, the Data Act, NIS2, the Cyber Resilience Act, DORA, DSA, DMA, eIDAS 2.0 and more. Together – under the ‘Digital Decade’ banner – they aim to form a powerful framework to protect fundamental rights, promote trustworthy technology and level the playing field. But in practice, businesses and even international governments are often looking at Europe through blurred regulatory optics: overlapping scopes, inconsistent definitions, parallel reporting channels, fragmented enforcement and difficult interfaces between regimes.
The Commission’s proposed Digital Omnibus is being positioned as the corrective lens.
Unveiled on 19 November 2025, following a call for evidence in September, the proposed package (in reality: two interlinked instruments) aims to “simplify, clarify and improve” the existing acquis rather than add a new layer. It focuses on data, privacy and AI (in one track) and targeted amendments to the AI Act (in another), with parallel workstreams on cybersecurity simplification. It also aims to introduce some harmonisation across the regulatory landscape, including a proposal for a single reporting point for all incidents, following the “report once, share many” principle.
The political question is obvious: will this be the mission that sharpens the image – making it easier to build, buy and use digital and AI services in Europe – or will it distort the optics further?
Below we walk through the key regimes in play – AI Act and AI rules; Data Act and data-sharing; GDPR and privacy; NIS2/CRA and cyber – setting out what the law currently does and what the Digital Omnibus proposals change.
- AI Act and AI-specific rules: from architecture to alignment
Where we are now
The AI Act, in force since August 2024 with obligations gradually being brought into application, is built on a risk-based model:
- prohibited practices in narrow, high-risk areas;
- “high-risk” AI systems facing stringent ex ante duties (risk management, data governance, documentation, conformity assessment, post-market monitoring, registration); and
- transparency and limited obligations for other systems, with specific rules for general-purpose AI (“GPAI”) and systemic-risk models.
The model is ambitious but complex. Providers face simultaneous expectations under the AI Act, GDPR, product-safety rules (including the upcoming CRA), financial or health regulations, the DSA (for systemic-risk platforms) and cyber regimes. Questions and criticisms – including from international governments prioritising their own AI initiatives, such as the US – have surfaced about overlapping assessment processes, divergent national enforcement and the sheer administrative load, particularly for fast-iterating AI teams.
What the Digital Omnibus proposes to do
The proposals introduce a package of targeted AI Act amendments, not a rewrite, including:
- AI literacy would no longer be an active obligation on providers and deployers. Instead, under the proposals, the Commission and Member States would be responsible for encouraging organisations to provide a sufficient level of AI literacy, effectively reducing the legal incentive for mandatory upskilling across all AI tools. Deployers of high-risk AI systems would, of course, remain obliged to assign human oversight to staff with the necessary training and competence.
- A narrow exemption from EU database registration is proposed for AI systems that fall under Annex III (high-risk AI systems) but do not pose significant risk (i.e. are used only for internal, ancillary or procedural purposes), trimming back what industry has labelled “paperwork without clear benefit”.
- Proportionality adjustments would expand reliefs for SMEs and “small mid-caps”, recognising that compliance infrastructures that are trivial for hyperscalers can be existential for growth-stage players.
- For systems already lawfully placed on the market, the applicability of the rules for high-risk AI systems could be delayed by up to 16 months for Annex III systems and 24 months for Annex I systems. The rationale is that delays to the availability of standards, specifications and guidance have created challenges for operators looking to comply with the rules by the current deadline of 2 August 2026. Content-labelling requirements for AI-generated content would also be phased in, so that those generating synthetic audio, image, video or text content would need to comply with Article 50(2) by 2 February 2027.
- The Commission would publish guidance on post-market monitoring plans for high-risk AI systems rather than an implementing act, giving organisations more flexibility to implement their own tailored post-market monitoring systems.
- Enforcement for some of the most sensitive AI systems would be centralised through an enhanced EU AI Office, with stronger coordination and powers over AI systems built on a general-purpose AI model developed by the same provider, and over AI systems provided by very large online platforms or search engines. The Commission would also have the power to carry out pre-market conformity assessments and testing of these AI systems before they could be launched in Europe. This is intended to reduce fragmentation and avoid 27 different readings of the same high-risk or GPAI obligations.
If implemented with precision, this “optical realignment” could make the AI Act more proportionate and streamlined for businesses and governments seeking to innovate within the EU, without abandoning the Act’s core commitments to safety and fundamental rights. If drawn too broadly, or seen as a late-stage concession to large players, it risks being read as a retreat – and could undermine confidence in the stability of the regime.
On a final note, and linked to the overlap with the GDPR, the simplification proposals provide for an exemption for the residual processing of special categories of personal data for the development and operation of an AI system or model, subject to certain conditions, including appropriate organisational and technical measures.
- Data Act and data-sharing: merging lenses for a single view
Where we are now
The Data Act—together with the Open Data Directive, Data Governance Act (DGA) and sectoral reuse rules—was designed to unlock more value from data:
- giving users rights to access and share data generated by their connected products and services;
- mandating fairness and transparency in certain B2B data-sharing contracts;
- setting rules for data-sharing with public sector bodies in exceptional need; and
- imposing switching and interoperability duties for data processing services to reduce vendor lock-in.
These instruments collectively push towards data portability and reuse, but their cumulative effect is intricate. Providers face overlapping transparency and access obligations, uncertainty around trade secrets, different thresholds for public-sector reuse, and complex cloud switching provisions that interact uneasily with cybersecurity and confidentiality duties.
What the Digital Omnibus proposes to do
The Digital Omnibus proposals attempt to introduce some structural changes to the interplay with other digital laws but provide limited substantive changes to existing obligations.
Indicative elements include:
- Consolidating key public-sector data access and reuse rules, effectively aligning parts of the Open Data Directive and DGA with the Data Act framework to create a more coherent, single architecture.
- Recalibrating cloud switching by introducing narrow exemptions and tailored obligations for customizable cloud services and for SMEs/small mid-caps, together with model contractual terms and standard clauses for cloud contracts (also published on 19 November 2025).
- Clarifying existing safeguards, and potentially tightening restrictions, on data transfers to third countries, particularly where data-sharing obligations could expose EU data to third countries with weaker protections. This includes protections for trade secrets, allowing data holders to refuse sharing under the mandatory IoT data-sharing provisions where there is a substantial risk of unlawful disclosure to third countries.
- Shifting from a mandatory to a voluntary scheme for data intermediation services and abolishing certain obligations such as the requirement to offer such services through a separate legal person.
While some of these changes may be of practical benefit – fewer overlapping instruments to interpret, clearer conditions for lawful reuse of public-sector data, more predictable portability obligations and some alignment between openness and confidentiality – many of the uncertainties and drafting ambiguities of the original text remain, and some changes are likely to raise further questions of interpretation. There is also a risk that, in re-balancing towards data access and industrial policy, the package shifts burdens disproportionately onto public entities or introduces carve-outs that only sophisticated players can navigate.
- GDPR and data protection: sharpening without slicing through the mirror
Where we are now
The GDPR is still the cornerstone of EU data protection: strict conditions for lawful processing, strong rights for individuals, a high bar for processing special categories of data, and serious penalties. It is supplemented by national data protection laws, guidance from the European Data Protection Board and local data protection supervisory authorities, and a growing body of case law.
The GDPR regime is demanding and firmly grounded in the protection of individuals, establishing strong societal safeguards. For many global businesses, achieving “GDPR-grade” compliance has become the baseline standard. However, how the GDPR’s core principles apply alongside other EU legislative requirements, in particular those of the ePrivacy Directive, is not yet fully clarified and can present serious challenges.
The proposed measures under the Digital Omnibus aim to provide greater clarity and promote a more practical interpretation of certain GDPR requirements, while fostering increased harmonization in their application across EU jurisdictions.
What the Digital Omnibus proposes to do
Here the corrective-lens metaphor becomes more delicate.
One strand of the Digital Omnibus proposals aims to streamline compliance, reduce duplication and focus regulatory attention on the most significant requirements – but some of the proposals have triggered strong pushback from privacy NGOs and commentators. Some changes that appeared in the initial leaked version have not made their way into the final text, including a narrowing of the scope of special category data and sandboxes for testing the effects of specific technological solutions on processing. Compared with the earlier version, the final proposal also broadens the possibilities for scientific research, provides for more Commission guidance on matters such as pseudonymisation and re-identification, and clarifies the position on cookie consent.
Key proposals include:
- Adjusting definitions and core concepts (for example, around pseudonymisation techniques rendering data non-personal, a limited derogation for biometric processing for certain types of identity verification, and the processing of sensitive data generally in the AI context – i.e., special category data under Article 9 GDPR) to create more room for compatible reuse.
Building on the CJEU ruling in Case C-413/23 P (EDPS v SRB), the proposed changes introduce a more subjective and restrictive definition of personal data: the test would be whether a specific stakeholder can reasonably re-identify an individual, without considering the capabilities of others in the processing chain. This shift could enable broader data reuse without triggering GDPR obligations and may even exclude pseudonymous data from the GDPR’s scope.
- More flexibility for AI.
- A broad set of AI-related exceptions – The proposal introduces GDPR exceptions for AI development and operations under the ‘legitimate interest’ basis, subject to safeguards such as data minimization, transparency, and a right to object. While this clarification is long overdue in order to create legal certainty for the use and training of AI, critics caution that although these safeguards may apply during training, the term ‘operations’ could encompass any personal data processing, making it difficult to apply legitimate interest consistently.
- The proposal also creates a limited exemption for sensitive data inadvertently present in AI datasets, allowing retention under protective measures when removal would require disproportionate effort.
- Rethinking rules on tracking and terminal equipment to address “consent fatigue” – Where subscribers or users of terminal equipment are natural persons, the GDPR alone would apply instead of the ePrivacy Directive (and the respective national implementing laws). The consent requirement remains, but additional exceptions are introduced alongside the well-known exceptions under Article 5(3) ePrivacy Directive (i.e., access to or storing of information necessary for telecommunication services or for the provision of a service requested by the subscriber/user). The proposal would allow browsers or operating systems to transmit machine-readable signals for cookie preferences automatically, and websites would normally be obliged to respect these signals and stop showing consent banners. The proposal thus promotes universal settings-based mechanisms for cookie management and seeks to create more consistent conditions for the processing of personal data collected via cookies and similar technologies (see the illustrative sketch after this list). News and media organizations are explicitly exempted from this automated enforcement to support revenue models based on targeted advertising, while other sectors must comply with automated consent signals once standards are in place. However, details – in particular the relationship between the new provision and Articles 6 and 9 GDPR – are not fully clear, and the recitals do not provide sufficient context either.
- A new EU breach reporting model – A single EU breach reporting portal and common template would address the duplicative reporting and administrative burden resulting from overlapping obligations for organizations under the GDPR, NIS2, DORA and other frameworks. Based on a “report once, share many” principle, the centralized portal, operated by ENISA, would use a harmonised incident reporting form. The threshold for notifying data protection authorities would be raised to cover only breaches posing a high risk to individuals (with a list of examples to be provided by the EDPB), and the reporting deadline would be extended from 72 to 96 hours.
- Clarifying that automated individual decisions can be necessary for entering into, or the performance of, a contract regardless of whether the decision could be taken otherwise than by solely automated means (Article 22(2)(a) GDPR). This means controllers could rely on the contractual ground more broadly, even when automation is not indispensable.
- Extending the exemption allowing controllers to reject data subject access requests that are excessive or manifestly unfounded, so that it now specifically covers requests abused for purposes other than the protection of personal data (potentially, for example, where access rights are used for civil litigation). While the definition of ‘abuse’ will require interpretation, this could provide relief particularly for controllers dealing with comprehensive Article 15 GDPR data subject access requests in contentious employment situations. Critics argue this could significantly reduce individuals’ ability to exercise their rights in practice; however, the burden of proving that a request is excessive or unfounded remains with the data controller (e.g., the employer), which means the practical impact of this exemption may not be as broad as some controllers would have wished.
- A new exemption from the GDPR’s transparency obligations under Article 13 – The obligation to provide a privacy notice would be removed where there is a clear relationship between the parties, the activities are not data-intensive and there are reasonable grounds to believe the individual already has the information. However, this exemption would not apply if the data is shared with other recipients, transferred to a third country, used for automated decision-making, or processed in a way likely to pose a high risk to the data subject’s rights. Processing for scientific research purposes also benefits from this exemption, with fewer additional caveats.
- A harmonization of Data Protection Impact Assessment (DPIA) requirements within the EU – EU-wide lists of processing requiring, and not requiring, a DPIA would replace national lists, along with a common DPIA template and methodology. These lists would be prepared by the European Data Protection Board (EDPB), reducing fragmentation across Member States.
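By way of illustration only, the sketch below shows the kind of logic a website might apply when honouring an automated, machine-readable consent signal. The header name, its values and the news-media flag are assumptions made for the purpose of the example – the proposal does not prescribe a technical standard, and no harmonised signal format has yet been adopted.

```python
# Minimal sketch, assuming a hypothetical machine-readable consent header
# ("X-Consent-Signal") sent automatically by the browser or operating system.
# Neither the header name nor its values appear in the Digital Omnibus
# proposal; they are placeholders for whatever standard is eventually adopted.

from dataclasses import dataclass


@dataclass
class ConsentDecision:
    allow_tracking: bool  # may non-essential cookies be set?
    show_banner: bool     # must a consent banner still be shown?


def resolve_consent(headers: dict[str, str], is_news_media: bool) -> ConsentDecision:
    """Decide cookie behaviour from an automated, machine-readable signal."""
    signal = headers.get("X-Consent-Signal")  # hypothetical header name

    # News and media organisations are exempt from automated enforcement
    # under the proposal, so they may continue to rely on a banner.
    if is_news_media or signal is None:
        return ConsentDecision(allow_tracking=False, show_banner=True)

    # A recognised signal must be respected automatically: no banner, and
    # tracking only where the user's universal setting permits it.
    return ConsentDecision(allow_tracking=(signal == "accept-all"), show_banner=False)


# Example: a browser signalling refusal suppresses the banner entirely.
print(resolve_consent({"X-Consent-Signal": "reject-all"}, is_news_media=False))
```

Whatever standard eventually emerges may look quite different; the point is simply that consent would move from per-site banners to a universal, settings-based signal that sites must respect.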
The data protection proposals signal a welcome effort to reduce compliance burdens and introduce mechanisms – such as the unified breach portal – that could foster innovation and operational efficiency. These measures respond to long-standing calls from businesses for clearer, workable privacy rules in an AI-driven economy. However, the proposals also reveal a patchwork: the AI-related exceptions, the narrowed definitions of “personal data” and “sensitive data” and the vague exemptions from transparency obligations risk creating ambiguity rather than certainty. Privacy campaigners have been quick to criticise the proposals and their “death by a thousand cuts” approach, arguing that they are “a secret ‘fast-track’ attack on the GDPR… that jumps several elements of the process, including impact assessments and making time for feedback by legal services and relevant units in the EU institutions”. If simplification is perceived as weakening fundamental rights, the result could be legal instability, litigation and political challenge – the opposite of the predictability the market wants. The balance the Omnibus needs to strike is therefore extremely fine: codify realistic, workable rules for AI-era data use without cracking the mirror that makes EU privacy law trusted worldwide, so that simplification does not come at the expense of trust and fundamental rights.
- NIS2, the Cyber Resilience Act and cyber rules: aligning the alert systems
Where we are now
The NIS2 Directive significantly expanded the scope of EU cybersecurity law. It imposes risk-management measures and strict incident reporting on a broad range of “essential” and “important” entities across sectors such as energy, health, transport, digital infrastructure and managed services.
Parallel to NIS2, the Cyber Resilience Act (CRA) introduces horizontal cybersecurity requirements for “products with digital elements”, tied to CE-marking and lifecycle security obligations. Together, they aim to embed security-by-design into both operations and products.
The difficulty is that major incidents frequently engage multiple regimes simultaneously: NIS2, GDPR, sector-specific rules, CRA (where products are involved), and sometimes DORA or other frameworks. Reporting triggers and timelines are similar but not identical; authorities differ; the approaches of Member States vary (especially for NIS2); and terminology is not fully aligned. The result is duplicated notifications, internal confusion and, in cross-border contexts, inconsistent supervisory expectations. What’s more, language in NIS2 around reporting incidents “capable of” causing severe operational disruption, financial loss or damage to legal/natural persons has added further uncertainty as to when the trigger for reporting an incident is reached.
What the Digital Omnibus proposes to do
The Omnibus proposals support the Commission’s wider simplification agenda for cybersecurity, including:
- Harmonising incident notification through the creation of a single-entry point for incident reporting, developed and supervised by the European Union Agency for Cybersecurity (ENISA), with the specifications developed in cooperation with the Commission, the CSIRTs network and the competent authorities under the different Union acts. The single-entry point would build on the experience gained from the CRA’s single reporting platform and would be used (i) for notifications of severe incidents; (ii) to ensure that severe incidents are reported only once – whether under NIS2, the CRA, the GDPR or DORA; and (iii) on a voluntary basis, for notifications by different entities (see the illustrative sketch after this list).
- Announcing an initiative to publish guidelines on the AI Act’s interplay with other Union legislation, such as the CRA, with the aim of clarifying the relationship between CRA obligations and sectoral/AI rules, including when security requirements are satisfied by compliance with harmonised standards or sector frameworks.
- Considering adjustments to the scope and functioning of regulatory sandboxes and real-world testing provisions for high-risk AI systems and security-critical products, to avoid parallel but uncoordinated experimental regimes.
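Purely as an illustration of the “report once, share many” idea – the field names and routing assumptions below are hypothetical, since ENISA has not yet published a harmonised template or portal API – a single submission covering multiple regimes might look something like this:

```python
# Minimal sketch of a "report once, share many" submission. The field names
# and values are assumptions: no harmonised ENISA template or API has been
# published at the time of writing.

import json
from datetime import datetime, timezone

incident_report = {
    "reporting_entity": "ExampleCo B.V.",           # hypothetical entity
    "detected_at": datetime.now(timezone.utc).isoformat(),
    "summary": "Ransomware affecting customer-facing systems",
    "personal_data_breach": True,                   # engages the GDPR as well
    "applicable_regimes": ["NIS2", "GDPR"],         # regimes the incident touches
    "severity": "severe",
}

# The entity would file this once; the portal, not the entity, would then
# share a copy with each competent authority for the regimes listed above.
print(json.dumps(incident_report, indent=2))
```

The design goal is that the entity files one notification and the portal routes it to each competent authority, rather than the entity filing several near-identical reports on different deadlines.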
For operators of critical services, software publishers and connected device manufacturers, this is potentially a very welcome rationalisation. It will provide particular relief for organisations in scope of NIS2 that already face the burden of keeping track of deviating incident notification requirements resulting from the diverse national approaches to the implementation of the Directive into local law. A single, well-aligned alert system is easier to operate than four overlapping ones. But, again, the fine print will decide whether this becomes genuine simplification or just a more ornate control panel. It will also be interesting to see how the new single-entry point for incident reporting aligns with, and potentially replaces, national platforms – and what this might mean, practically speaking, for ensuring sufficient resource at ENISA level to meet an ever-increasing flow of incident notifications. It will also be key that ENISA itself employs robust cybersecurity measures so that the single platform does not itself become a target for cyber threat actors.
- Can the corrective lens do its job?
The underlying story here is not one of businesses versus regulation.
Most serious technology businesses operating in or from the EU want clear, high-quality rules. They recognise that:
- strong privacy and security frameworks underpin user trust;
- clear AI and data standards reduce ethical and legal ambiguity; and
- predictable regulation de-risks investment and enables cross-border scaling.
The goal the EU set itself—to define a trustworthy, rights-respecting, innovation-enabling digital order—is a good and necessary one.
But clarity is now the critical missing piece.
If the Digital Omnibus:
- removes genuine overlaps rather than cosmetically renumbering them;
- offers precise, narrow and transparent adjustments to accommodate AI-era realities;
- aligns reporting, supervision and terminology across regimes; and
- is debated through a process that maintains confidence in fundamental-rights protections,
then it can function as the Hubble corrective lens: not changing the mission but finally allowing everyone to see it in focus.
If, instead, it blurs core protections, privileges only those with the largest regulatory teams, or becomes a shorthand vehicle for controversial policy shifts without proper scrutiny, it risks leaving both regulators and businesses with a less stable picture than before – and an opportunity for international governments to seek further concessions on compliance requirements during trade negotiations where the EU’s rules conflict with their own innovation initiatives.
The Digital Omnibus proposals now face a lengthy journey as they move through the legislative process in the European Parliament and the Council of the European Union. For clients, both inside and outside the EU, the immediate actions are pragmatic ones: map where today’s rules overlap and bite hardest; stress-test governance frameworks against the directions signalled in the drafts; and be ready to engage constructively as the proposals move from leak to law. The companies that treat this as an opportunity to simplify internally while Europe simplifies externally will be best placed to thrive. This is particularly so for clients operating across jurisdictions that must consider additional, and often targeted, compliance requirements (e.g., organizations developing and deploying AI within US states as well as the EU, which must factor multiple AI regulatory frameworks into their compliance journey).
The EU’s telescopes are already in orbit. The Digital Omnibus is the planned servicing mission. Over the coming months we will discover whether Europe has chosen the right corrective optics—and with them, whether the EU can offer what the market most wants: a demanding, values-driven digital regime that is also legible, navigable and worth building for.
- Find out more
DLA Piper’s team of AI lawyers, data scientists, and policy advisers helps organizations navigate the complex workings of their AI systems and comply with current and developing regulatory requirements. The firm continuously monitors updates and developments arising in AI and their impact on industry across the world.
For more information on AI and the emerging legal and regulatory standards, please visit DLA Piper’s focus page on AI.
Gain insights and perspectives that will help shape your AI strategy through DLA Piper’s AI ChatRoom series.
For further information or if you have any questions, please contact any of the authors.
