One Compliance Program for Two Frameworks: Aligning the EU AI Act and GDPR for Efficiency

By John Tomaszewski & Yana Komsitsky on April 15, 2026

As another piece of harmonization legislation, the AI Act is unsurprisingly reminiscent of the GDPR in regulatory philosophy. Many of the same data principles (transparency, accuracy, security) are present, as is an explicit risk-based approach. Understanding precisely where the AI Act overlaps with your existing GDPR program gives you a head start in designing your AI Act compliance program. But it is equally important to recognize where the two frameworks diverge. The GDPR regulates what happens to personal data: the legal basis for collection, how it is used, how long it is kept, and who can access it. The AI Act generally regulates the AI system itself – namely, how it is designed, tested, documented, governed, and deployed. While that difference in regulatory object creates structural differences in inputs and outputs, the two frameworks nonetheless share many common elements.

This post suggests a strategy for efficiently building a unified compliance framework for both regimes.

How can you leverage the overlap between the two regimes?

  • Integrated governance committee. A single cross-functional body (legal, compliance, engineering, procurement, and business units) can own both GDPR and AI Act risk, provided its terms of reference explicitly address both regulatory objects. Some organizations expand the DPO role with additional expertise, while others have to create a separate AI compliance function. To cover both, the DPO role should have a baseline technical understanding of AI systems, risk assessment methodologies, and technical safety requirements.
  • Consolidated risk assessment process workflows. Run risk assessment triage, DPIA, and AI fundamental rights impact assessment (FRIA) exercises from a single intake workflow, separating into parallel workstreams only where the regulatory requirements genuinely diverge. This avoids duplicative stakeholder interviews and shortens overall cycle time.
  • Vendor and supply chain management. A unified vendor questionnaire covering data processing agreements and AI Act contractual provisions can be significantly more efficient than running two procurement tracks. Both regimes require appropriate safeguards in relationships with suppliers and other parties. Notably, Article 47 of the AI Act requires providers of high-risk AI systems to include a statement of GDPR compliance in their declaration of conformity where the system processes personal data.
  • Unified training curriculum. AI literacy (mandatory under the AI Act from February 2025) and GDPR awareness training share a natural home in the same learning program. For both subjects, legal and compliance staff, technical developers, and operational deployers need different depths of training.

How does regulatory philosophy support integration into one compliance program?

Risk-based architecture. Both the GDPR and the AI Act employ a risk-based approach to compliance. The GDPR calibrates obligations to the severity of risks to data subjects’ rights and freedoms. The AI Act classifies AI systems into four risk categories ranging from minimal-risk to prohibited, and tiers obligations accordingly.

Accountability and documentation. The accountability principle is the spine of both regimes. To demonstrate accountability, the AI Act requires far more elaborate documentation of development and design choices for high-risk AI systems and GPAI models than the GDPR’s DPIAs and processing records. But policy libraries, audit trails, and governance procedures built for GDPR provide a credible scaffold for AI Act documentation.

Impact assessments. GDPR Article 35 requires DPIAs for high-risk personal data processing. AI Act Article 9 requires a risk management system for high-risk AI systems. The AI Act explicitly allows fundamental rights impact assessments (FRIAs) to build on existing GDPR DPIAs to avoid duplication, but they are not the same document. A well-structured DPIA will nonetheless capture some of the same risk categories as an AI Act risk management assessment.

Transparency and human oversight. GDPR Article 22 gives individuals the right not to be subject to solely automated decisions with significant effects, and Article 14 of the AI Act requires that high-risk AI systems are designed to enable effective human oversight. In practice, the two requirements converge on the same outcome: meaningful human control over consequential automated decisions.

Supervisory authority overlap. In many Member States, the Data Protection Authority will also be the AI Act market surveillance authority — meaning a single regulator may scrutinize both your GDPR and AI Act posture in the same inspection.

But don’t forget to mind the gaps.

Product safety logic vs. data protection logic. The AI Act represents a fundamental shift from purely outcomes-based regulation (i.e., the GDPR) to a combination of outcomes-based and “go to market” requirements. Unlike the GDPR’s blanket compliance approach, the AI Act requires a pre-market conformity assessment for high-risk AI systems. Similarly, the declaration of conformity under Article 47 has no GDPR equivalent. Lifecycle obligations also differ fundamentally: GDPR compliance is continuous and data-flow-oriented, while AI Act compliance has hard pre-deployment gates and mandatory post-market monitoring obligations that track the model itself, not the data it touches.

Scope of application. The GDPR is triggered by the processing of personal data, full stop. The AI Act applies to AI systems irrespective of whether personal data is involved. Many AI deployments (e.g. computer vision for infrastructure, predictive maintenance, materials optimization) fall entirely outside the scope of GDPR but within the scope of the AI Act. Your GDPR program simply has nothing to say about them.

Risk taxonomy mismatch. There is a fundamental difference between the two regimes in the stage at which risk is addressed as well as the types of risk in play. In general, the GDPR provides a broader range of discretion in weighing and balancing interests. The AI Act’s risk classification is categorical and pre-determined by Annex III and Article 6; a system is either high-risk or it is not. GDPR risk is a spectrum assessed contextually for each processing activity. Further, AI-based risks may not relate to a secondary use (or misuse) of data, but rather to an adverse action being taken based on incorrect data. Mapping one taxonomy onto the other produces analytical distortions.

Technical documentation obligations. Unfortunately, GDPR compliance does not give you the technical documentation required under AI Act Article 11. The requirement to document system architecture, training methodology, and performance benchmarks is unique to the AI Act. There is no GDPR analogue, and no amount of Records of Processing Activities drafting fills that gap.

Separate enforcement architecture. A data breach caused by an AI system malfunction may require dual reporting to separate authorities. The AI Act requires market surveillance authorities to consult data protection authorities when enforcement concerns both AI and personal data issues — but the enforcement tracks remain legally distinct, and penalties under each regime can in principle accumulate.

Key Takeaways

Bottom line: treat the two programs as siblings, not twins. Build a shared governance layer — committee structure, vendor management, training, and impact assessment intake — that serves both regimes simultaneously. The AI Act constitutes a complementary, yet stricter, regulatory layer for AI-driven data processing, requiring joint interpretation with the GDPR to ensure coherent application. Still, resist the temptation to collapse the two into a single compliance artefact. Maintain distinct registers: your RoPA alongside your AI system inventory, and your DPIAs alongside your AI Act Article 9 risk management files and Article 11 technical documentation packages.

Assign clear ownership at the intersection: who is responsible when a high-risk AI system processes special category data? That intersection – where GDPR’s most stringent provisions meet the AI Act’s highest-obligation tier – is where regulatory exposure is greatest and where integrated governance makes the most sense.

Finally, keep one eye on the EU’s Digital Omnibus proposals: the proposal amends the AI Act by clarifying compliance obligations, streamlining conformity assessment procedures, and updating requirements for high-risk AI systems. These changes may alter the calibration between the two frameworks. The architecture you build now should be designed to absorb that evolution.

John Tomaszewski

John Tomaszewski specializes in emerging technology and its application to business. His primary focus has been developing trust models to enable new and disruptive technologies and businesses to thrive. In the “Information Age”, management needs to have good advice and counsel on how to protect the capital asset which heretofore has been left to the IT specialists – its data.

John’s expertise in the understanding of a company’s data protection and management needs provide a specialized point of view which allows for holistic solutions. A good answer should always solve at least three problems.

John has been a co-author of several information security and privacy publications, including the PKI Assessment Guidelines and Privacy, Security and Information Management: An Overview; as well as publishing scholarly works of his own on the topic. He has also provided input to the drafting of various security and privacy laws around the world; including the APEC Cross-Border Privacy Rules system. He is a frequent speaker globally on the topics of cloud computing, Self Regulatory Organizations (“SROs”), cross-border privacy schemes, and secure e-commerce.

Yana Komsitsky
  • Posted in:
    Privacy & Data Security
  • Blog:
    The Global Privacy Watch
  • Organization:
    Seyfarth Shaw LLP
