On the Eighth Day of Data… AI Regulation – A 2025 Recap and a Look Ahead to 2026

By Catherine Keeling & Mo Gillani on December 24, 2025

In 1950, reflecting on the future of machine intelligence, Alan Turing observed: “We can only see a short distance ahead, but we can see plenty there that needs to be done.” With several large language models, most notably OpenAI’s GPT-4.5, passing the Turing Test in 2025, some governments have taken steps towards stricter regulation this year, with others still working to determine what “needs to be done” for AI regulation in the year ahead.

Notably, this year saw key provisions of the EU AI Act, the world's first comprehensive AI-dedicated law, take effect. Going into 2026, however, rather than a repeat of the "Brussels effect" in AI regulation, the global approach appears to be leaning towards that of the UK and U.S., which have led the charge for a looser regulatory environment in recent years.

The EU position

Considering the position in the EU first, on August 2, 2025, the AI Act's rules on general-purpose AI ("GPAI") became applicable. The rules require providers of GPAI models, meaning models capable of a wide range of applications, including direct use and downstream integration (e.g., GPT‑5), to comply with transparency and copyright obligations when placing their models on the EU market.

Looking Ahead to 2026

Exactly a year later, on August 2, 2026, the bulk of the remaining AI Act provisions are scheduled to enter into force, primarily targeting high-risk AI systems used in areas such as critical infrastructure, employment, and credit scoring. Key provisions include risk management obligations throughout the AI system lifecycle and a requirement for human oversight of high-risk AI systems, in order to prevent or minimise risks to safety or fundamental rights.

However, as countries compete to expand their AI capabilities, the year ahead looks set to become a race for competitive edge through business-friendly regulation.

EU Digital Omnibus Proposal

The EU's recently published Digital Omnibus Proposal ("Omnibus") signalled a potential change in strategy for the EU, setting out intentions to streamline and simplify a raft of existing digital and data legislation. While the Omnibus would not make substantive changes to the AI Act, it could provide a prolonged transition period for businesses by delaying the applicability of key provisions. The AI Act's compliance deadlines for high-risk AI systems are proposed to be deferred, with compliance tied to the release of harmonised standards or guidance. Similarly, providers of AI systems placed on the EU market before August 2, 2026 would benefit from an extended period in which to implement the AI Act's transparency obligations.

Because the Omnibus is still only a proposal, its provisions do not represent definitive change at this stage and remain subject to debate. It remains to be seen whether a final Omnibus will be agreed before the key August 2026 date for the AI Act. As a result, businesses could face additional compliance hurdles amid a fragmented and uncertain regulatory landscape in the upcoming year.

The UK position

Meanwhile, the UK government’s proposal to lay a regulatory foundation for AI has been postponed—again. The planned “AI Bill”, initially expected in the first months of the incumbent Labour government, has now been delayed until May 2026 at the earliest. Details remain limited, but the Bill is expected to operate as a framework and represents a less stringent and comprehensive approach when compared with the EU AI Act.

As regulatory regimes converge globally, the UK's repeated delays signal a desire to follow the U.S.'s low-regulation, pro-innovation approach. By keeping AI regulation off a legislative footing, the UK aims to give technology companies the freedom to innovate and develop in the market, without the threat of stringent regulation looming in 2026.

The U.S. position

In the U.S., a similar approach is being taken, as the White House demonstrates intent to slim down its digital legislative framework. The White House’s AI Action Plan and new Executive Order aim to ease federal oversight and establish a “minimally burdensome” national framework on AI policy. This approach follows big tech urging the White House to reassert federal control over AI regulation, warning that state-level rules risk slowing innovation. However, with the scope of these plans still unclear and legal challenges unresolved, state laws are likely to endure, leaving U.S. companies to juggle federal guidance alongside a varied set of state requirements.

The beginning of 2025 brought with it a horizon of stringent AI regulation across the globe, with the EU leading the charge. As the year comes to an end, this outlook has shifted, with global leaders attempting to simplify and minimise AI regulation. Looking to the year ahead, proposals such as the Omnibus leave plenty of ambiguity in this area. In 2026, companies will need to stay agile, tracking evolving rules and preparing for a regulatory landscape that remains unpredictable.

