On 3 February 2026, the second International AI Safety Report (the “Report”) was published—providing a comprehensive, science-based assessment of the capabilities and risks of general-purpose AI (“GPAI”). The Report touts itself as the largest global collaboration on AI safety to date—led by Turing Award winner Yoshua Bengio, backed by an Expert Advisory Panel with nominees from more than 30 countries and international organizations, and authored by over 100 AI experts.

The Report does not make specific policy recommendations; instead, it synthesizes scientific evidence to provide an evidence base for decision-makers. This blog summarizes the Report’s key findings across its three central questions: (i) what can GPAI do today, and how might its capabilities change? (ii) what emerging risks does it pose? and (iii) what risk management approaches exist?

Davos, Switzerland: Axios House

Of all the strategic moves in the tech world, Meta’s appointment of Dina Powell McCormack as President and Vice Chairman is one of the most telling. From a technologist’s perspective, her session at Davos felt less like a product roadmap and more like the unveiling of a new, and arguably more

Editor’s Note: The legal technology and cybersecurity landscape in 2026 is defined by a paradox: increasing connectivity alongside deepening regulatory fragmentation. As the European Union phases in new requirements under the AI Act and the Data Act on a staggered, sector‑specific timetable, organizations face a daunting task: harmonizing global operations with localized, stringent compliance mandates.

Calling it “the industry’s first scaled agentic AI tool for fact investigation and e-discovery,” DISCO today announced an agentic AI enhancement to its Cecilia Q&A tool, which the company says is designed to handle large-scale e-discovery matters with millions of documents and terabytes of data. The Austin-based legal technology company’s new tool adds what it describes as