Managing AI-related cyber risks

By Simon Lovegrove (UK) & Charlotte Carnegie on October 6, 2025

On 6 October 2025, HM Treasury (HMT) published the G7 Cyber Expert Group statement on Artificial Intelligence and Cybersecurity.

Rather than setting guidance or regulatory expectations, the statement seeks to raise awareness of the cybersecurity dimensions of artificial intelligence (AI) and outlines key considerations for financial institutions, regulatory authorities, and other stakeholders that support security and resilience in the financial sector. The statement should be read in conjunction with the G7’s Fundamental Elements series, which guides internal and external discussions on critical cybersecurity risk management decisions and promotes conversations across jurisdictions and sectors to drive effective cyber risk management practices.

To manage AI-related cyber risks, the statement suggests that financial institutions consider the following questions:

  • Strategy, Governance, and Oversight: Are governance frameworks responsive to emerging AI risks?
  • Cybersecurity Integration: Are AI systems aligned with secure-by-design principles?
  • Data Security and Lineage: Are data sources vetted and is lineage tracked?
  • Logging and Monitoring: Are anomalies and edge-cases logged and reviewed?
  • Identity and Authentication: Are systems resilient against impersonation and AI-enabled fraud?
  • Incident Response: Are incident response plans and playbooks updated to account for AI-enhanced attacks and AI-specific incidents?
  • Resources, Skills, and Awareness: What is the path to ensure adequate expertise to evaluate and monitor AI use?

Financial sector stakeholders are also encouraged to:

  • Explore AI’s potential for enhancing cyber defence capabilities.
  • Update risk frameworks to reflect AI-specific cybersecurity vulnerabilities and mitigation strategies.
  • Engage in collaborative research and policy development with technology firms and academia.
  • Promote public-private dialogue to advance secure and trustworthy AI in the financial sector.
  • Posted in:
    Financial, International
  • Blog:
    Financial services: Regulation tomorrow
  • Organization:
    Norton Rose Fulbright
