Virtual and Digital Health Digest – February 2026

By Alexander Roussanov, Fabien Roy, Dr Beatriz San Martin, Eleri Abreo, Ana Gonzalez-Lamuno, Shama Aktar & Emma on March 2, 2026

January 2026 saw significant activity as UK and EU authorities advanced major initiatives affecting the use of AI, digital technologies, data governance, and cybersecurity in healthcare and life sciences. Notable developments include the EMA's and FDA's joint principles on the use of AI across the medicinal product lifecycle, the European Commission's call for evidence on the proposed amendments to the Medical Devices Regulation (EU) 2017/745 (MDR) and In Vitro Diagnostic Regulation (EU) 2017/746 (IVDR), proposals to strengthen the EU Cybersecurity Act, and important data protection interventions. In parallel, UK and EU regulators continued to focus on the safe deployment of digital tools in healthcare, including new Medicines and Healthcare products Regulatory Agency (MHRA) guidance on mental health technologies and ongoing work to refine AI governance. These updates, alongside developments in Intellectual Property (IP) and product liability, signal a rapidly evolving regulatory environment that will help shape digital innovation and compliance expectations throughout 2026.

Regulatory Updates

  • EMA and FDA issue joint principles on AI in the medicines lifecycle. The joint principles are intended to promote the use of AI while ensuring its safe, responsible, and reliable application throughout the medicines lifecycle, providing broad guidance covering all phases, from early research and clinical trials through to manufacturing and post-market safety monitoring. The ten guiding principles set out regulatory expectations for the development and use of AI systems. Examples include implementing robust data governance and privacy protections, and providing clear, accessible information on intended use, performance, limitations, data sources, update processes, and explainability. Although not legally binding, the principles provide helpful insight into the aligned regulatory expectations of the EMA and FDA and are expected to inform future EU-level and national regulatory guidance, advancing good practice in medicines development.
  • European Commission Publishes an Open Call for Evidence on the Revisions to the EU MDR and IVDR. Following the publication of the European Commission’s proposals to amend the MDR and the IVDR (the Proposals) in December 2025 (see our January 2026 Digest), the Commission is now seeking views, through its call for evidence, on whether the Proposals adequately address implementation challenges, including by reducing administrative burdens, enhancing predictability and alignment across EU Member States in the certification process, and ensuring that regulatory requirements are proportionate and aligned with other relevant EU legislation. The feedback received will inform discussions within the European Parliament and the Council of the European Union during the negotiations on the Proposals. The call for evidence is open until March 23, 2026. See our recent Advisory for an overview of what international companies should be preparing for and of the impact on AI-based software.
  • European Data Protection Board (EDPB) and European Data Protection Supervisor (EDPS) Issue Joint Opinion on the European Commission’s Proposal to Amend the AI Act (Digital Omnibus on AI). The joint opinion was adopted following a formal consultation by the Commission on its proposal for a Digital Omnibus on AI (See our December 2025 Digest). While welcoming the efforts to reduce administrative burdens, the opinion stresses that simplification should not undermine fundamental rights or the accountability of AI system providers. In particular, the EDPB and EDPS caution against removing registration obligations for AI systems listed in Annex III of the AI Act when providers classify them as “non-high risk,” noting that this could weaken accountability and regulatory oversight. They also support the creation of EU-level AI regulatory sandboxes to promote innovation and help small and medium-sized enterprises, provided that Data Protection Authorities are involved in supervising the data processing activities. The EDPB and EDPS have indicated that joint guidelines on the interaction between the EU General Data Protection Regulation (EU GDPR) and the AI Act are under development and expected later in 2026.
  • Publication of Impact Statement for 10 Year Health Plan for England. Building upon the announcement of the Government’s 10 Year Health Plan in July 2025 (as described in our blog), this impact statement now explains the rationale and potential effects of digital transformation of the National Health Service (NHS). It highlights the need for digitally enabled care pathways, improved data sharing, and expanded use of digital tools. This will enhance system efficiencies, patient empowerment, and financial sustainability of the NHS. For example, the adoption of AI technologies is expected to result in operational efficiencies (e.g., reduction in reporting times and triage times), improve data quality (through standardization of reporting), and improve health outcomes (e.g., earlier diagnoses). This is particularly true as patients become increasingly accustomed to using technology to self-manage their health. Many of the digital reforms will be designed and implemented locally, meaning their full impacts will evolve over time.
  • MHRA publishes new guidance on the use of mental health apps and technologies. The MHRA has published new guidance to promote the safe and effective use of digital mental health technologies and strengthen the regulatory framework governing them. The guidance outlines five key areas to consider before using the tools, including: what the technology claims to do, who the intended audience is, the available evidence supporting its use, how personal data is collected and used, and whether it is regulated as a medical device. For those regulated as devices (for example, those that claim to diagnose, treat, or manage a mental health condition), the public can check that the technology has the appropriate marking and has therefore met UK safety standards. A package of new online resources has also been made available, consisting of animations and real-world examples of safe, well-evidenced digital mental health technologies in practice. These resources have been tailored for the general public and healthcare professionals.

Privacy Updates

  • European Commission Publishes Proposal to Revise and Replace the EU Cybersecurity Regulation 2019/881. The proposal forms part of a broader EU cybersecurity package aimed at strengthening resilience, aligns with previous plans to strengthen cybersecurity in the health sector (see also our May 2025 Digest), and is linked to the Commission’s proposal to amend Directive 2022/2555 (also known as NIS2). Some measures of the proposal include (i) establishing an EU-level framework for information and communications technology (ICT) supply chain security across NIS2 sectors, including the health sector. Under that framework, the Commission could restrict or require mitigation measures for the use of ICT components from designated non-EU high-risk suppliers in certain identified key ICT assets; (ii) expanding the EU cybersecurity certification framework, including by allowing certification to cover a company’s overall cybersecurity posture; and (iii) expanding the mandate of the European Union Cybersecurity Agency, in areas such as risk assessments, incident response, and certification. The proposal will now be reviewed by the European Parliament and the Council of the European Union.
  • Information Commissioner’s Office (ICO) publishes updated guidance on international data transfers. On January 15, 2026, the ICO issued updated and simplified guidance on international data transfers to assist businesses with compliance with the UK GDPR. The guidance includes a three-part test for businesses to identify if they are making restricted transfers: (i) confirm the UK GDPR applies to the data, (ii) determine whether the transfer is to a country outside the UK, and (iii) check whether the recipient is a separate legal entity. If all three conditions are met, organizations must comply with the UK GDPR transfer regime, which may include using adequacy decisions, appropriate safeguards, or specific derogations. The guidance includes additional information on multi-layered transfers, the roles and responsibilities for controllers and processors, and a set of FAQs. The ICO has also indicated that it intends to revisit its guidance on transfer risk assessments, and its International Data Transfer Agreement and cloud services.
  • ICO publishes Tech Futures report on agentic AI. The ICO explains that emerging agentic AI systems – AI tools that can autonomously plan and act – pose novel data protection risks beyond those seen in standard generative AI. For digital health, these risks are highly relevant because agentic systems may inadvertently process special category data (including health data), scale automated decision‑making, and create complex controller/processor chains. Furthermore, the purposes for processing personal information may be set too broadly, exceeding what is necessary to achieve the aim. The ICO stresses that autonomy in AI does not absolve organizations of their accountability for responsible deployment.

IP Updates

  • Digital Europe and industry associations publish joint policy paper on EHDS implementation and IP protection. On January 15, 2026, Digital Europe, together with the European Federation of Pharmaceutical Industries and Associations, the European Confederation of Pharmaceutical Entrepreneurs, the European Coordination Committee of the Radiological, Electromedical and Healthcare IT Industry, and MedTech Europe, published a joint policy paper setting out recommendations on the implementation of the European Health Data Space (EHDS) to protect intellectual property, trade secrets, and commercially confidential information while enabling the secondary use of health data for research, innovation, and public health. While recognizing that the EHDS offers significant opportunities for data-driven innovation and improved patient outcomes, the paper stresses that its success depends on a governance framework that balances data accessibility with the protection of the proprietary information that underpins investment and innovation. It highlights that the EHDS extends data-sharing obligations to privately held and pre-commercial datasets, including early-stage research and development, clinical trial, and device-generated data, and warns that, in the absence of implementing acts under Article 52 of the EHDS Regulation, inconsistent interpretation by Member States risks fragmentation, legal uncertainty, and reduced trust among data holders. Key recommendations for consistent EU-wide implementation include establishing specialized IP and trade secret task forces within Health Data Access Bodies to support classification of datasets and metadata by confidentiality level, and promoting structured cooperation between authorities, data holders, and rights holders. For healthcare companies, the EHDS presents both a significant opportunity and material compliance challenges; industry confidence will depend on harmonized safeguards that protect sensitive information while enabling responsible secondary use of health data.
  • AI-Related Patent Filings Quadruple in a Decade. On January 28, 2026, the UK government’s AI Skills for Life and Work: Patent Analysis reported that AI-related patents grew sharply from 5.2% in 2014 to 20.3% in 2023, reinforcing the rapid pace of AI innovation and adoption. Notably, the dominant technologies remain algorithms, artificial intelligence, neural networks, and machine learning, while technologies such as deep learning and generative adversarial networks are growing quickly in prominence. The analysis also shows that patents now draw on a wider range of AI technologies, increasing from an average of around two AI-related concepts per patent in 2014 to more than three and a half by 2023. These technologies cluster into distinct “knowledge packages,” some focused on developing AI itself and others on applying AI in areas such as healthcare, chemistry, and medical technology. This highlights the rising need to combine AI expertise with sector-specific knowledge. For healthcare companies, these findings suggest that AI will play an increasingly significant role across research, development, and clinical workflows. Life sciences corporations will likely need to prioritize cross-disciplinary talent, data capabilities, and robust IP strategies reflecting the shift from AI being optional to becoming a central driver of healthcare innovation.
  • UK Court of Appeal Reinstates Abbott’s Patent, Emphasizing Importance of Consistent Claim Construction. In the July 2024 Digest, we reported on the UK High Court’s decision that an Abbott patent relating to continuous glucose monitoring technology was invalid for obviousness following a challenge by Dexcom, as part of a broader global dispute between the parties. On December 18, 2025, the UK Court of Appeal overturned that decision and reinstated Abbott’s patent. By the time of the appeal, Abbott accepted the first instance judge’s narrow construction of claim 1, which required (among other features) that the introducer needle be coupled to the device housing and manually inserted. However, when assessing obviousness, the judge had relied on prior art systems involving automatic insertion of an integrated sensor and sensor electronics as satisfying key integers of claim 1. Abbott successfully argued on appeal that this amounted to applying a different interpretation of claim 1 for the purposes of obviousness and was flatly inconsistent with the judge’s construction of claim 1. The Court of Appeal agreed, holding that obviousness must be assessed by reference to the claim as properly construed, and not by reference to a system that falls outside that construction. As there was no evidential basis to support a finding of obviousness on the accepted narrow construction, the appeal was allowed.

Product Liability Updates

  • Draft statement on liability for AI harms from the UK Jurisdiction Taskforce. The UK Jurisdiction Taskforce (UKJT) of Lawtech UK launched a public consultation on its draft legal statement addressing liability for AI harms under English law. Whilst the lack of an AI-specific liability regime in the UK gives a perception of legal uncertainty, the statement explains that England’s common law system already provides a flexible framework for addressing the majority of potential physical or economic harm caused by AI. It emphasizes that AI itself cannot bear legal responsibility, so liability must be attributed to developers, users, and other human or corporate actors through established principles such as duty of care, foreseeability, and contractual allocation of risk. The statement also addresses whether vicarious liability applies to loss caused by AI, whether a professional can be liable for using or failing to use AI in the provision of their services, and whether liability attaches to false statements made by an AI chatbot. The UKJT has requested feedback on the draft statement before publication in final form.

  • Posted in:
    Health Care
  • Blog:
    BioSlice Blog
  • Organization:
    Arnold & Porter Kaye Scholer LLP
