On November 12, 2025, UNESCO’s General Conference adopted its Recommendation on the Ethics of Neurotechnology (“the Recommendation”), the first attempt at establishing a global framework for the ethical development and use of neurotechnology. The Recommendation aims to set out a comprehensive rights-based framework covering the entire life cycle of neurotechnology, from the design of neurotechnology products and services to their disposal.
While not legally binding, the Recommendation states that its provisions should be considered by, among others, UNESCO Member States, research organizations, and private companies involved in neurotechnology, and that they establish how best to honor fundamental human rights in the development, deployment, and disposal of this technology. It is therefore possible that, in the future, they may serve as a starting point for binding legislation, or be used as persuasive authority to support enforcement actions arising under existing legislation protecting fundamental human rights, e.g., the GDPR and other privacy laws around the world. In that regard, it is notable that the EU AI Act was inspired, at least in part, by UNESCO’s November 2021 Recommendation on the Ethics of Artificial Intelligence. There is, therefore, a real possibility that private sector companies developing neurotechnologies will be subject to rules specifically regulating such technologies in the future.
The Recommendation establishes several high-level “values” that it states should underpin work on neurotechnology. These are largely aligned with existing human rights principles, and include the need for the technology to be developed in a way that: respects and promotes human rights, freedoms and dignity; promotes human health and well-being (including by ensuring that resources are focused on neurotechnology that benefits the largest number of people); ensures respect for diversity and takes account of different cultures; is sustainable; and is based on a high standard of professional integrity.
The Recommendation then goes on to set forth more concrete ethical principles that UNESCO believes are particularly pertinent in light of the nature of neurotechnology. These principles include:
- Proportionality. Use of neurotechnology should be limited to what is appropriate and proportionate to the objectives pursued, consistent with the high-level values outlined above, suited to the relevant group of technology users, and based on scientific evidence.
- Protection of freedom of thought. Specifically, the Recommendation is clear that individuals should have the right to choose whether or not to use (or be made subject to) neurotechnology at any time, and consent to the use of the technology must be freely given and informed.
- Privacy. The Recommendation recognizes that neural data is particularly private and can be uniquely sensitive. It states that there should therefore be strict safeguards against misuse of this data.
- Transparency and accountability. All actors involved in the lifecycle of neurotechnology should be transparent about their activities, and should work to anticipate and address potential harms.
- Equity and inclusivity. Broad access to neurotechnology should be promoted, and there is a shared responsibility among all relevant actors in the neurotechnology lifecycle to ensure that the technology does not create new inequalities or reinforce existing ones. The Recommendation specifically recommends open and accessible education and engagement related to the technology, and expressly states that neurotechnology should be used to reduce global health inequalities and to improve health “particularly in resource-limited settings.”
- Protection of children and future generations. Consistent with international human rights laws protecting children, the Recommendation states that neurotechnology should be deployed in a way that promotes the holistic development of children, and should therefore be limited to use for medical, therapeutic, or other well-justified purposes.
The Recommendation also sets out a wide range of specific policy proposals that Member States are encouraged to implement to give effect to these principles and values. These focus on matters such as:
- Ensuring suitable oversight of neurotechnology use cases, including that it is not used for social control or surveillance;
- Establishing structures and mechanisms for evaluating the potential impacts of different neurotechnologies across their entire lifecycle;
- Ensuring that data protection, privacy, and cybersecurity frameworks are sufficiently broad and robust to address the potential issues arising from neurotechnology, particularly the collection and use of highly-sensitive neural data;
- Ensuring that the potentially significant energy consumption of neurotechnology (e.g., through large-scale data centres and associated computing resources) has a sustainable ecological footprint, including through data minimization and energy-efficiency measures;
- Establishing community education and engagement programmes, including to inform the public about the potential negative impacts of neurotechnology;
- Ensuring that neurotechnology researchers adhere to rigorous ethical standards for research and development;
- Ensuring that vulnerable categories of individuals, including children, older individuals, those with physical disabilities, and those with mental health conditions, are protected from specific types of harms that might be most relevant to them (e.g., prohibiting certain marketing techniques using neural data in relation to children, and incentivizing the development of neurotechnology that enhances the quality of life of those with disabilities); and
- Differentiating between therapeutic and enhancement uses of neurotechnology.
The Recommendation also addresses the use of the technology in certain sectors. Health is the main focus: here, the Recommendation emphasizes the need to promote reliable, safe, and durable neurotechnology, to establish robust oversight mechanisms for the physical and mental health impacts of neurotechnology, and to strengthen existing reporting systems to track adverse effects. The Recommendation also notes potential risks that may arise in the education and employment contexts, recommending, for example, that neurotechnology in education should not focus solely on improving academic achievement, and that its use in the workplace should respect workers’ rights.
In the commercial and consumer context, the Recommendation emphasizes that existing consumer protection principles should apply equally to neurotechnology. It also raises potential harms that could arise, including the potential for commercial manipulation, and recommends that Member States establish rules that, among other things, expressly prohibit the use of neural data in recommender systems for manipulative purposes, restrict the use of neural data for “nudging,” prohibit the use of neurotechnology for marketing during sleep, and impose specific rules on neuromarketing.
Finally, the Recommendation states that Member States should prohibit any direct or indirect pressure on individuals to use neurotechnological enhancement, prohibit enhancement uses that undermine human dignity, identity or equal opportunity, and develop guidance on which enhancement applications are acceptable, which are off limits, and which require strict conditions and oversight.
UNESCO has stated that it will support Member States in implementing the Recommendation through tools such as a Readiness Assessment Methodology, Ethical Impact Assessment frameworks, and capacity-building programs. The extent to which Member States begin the work of implementing it remains to be seen, however.
* * *
Covington’s Data Privacy and Cybersecurity Practices monitor legislative developments affecting new and innovative technologies on an ongoing basis. If you have any questions about the Recommendation, or neurotechnology more generally, we are happy to assist.
This post was written with the assistance of Sophia Bor, a trainee in our London office.