
Following the adoption of the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (also known as the Framework Convention on Artificial Intelligence or the AI Convention), Andorra, Canada, the European Union, Georgia, Iceland, Israel, Japan, Liechtenstein, the Republic of Moldova, Montenegro, Norway, San Marino, Switzerland, Ukraine, the United Kingdom, and the United States expressed their support by signing it.
On 28 November 2024, the Council of Europe’s Committee on Artificial Intelligence (CAI) took a significant step forward by adopting the methodology for the risk and impact assessment of AI systems from the point of view of human rights, democracy and the rule of law, commonly known as the HUDERIA Methodology. This approach isn’t about laying down the law; instead, it offers practical, non-legally binding guidance to help both public and private organizations get a handle on the risks tied to AI systems. At its center, HUDERIA is about understanding how AI might touch or challenge human rights, democracy, and the rule of law.
Intro to the Main Provisions
The HUDERIA Methodology was crafted as part of the Council of Europe’s broader response to the dynamic questions and challenges that AI technologies bring to the table. Its real goal is to help governments, institutions, and companies spot, weigh, and lessen the potential risks of AI systems. Whether an organization is just starting an AI project, rolling a system out, or reviewing one already in use, HUDERIA offers a flexible framework that supports informed decision-making at every stage of the AI lifecycle.
It’s built on the idea that AI applications aren’t one-size-fits-all—they come in all shapes and sizes, with different designs, goals, and impacts on society. That is why HUDERIA is designed to stay relevant no matter what kind of AI technology is involved or what kind of environment it operates in.
The methodology itself is made up of four key, interconnected steps that guide users through a thoughtful process for risk assessment and mitigation. These steps are there to make sure any AI system being considered or used is looked at from every angle and treated responsibly.
Context-Based Risk Analysis (COBRA), the first step, is about gathering and mapping out the essential details about the AI system and the world it operates in. COBRA helps those involved understand how the AI system will interact with its surroundings, spot any risks to human rights, democracy, and the rule of law, and figure out if deploying the AI system is really the right and balanced choice for the problem at hand.
Stakeholder Engagement Process (SEP) is where HUDERIA brings people into the conversation, especially those who might be affected by the AI system, whether directly or indirectly. SEP is about making sure everyone’s voice is heard and that the concerns of impacted communities really count.
Risk and Impact Assessment (RIA) comes in once the risks have been spotted and everyone’s input gathered: this step is about evaluating just how serious the identified risks are. RIA gives a structured way to look at how likely and how severe the consequences of using the AI system might be, informing the decision whether to move forward with the system or put some extra checks in place.
Mitigation Plan (MP) builds on what was learned from the RIA: this final step is about putting together a plan to prevent or reduce the risks identified. The plan might include technical changes, policy updates, or support for people who’ve been affected. And because things change, this phase is designed to be revisited and updated regularly to stay effective over time.
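The four steps above can be illustrated with a minimal, purely hypothetical sketch in Python. The class names, rating scales, and threshold below are invented for illustration and are not part of the official HUDERIA text; they simply show how a context analysis (COBRA), stakeholder input (SEP), a likelihood-and-severity assessment (RIA), and a resulting mitigation plan (MP) might fit together.

```python
from dataclasses import dataclass, field

# Hypothetical 1-3 rating scales; HUDERIA itself does not prescribe these.
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}
SEVERITY = {"minor": 1, "moderate": 2, "severe": 3}

@dataclass
class Risk:
    description: str                # a risk mapped during the COBRA step
    likelihood: str                 # key into LIKELIHOOD
    severity: str                   # key into SEVERITY
    stakeholder_input: list = field(default_factory=list)  # SEP findings

    def score(self) -> int:
        # RIA step: combine likelihood and severity into one score
        return LIKELIHOOD[self.likelihood] * SEVERITY[self.severity]

def mitigation_plan(risks, threshold=4):
    # MP step: flag every risk whose score crosses the (assumed) threshold
    return [r.description for r in risks if r.score() >= threshold]

# Example risks an (imaginary) COBRA analysis might have surfaced:
risks = [
    Risk("biased scoring of loan applicants", "likely", "severe",
         ["community group flagged disparate impact"]),
    Risk("minor interface confusion", "possible", "minor"),
]

print(mitigation_plan(risks))
```

Running the sketch flags only the high-scoring risk for mitigation, mirroring how the real methodology reserves the Mitigation Plan for risks the RIA judges serious enough to act on.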
Main Challenges and Opportunities for Implementation
HUDERIA is not a law; it has no legal force obliging parties to follow it. Even so, it serves a purpose beyond offering simple guidance: it helps actors interpret and respond to AI risks in a sensible manner, and it leaves states, organizations, and jurisdictions free to adopt, integrate, or implement HUDERIA in whatever way suits their systems, context, and framework. There is no doubt that every country has its own legal, social, technological, and cultural setting, and this diversity makes the application of HUDERIA complex. Relying on a single model is not realistic, and incorporating HUDERIA will require considerable effort, legal work, and community insight. This will certainly be more difficult for smaller organizations and those with limited budgets. In many cases, states and institutions already have their own frameworks and processes for risk assessment; reorganizing existing methodologies to align them with HUDERIA is a challenge that carries the risk of conflict between terms and concepts.
But for all the challenges, HUDERIA brings some real advantages to the table. For one, it encourages a systematic and transparent approach to risk assessment, helping organizations show they’re acting responsibly when it comes to designing, deploying, and overseeing AI systems. Its focus on human rights, democracy, and the rule of law means that AI is less likely to undermine our values and more likely to support them.
Why HUDERIA Could Become Customary International Law
Even though HUDERIA isn’t legally binding, if enough countries start using it regularly, it could gradually shape customary international law in the world of AI governance. In international law, customary norms come from the consistent and general practice of states, followed out of a sense of legal obligation (what’s called opinio juris). As Dinah L. Shelton describes, soft law may, as a general matter, be categorized as primary and secondary. Primary soft law consists of those normative texts not adopted in treaty form that are addressed to the international community as a whole or to the entire membership of the adopting institution or organization. Such an instrument may declare new norms, often as an intended precursor to adoption of a later treaty, or it may reaffirm or further elaborate norms previously set forth in binding or non-binding texts.
If states keep turning to HUDERIA for assessing and managing AI risks, it could become the “de facto” international standard. This would help fill in the gaps left by the Framework Convention and could be especially influential in areas where formal legal instruments are still being worked out.
Additionally, as argued by Christine M. Chinkin in her article on the challenge of soft law, there is an awareness that economic activities, whether performed on the domestic or the international plane, cannot always be appropriately regulated by legislation or other forms of hard law (The International and Comparative Law Quarterly, Vol. 38, No. 4 (Oct. 1989), pp. 850–866). In this light, we can even suggest that HUDERIA could become something like a “Berkeley Protocol” for AI systems—a globally recognized benchmark for ethical and rights-based AI governance. That would mean HUDERIA isn’t just a helpful tool, but potentially a cornerstone for emerging global norms in AI.
As more countries adopt and implement HUDERIA, certain parts of the methodology are likely to become the minimum expected practices—things like doing a structured analysis of risk factors, mapping out how AI systems might affect human rights, democracy, and the rule of law, and consistently and transparently determining the severity and likelihood of those risks.
These foundational steps could well become the baseline requirements for responsible AI development, both at the national and international levels.
Conclusion
The HUDERIA Methodology is a big step forward in the Council of Europe’s efforts to promote ethical and rights-based AI development. Even though it’s not legally binding, HUDERIA gives us a comprehensive, structured, and adaptable way to spot and manage the risks of AI systems. Its focus on protecting human dignity, supporting democratic institutions, and upholding the rule of law aligns closely with the international community’s core goals for AI governance.
By encouraging widespread use and adaptation of HUDERIA, states and organizations have a real chance to help shape the common practices that will keep societal values safe in the age of AI. Its growing influence could turn it from a soft law instrument into a cornerstone of customary international law, helping make sure that AI serves humanity in a just, transparent, and accountable way.
The post Upshot of the AI Treaty: HUDERIA appeared first on Briefly: The Law Blog.