On December 1, the Washington State AI Task Force (“Task Force”) released its Interim Report with AI policy recommendations to the Governor and legislature. Established by the legislature in 2024, the Task Force is responsible for evaluating current and potential uses of AI in Washington and recommending regulatory and legislative actions to “ensure responsible AI usage.”
The Interim Report notes that the federal government has largely maintained a “hands-off approach” to the AI sector, creating a “crucial regulatory gap that leaves Washingtonians vulnerable.” Building on the findings in a 2024 preliminary report, and in the absence of “meaningful federal action,” the Interim Report identifies several recommendations for balancing the promotion of technological innovation with the protection of individual rights, privacy, and economic stability, including:
- Adoption of NIST AI Principles. The Task Force recommends that Washington formally adopt the principles for ethical and trustworthy AI in the National Institute of Standards and Technology's (NIST) 2023 AI Risk Management Framework as the "guiding policy framework" for the development, deployment, and use of AI in Washington.
- AI Developer Transparency and Disclosure Requirements. The Task Force recommends, among other things, requiring AI developers to make information publicly available regarding the provenance, quality, quantity, and diversity of datasets used to train AI models, including explanations of the sources of data and methods of data acquisition, the types and volume of data processed, and the processes used to prepare and annotate data prior to processing. The Task Force further recommends requiring disclosures about how training data is processed to mitigate errors and biases during AI model development, with appropriate protections for trade secrets and other legally protected proprietary information.
- AI Governance Requirements for Developers and Deployers. The Task Force distinguishes between “low-risk” and “high-risk” uses of AI, and describes “high-risk AI systems” as those with the potential to significantly impact people’s lives, health, safety, or fundamental rights. The report recommends mandating that developers and deployers of high-risk AI systems adopt and implement recognized AI governance frameworks, such as the NIST AI Risk Management Framework and ISO/IEC 42001, and publicly disclose their risk management practices and risk mitigations. The Task Force also calls on the legislature to “carefully evaluate” whether high-risk uses of AI should require “additional safeguards, restrictions, or outright bans.”
- AI in Education. The Task Force recommends investment in AI-related education, as well as financial support for educators and students to integrate AI tools into their curricula.
- AI and Healthcare Regulations. Among other recommendations, the Task Force calls for legislation requiring that any decision to deny, delay, or modify health services based on a determination of medical necessity be made only by qualified clinicians, while permitting the use of AI to facilitate, but not as the “sole means” for, such decisions. According to the Task Force, any AI tools used to facilitate prior authorization requests should be required to apply the same clinical criteria as licensed healthcare professionals.
- AI Workplace Guidelines. In addition to creating a “multi-stakeholder advisory group” to establish “AI workplace guiding principles,” the Task Force recommends requiring employers to disclose when AI is being “used in ways that directly affect employees,” including uses of AI for employee monitoring, discipline, termination, and promotion.
The Task Force’s Final Report is due by July 1, 2026, and is expected to contain additional recommendations related to AI companion chatbot safeguards and the climate and energy impacts of AI infrastructure. The Task Force is also considering additional recommendations regarding the use of AI in education, labor, consumer protection, and healthcare. Task Force subcommittee meetings are open to the public, with public comments accepted at least 24 hours in advance and written comments accepted at any time. If the Washington legislature enacts legislation codifying some or all of these recommendations, it would join California, Texas, and other states that have enacted new state AI laws in recent years.