Colorado is the latest state to introduce a bill focused on consumer protection issues that arise when companies develop AI tools. The bill imposes obligations on developers and deployers of AI systems. It also provides an affirmative defense for a developer or deployer of a high-risk or generative AI system involved in a potential violation if the developer or deployer: i) has implemented and maintained a program that complies with a nationally or internationally recognized risk management framework for artificial intelligence systems that the bill or the attorney general designates; and ii) takes specified measures to discover and correct violations of the bill. The obligations imposed track responsible AI policy, including adopting and documenting policies to avoid algorithmic discrimination; requiring transparency and documentation of the design, data, and testing used to build AI tools; avoiding copyright infringement; and marking, and disclosing to consumers, that synthetic content output was generated by AI tools. The bill also requires disclosure of risks, notification if the tool makes a consequential decision concerning a consumer, and other disclosures.

Among other things, the bill:

  • Provides criteria (detailed below) which, if followed by a developer of “high-risk” artificial intelligence systems, create a rebuttable presumption that the developer used reasonable care to discharge its duty to avoid algorithmic discrimination;
  • Provides criteria (detailed below) which, if followed by a deployer of a high-risk system, create a rebuttable presumption that the deployer used reasonable care to avoid algorithmic discrimination in use of the high-risk system;
  • Imposes obligations on developers of a “general purpose” artificial intelligence model to: i) create and maintain specified documentation for the general-purpose model; and ii) create, implement, maintain, and make available certain documentation and information to deployers who intend to integrate the general-purpose model into their artificial intelligence systems;
  • Imposes obligations on developers of an AI system or a general-purpose model that generates or manipulates synthetic digital content (i.e., a generative AI tool) to: i) ensure that the outputs of the tool are marked in a machine-readable format and detectable as synthetic digital content; and ii) ensure that the developer’s technical solutions are effective, interoperable, robust, and reliable;
  • Imposes obligations on a deployer of a generative AI tool to disclose to a consumer that the synthetic digital content has been artificially generated or manipulated.
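The bill does not prescribe a particular technical standard for the machine-readable marking requirement above (industry efforts such as C2PA content credentials are one direction this could take). Purely as an illustrative sketch, with field names invented for this example rather than drawn from the bill or any standard, a generative tool might attach a provenance record to each output so the content is detectable as synthetic:

```python
import hashlib
import json
from datetime import datetime, timezone

def mark_synthetic(content: bytes, tool_name: str) -> dict:
    """Build a machine-readable provenance record declaring the content
    AI-generated. Field names are illustrative only, not taken from the
    Colorado bill or any specific marking standard."""
    return {
        "synthetic": True,  # flags the content as synthetic digital content
        "generator": tool_name,  # identifies the AI tool that produced it
        "content_sha256": hashlib.sha256(content).hexdigest(),  # ties record to output
        "created": datetime.now(timezone.utc).isoformat(),
    }

# Example: mark a generated text output and serialize the record
record = mark_synthetic(b"An AI-written paragraph...", "ExampleGen 1.0")
print(json.dumps(record, indent=2))
```

In practice the record would be embedded in or cryptographically bound to the output itself (e.g., in image metadata) so that the marking survives distribution, which is what the bill's "effective, interoperable, robust, and reliable" language appears to contemplate.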

The approach taken in this bill is interesting in that it imposes obligations on both developers and deployers of AI while providing presumptive safe harbors and defenses. While some specific criteria are set forth in the bill, the state has maintained flexibility to incorporate designated nationally or internationally recognized risk management frameworks (e.g., the NIST AI Risk Management Framework).

A “deployer” means a person doing business in this state that deploys a generative artificial intelligence system or a high-risk artificial intelligence system.

A “developer” means a person doing business in this state that develops or intentionally and substantially modifies a general-purpose artificial intelligence model, a generative artificial intelligence system, or a high-risk artificial intelligence system.

The criteria for the rebuttable presumption that a developer used reasonable care include:

  • Making available to a deployer of the high-risk system a statement disclosing specified information about the high-risk system;
  • Making available to a deployer of the high-risk system information and documentation necessary to complete an impact assessment of the high-risk system;
  • Making a publicly available statement summarizing the types of high-risk systems that the developer has developed or intentionally and substantially modified and currently makes available to a deployer and how the developer manages any known or reasonably foreseeable risks of algorithmic discrimination that may arise from the development or intentional and substantial modification of each of these high-risk systems; and
  • Disclosing to the attorney general and known deployers of the high-risk system, within 90 days after discovery or receipt of a credible report from a deployer, any known or reasonably foreseeable risk of algorithmic discrimination that the high-risk system has caused or is reasonably likely to have caused.

The criteria for the rebuttable presumption that a deployer used reasonable care include:

  • Implementing a risk management policy and program for the high-risk system;
  • Completing an impact assessment of the high-risk system;
  • Notifying a consumer of specified items if the high-risk system makes a consequential decision concerning a consumer;
  • Making a publicly available statement summarizing the types of high-risk systems that the deployer currently deploys and how the deployer manages any known or reasonably foreseeable risks of algorithmic discrimination that may arise from deployment of each of these high-risk systems; and
  • Disclosing to the attorney general, within 90 days after discovery, any algorithmic discrimination that the high-risk system has caused or is reasonably likely to have caused.

The obligations on developers of a “general purpose” artificial intelligence model include: 

  • Creating and maintaining: i) A policy to comply with federal and state copyright laws; and ii) A detailed summary concerning the content used to train the general-purpose model; and
  • Creating, implementing, maintaining, and making available to deployers documentation and information that: i) enables the deployers to understand the capabilities and limitations of the general-purpose model; ii) discloses the technical requirements for the general-purpose model to be integrated into the deployers’ artificial intelligence systems; iii) discloses the design specifications of, and training processes for, the general-purpose model, including the training methodologies and techniques for the general-purpose model; iv) discloses the key design choices for the general-purpose model, including the rationale and assumptions made; v) discloses what the general-purpose model is designed to optimize for and the relevance of the different parameters, as applicable; and vi) provides a description of the data that was used for purposes of training, testing, and validation, as applicable.

With great power comes great responsibility. AI embodies great power. Developers and deployers need to employ responsible AI principles. One of the best ways to do this is to implement and enforce written AI policies that consider existing law and best practices.