Key point: Companies onboarding AI products and services need to understand the potential risks these products present and implement contractual provisions to manage them.
With the rapid emergence of artificial intelligence (AI) products and services, companies using them need to negotiate contractual provisions that adequately address the unique issues these products present. Because the area is so new, however, companies may not appreciate that the use of AI raises distinct contractual issues, and even companies that do may not know what those provisions should say. In addition, many AI-related contractual terms are complicated and confusing, often built around new terms and definitions that companies have little experience handling.
In this article, we identify key considerations for reviewing or preparing AI-related contracts. Although other considerations may apply depending on the specific use case, those discussed below should provide a useful starting point.
Due Diligence
As a starting point, companies onboarding a new AI-related vendor should conduct a risk assessment of the vendor and its product/service. The risk assessment should identify information such as the specific use case and business reason for using the product, the product/service’s inputs and outputs, whether the product will be used for a high-risk processing activity, and the vendor’s access to company data. If the vendor insists on using its own contractual terms, the analysis also should identify whether those terms are negotiable and, if not, whether the company is willing to assume the risk of whatever terms are presented. If the vendor is a start-up, will the company be left holding the bag if the vendor closes shop in the face of third-party litigation or regulatory investigations?
Ultimately, the company’s use case for the AI product/service will dictate which contractual terms are most significant. For example, if the company will use a vendor to create marketing content, intellectual property considerations will predominate. If the product will be used to review resumes, bias considerations will predominate. If the vendor will analyze the personal information of employees or customers, privacy considerations will predominate.
Definitions of Key Terms
Although the specific terms will depend on the exact use case, terms that typically require definitions include artificial intelligence (or a similar term like AI technology), generative AI, inputs, and outputs. Defining artificial intelligence is particularly important because it establishes the scope of all of the obligations that follow. The prevailing definition, from the Organisation for Economic Co-operation and Development (and used, for example, in the Colorado AI Act), defines AI broadly: in substance, any machine-based system that infers from the inputs it receives how to generate outputs, such as content, decisions, predictions, or recommendations, that can influence physical or virtual environments. Generative AI is the subset of that broad definition in which AI is used to generate content.
“Third party offerings” is another common and significant term if the vendor’s product/service will be used in combination with a different vendor’s product/service. This is a common occurrence, as many AI products/services are built on another vendor’s underlying technology, such as OpenAI’s models. The underlying vendor’s terms may alter or nullify any warranties or indemnification provisions and, therefore, require close review.
Ultimately, it is critical to understand exactly what the product/service does (and does not do) and to align the definitions accordingly.
Inputs and Outputs
In addition to defining the key terms, the contract should address obligations and rights regarding inputs (i.e., what information goes into the AI) and outputs (i.e., what information comes out of the AI).
With respect to inputs, companies need to consider what data will be provided, how the vendor will secure it, and whether privacy or business proprietary considerations come into play. For example, if the company will input customer data, the contract should address privacy considerations, and a data processing agreement may be appropriate. If the company will input business proprietary information, the contract should require the vendor to keep that information confidential and use it only for the company’s business purposes.
The contract also should address how the vendor can use and share the data, including whether it can use the data to improve or train its product. For example, Salesforce is currently running an ad campaign with Matthew McConaughey called “The Great Data Heist.” The premise is that the AI vendor market is a Wild West in which the “bad guys” only want customer data and “will do anything to get it.” Salesforce ends the commercial by stating: “Salesforce AI never steals or shares your customer data.” That Salesforce is willing to spend tens of millions of dollars on this campaign should signal that this is an important topic to address with AI vendors.
Relatedly, depending on the scope of the data shared with vendors, companies should consider adding data breach notification and defense/indemnity clauses if those issues are not already addressed in the contract or data processing agreement. It is not difficult to imagine that these AI products and services will become a new threat vector for hackers.
For outputs, the contract should address which party owns them. For example, Microsoft recently updated its consumer Services Agreement to, according to its FAQs, expand “the definition of ‘Your Content’ to include content that is generated by your use of our AI services.” In other words, Microsoft recognizes that the user, and not Microsoft, owns the output.
Legal Compliance
With the emergence of state and international laws regulating the use of AI, such as the EU AI Act and the Colorado AI Act, companies that engage in activities subject to those laws will need to add contractual obligations that address the laws’ requirements. Similarly, companies that are federal contractors need to monitor Presidential Executive Orders, agency regulations, and federal procurement guidelines to confirm that their use of the contemplated AI technologies will comply with the requirements under their federal contracts.
At a minimum, any contract with an AI developer should require the developer to comply with applicable laws. Depending on the use case, however, additional provisions may be appropriate. For example, the Colorado AI Act (effective February 1, 2026) requires deployers (entities that use an AI product/service) to “use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination.” Yet deployers may need to rely on developers to test and validate that the AI product/service does not create unlawful discrimination. In that event, the deployer should contractually require the developer to represent and warrant that the AI product/service does not create unlawful bias and link that representation to the defense and indemnity provisions.
The Colorado AI Act also requires deployers to, among other things, complete impact assessments, provide notices, and allow for appeals under certain circumstances. Companies contracting with AI developers should consider whether the developer should assist the company in complying with these obligations or even be entirely responsible for compliance. Indeed, nothing in the Colorado AI Act prohibits a deployer from contractually shifting these obligations to the developer.
Finally, companies deploying AI products in the United States should address the likelihood (if not certainty) that new laws will take effect during the term of the contract. These new laws could not only impose additional requirements (e.g., providing consumers a right to opt out) but also regulate more types of AI uses. For example, the Colorado AI Act’s provisions primarily apply to “high-risk artificial intelligence systems,” meaning AI systems that, when deployed, make, or are a substantial factor in making, a consequential decision. The definition of “consequential decision” includes activities such as financial or lending services, insurance, healthcare services, and housing (among others). Although a current use case may not result in a “consequential decision,” that does not mean another state will not enact an AI law that expands the scope.
Intellectual Property
AI vendor contracts also should address intellectual property (IP) considerations. Every contract should address IP ownership between the parties, including ownership of the AI itself, all inputs and outputs, and any training data. If a company provides inputs or prompts to the AI product/service, the company will likely want to retain its ownership rights over those inputs or prompts. Additionally, if a company’s inputs or prompts are used by the AI product/service to create any output, the company will likely want ownership rights over that output, including any work product or deliverable created from it.
Another ownership consideration is whether the AI vendor’s product or service relies on a third party’s technology. As noted, many vendors currently rely on third-party technology for their own AI models. Companies should require the vendor to represent and warrant that it has the right to use the third party’s technology under a license and that it will comply with all use restrictions in that license. Any representation and warranty should also make clear that the vendor has full power and authority to grant the company the rights under the contract.
Finally, for all AI products/services, vendors should also represent and warrant that the products/services will not misappropriate, violate, or infringe any third-party IP rights. Companies should seek indemnification for any claims resulting from such misappropriation, violation, or infringement, along with corresponding liability for any indemnification obligation.
AI and Workforce Considerations
Depending on the use case, companies also should consider how the AI product/service will be viewed by the workforce and whether internal controls are necessary. Labor groups and employee advocates have expressed concerns about the rapid spread of AI systems within businesses. AI technologies are used to recruit employees, determine performance ratings, select candidates for redundancy, allocate work, and monitor the productivity of employees working remotely.
Effective AI systems require more than strict data governance. A key factor in obtaining workforce buy-in is taking a people-centric view in the design and implementation of AI technologies, so that workers feel empowered by AI and the technology helps them do their jobs better and with greater satisfaction.
Liability
Finally, liability-shifting terms are pivotal in any AI vendor contract: AI regulations are emerging, and increased public awareness of AI’s impacts could lead to litigation over AI products and services. As noted, new AI regulations impose certain obligations on AI developers (vendors) and AI deployers (the vendors’ customers), but in the parties’ contract those and other obligations may be shared or shifted to one party. Companies therefore need to scrutinize the vendor’s warranties and disclaimers and whether (or in what circumstances) the vendor will indemnify the company if the AI product/service does not comply with the law.
The risk of private litigation also counsels careful analysis of warranty, disclaimer, and indemnity provisions. In the employment context, the Equal Employment Opportunity Commission (EEOC) has made clear that companies using AI products/services for employment decisions can still be liable under employment discrimination laws even where the product/service is fully developed or administered by a third-party vendor. On the other side of the relationship, a California court recently held that a human resources vendor using AI to screen job applicants for the vendor’s customers could be liable for the screening tool’s discriminatory impact on applicants. In reaction to that decision and the similar litigation that is sure to follow, vendors will likely aim to shift liability for the discriminatory effects of their AI products/services onto the companies they contract with.