On September 18, 2024, the Texas Office of the Attorney General (“OAG”) announced that it had reached a “first-of-its-kind settlement” with Pieces Technologies (“Pieces”), a Dallas-based artificial intelligence healthcare technology company, to resolve “allegations that the company deployed its products at several Texas hospitals after making a series of false and misleading statements about the accuracy and safety of its products.”

According to the press release, “at least four major Texas hospitals have been providing their patients’ healthcare data in real time to Pieces so that its generative AI product can ‘summarize’ patients’ condition and treatment for hospital staff.”  Pieces developed “a series of metrics to claim that its healthcare AI products were ‘highly accurate,’ including advertising and marketing the accuracy of its products and services by claiming an error rate or ‘severe hallucination rate’ of ‘<1 per 100,000.’”  The OAG claimed that its “investigation found that these metrics were likely inaccurate and may have deceived hospitals about the accuracy and safety of the company’s products” in violation of the Texas Deceptive Trade Practices Act.

Among other restrictions, the settlement requires that Pieces provide customers with “documentation that clearly and conspicuously discloses any known or reasonably knowable harmful or potentially harmful uses or misuses of its products or services,” including:

  • “the type of data and/or models used to train its products and services;”
  • an “explanation of the intended purpose and use of its products and services, as well as any training or documentation needed to facilitate proper use of its products and services;”
  • “any known, or reasonably knowable, limitations of its products or services, including risks to patients and healthcare providers;” and
  • “any known, or reasonably knowable, misuses of a product or service that can increase the risk of inaccurate outputs or increase the risk of harm to individuals.”

Pieces also agreed under the settlement that if its marketing or advertising includes statements “regarding any metrics, benchmarks, or similar measurements describing the outputs of its generative AI products,” it will include certain disclosures, such as how those measurements were calculated.

The OAG signaled that healthcare AI may become an enforcement priority. 


Yaron Dori is co-chair of the Communications & Media Practice Group. He practices primarily in the area of telecommunications, privacy and consumer protection law, with an emphasis on strategic planning, policy development, commercial transactions, investigations and enforcement, and overall regulatory compliance. Mr. Dori advises clients on, among other things, federal and state wiretap and electronic storage provisions, including the Electronic Communications Privacy Act (ECPA); regulations affecting new technologies such as online behavioral advertising; and the application of federal and state telemarketing, commercial fax, and other consumer protection laws to voice, text and video transmissions sent to wireless devices and alternative distribution platforms. Mr. Dori also has experience advising companies on state medical marketing privacy provisions, and, more broadly, on state attorney general investigations into a range of consumer protection issues.