AI Use + Data Security: A Growing Gap

By Linn Foster Freedman on January 22, 2026

A recent report published by Cyera entitled “State of AI Data Security: How to Close the Readiness Gap as AI Outpaces Enterprise Safeguards,” based on a survey of 921 IT and cybersecurity professionals, finds that although 83% of enterprises “already use AI in daily operations…only 13% report strong visibility into how it is being used.” The report concludes:

The result is a widening gap: sensitive data is leaking into AI systems beyond enterprise control, autonomous agents are acting beyond scope, and regulators are moving faster than enterprises can adapt. AI is now both a driver of productivity and one of the fastest expanding risk surfaces CISOs must defend.

The survey results show that although AI adoption in companies is rapid, most enterprises are “blind to how AI interacts with their data.” This is complicated by the fact that autonomous AI agents are difficult to secure and very few organizations have prompt or output controls, including the ability to block risky AI activity by employees.

In addition, most of the respondents acknowledged that AI tools used in the organization are “over-accessing data.” This is further complicated by the fact that a small minority of those surveyed (7%) have a “dedicated AI governance team, and just 11% feel fully prepared for regulation.”

The conclusion is: “the enterprise risk surface created by AI is expanding far faster than the governance and enforcement structures meant to contain it.”

We have previously commented on how important AI Governance Programs are in mitigating the risks associated with AI use in an organization. The Cyera report reiterates that conclusion. If you are among the vast majority of organizations that have not yet developed an AI Governance Program, it's time to make it a top priority.



Linn Freedman practices in data privacy and security law, cybersecurity, and complex litigation. She is a member of the Business Litigation Group and the Financial Services Cyber-Compliance Team, and chairs the firm's Data Privacy and Security Team. Linn focuses her practice on compliance with all state and federal privacy and security laws and regulations. She counsels a range of public and private clients from industries such as construction, education, health care, insurance, manufacturing, real estate, utilities and critical infrastructure, marine, and charitable organizations on state and federal data privacy and security investigations, as well as emergency data breach response and mitigation. Linn is an Adjunct Professor of the Practice of Cybersecurity at Brown University and an Adjunct Professor of Law at Roger Williams University School of Law. Prior to joining the firm, Linn served as assistant attorney general and deputy chief of the Civil Division of the Attorney General's Office for the State of Rhode Island. She earned her J.D. from Loyola University School of Law and her B.A., with honors, in American Studies from Newcomb College of Tulane University. She is admitted to practice law in Massachusetts and Rhode Island. Read her full rc.com bio here.

  • Posted in:
    Intellectual Property
  • Blog:
    Data Privacy + Cybersecurity Insider
  • Organization:
    Robinson & Cole LLP
