  Are you committing malpractice if you don’t use AI?

By Nicole Black on July 21, 2025

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.


It’s safe to say that litigators who fail to understand how social media platforms work in 2025 are committing malpractice. 

Had I stated this in 2010, when my book about social media for lawyers was published, most lawyers would have disagreed. After all, social media was still an emerging technology, seemingly unworthy of attention. The majority of lawyers thought it was irrelevant to the practice of law and posed more risks than benefits.

It only took a few years for perspectives to change. Social media became pervasive, and online interactions became pivotal pieces of evidence in litigation. When this happened, lawyers began to change their tune. 

2014 was the first year I was asked to present on mining social media for evidence. Over the next five years, I was asked to speak about the ethics of using social media at trial at least twelve times. Litigators had an intense interest in learning about this technology because the failure to do so could negatively impact their clients' cases.

Eventually, it was understood that failing to understand how social media worked, both for locating online evidence and for advising clients about social media use while litigation was pending, could constitute malpractice.

The same pattern occurred with cloud computing. When my book, “Cloud Computing for Lawyers,” was published in 2012, the vast majority of lawyers were reluctant to store confidential data in the cloud for security and ethical reasons. 

However, as the technology advanced and became more ubiquitous, that changed quickly. Fast-forward to 2025, and most security experts now recommend abandoning on-premises computing in favor of cloud computing, which is significantly more secure for the majority of businesses. Cloud computing, once distrusted and avoided, is now the gold standard for cybersecurity in most cases.

The passage of time changed perspectives about social media and cloud computing. Rest assured, the same will happen with artificial intelligence (AI), but much more rapidly.

The technology is advancing exponentially. Recent surveys show that the vast majority of lawyers are already aware of it, and according to the AffiniPay 2025 Legal Industry Report, 31% of legal professionals are already using generative AI in their daily workflows, less than three years after its general release. 

But is the failure to use it malpractice?

Malpractice occurs when a lawyer fails to meet the standard of care expected of a reasonably competent attorney. It is a standard that changes as technology evolves and becomes widely adopted in law firms. 

This very shift happened with social media and cloud computing, resulting in an expectation that competent attorneys now understand the how and why of these technologies. 

This same trend is occurring with AI, but it’s happening much faster. As these tools become increasingly relied upon for their accuracy and efficiency, failing to use them could rise to the level of negligence in the very near future.

In other words, the legal profession is once again at a crossroads. 

History offers a clear roadmap. Just as ignoring social media or cloud computing eventually became indefensible, failing to grasp and appropriately use AI tools may soon carry the same professional risks. 

The pace of adoption is far quicker this time, and the expectations placed on lawyers are evolving just as rapidly. Courts, clients, and regulators won't expect perfection, but they will expect awareness, informed judgment, and responsible use. Lawyers don't need to become technologists, but the ethical duty of technology competence requires them to understand how AI is impacting, and will impact, the practice of law.

As AI becomes more integrated into legal workflows, legal professionals who refuse to adapt may not only compromise their effectiveness on behalf of their clients but also risk falling below the standard of care. The question is no longer whether lawyers should use AI; it's whether they can afford not to.

Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase, CASEpeer, Docketwise, and LawPay, practice management and payment processing tools for lawyers (AffiniPay companies). She is the nationally recognized author of "Cloud Computing for Lawyers" (2012) and co-author of "Social Media for Lawyers: The Next Frontier" (2010), both published by the American Bar Association. She also co-authors "Criminal Law in New York," a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at niki.black@mycase.com.

  • Posted in:
    Technology
  • Blog:
    Sui Generis
  • Organization:
    MyCase
