The rise of artificial intelligence (AI) presents just about every enterprise with both opportunities and risks. AI also represents a challenge for companies and their boards as companies seek to incorporate AI into operations, functions, and processes. Because of AI’s potentially disruptive impact in many industries and for individual companies, many boards may find themselves under scrutiny for the way they address the risks associated with AI. All of which raises the question of the appropriate ways for boards to address and manage AI-associated risks, a topic discussed in a July 22, 2024, Harvard Law School Forum on Corporate Governance post by attorneys from the Debevoise & Plimpton law firm entitled “AI: Are Boards Paying Attention?” (here).
The law firm’s article starts with a review of a recent ISS study, in which ISS analyzed proxy statements of S&P 500 companies filed between September 2022 and September 2023. The study found that over 15% of companies disclosed board oversight of AI, but that fewer than 13% of companies had at least one director whom ISS considered to have AI expertise. The study also found that few companies have made AI the sole focus of a specially created committee; instead, most companies delegate AI oversight to an existing committee, such as the audit committee. The study further suggested that as AI becomes increasingly important, institutional investors are likely to expect companies, particularly those in industries heavily impacted by AI, to establish appropriate processes for board oversight of AI risks and opportunities.
In addition to this likely expectation from institutional investors, the SEC has signaled that it is closely watching the ways that companies are addressing AI. Beyond public statements raising concerns about AI-related disclosures (as noted, for example, here), the SEC has brought AI-related enforcement actions (most recently noted here), showing that the agency is monitoring the ways that companies are integrating AI into both their operations and their disclosures. And as I have also noted on this blog, private litigants have filed AI-related securities class action lawsuits (as noted most recently here).
With all of this scrutiny and these expectations, well-advised boards will be considering steps they can take in order to be in a position to show that they are sufficiently addressing AI-related risks and opportunities. The question of the appropriate steps for boards to take with respect to AI is one that has some urgency, as discussed in a separate October 7, 2023, post on the Harvard Law School Forum on Corporate Governance by the Sidley Austin law firm entitled “AI and the Role of the Board of Directors” (here). The authors suggest a variety of steps boards can take in order to be best positioned to address AI-related risks and opportunities. The steps boards may want to consider consist of some combination of the following.
Oversight: Boards will want to understand how AI is being used within the company and within the company’s industry. Boards will also want to understand and identify the risks associated with AI, including ethical, legal, and operational risks. In thinking about what process to use to fulfill these oversight roles, boards may want to consider whether to keep the oversight function within the full board, to delegate it to an existing committee, or to assign it to a newly formed committee dedicated to AI.
Compliance and Governance: Boards should have oversight and reporting processes designed to ensure that the company complies with relevant laws and regulations related to AI and upholds ethical standards in deploying AI, including fairness, transparency, and accountability.
Stakeholder Impact: The board will also want to consider the impact of AI on various constituencies, including, in particular, the company’s workforce, but also its customers and clients, as well as the broader social implications.
As they seek to fulfill these responsibilities, many boards will confront concerns about having the requisite expertise. Many boards will conclude that they should provide continuing education for existing directors and even add one or more directors with relevant experience. Directors may also want to consider consulting with AI experts about AI technology and its implications.
In order to substantiate their efforts, boards will also want to ensure, as the authors of the Debevoise law firm memo note, that AI-related oversight activities, including management’s compliance efforts, are well documented in board minutes, along with, among other things, periodic risk assessments and monitoring of AI systems.
There is important context for all of these comments about boards’ AI-related oversight responsibilities: in recent years there have been significant developments in the Delaware courts with respect to the oversight duties of directors. While none of the cases (at least so far) has directly addressed boards’ responsibilities for AI oversight, the cases in general underscore the importance of boards exercising their oversight responsibilities, particularly with respect to mission-critical operations and functions. As I noted above, both the SEC and private litigants have demonstrated a willingness to pursue AI-related claims, suggesting that among the AI-related concerns for corporate boards are liability-related risks. Well-advised boards will want to ensure that their actions with respect to AI can withstand this kind of scrutiny.