As artificial intelligence (AI) shapes businesses and investor expectations, securities class actions (SCAs) tied to AI are drawing heightened attention. Yet the data emerging from these cases to date tells a more grounded story: despite the noise around AI technology, related SCAs largely mirror the patterns and outcomes of traditional shareholder litigation.
This article discusses the wave of SCAs filed since 2020 that relate in some way to the use of AI and then considers potential implications for companies that purchase directors and officers (D&O) liability insurance.
As noted in many articles published by law firms and other commentators, the rise of AI has led to a wave of related SCAs.[1] Of course, companies face other AI-related risks that are beyond the scope of this article, such as the risk of regulatory investigations; lawsuits by customers or clients alleging AI-related errors arising from the provision of professional services that result in economic damages, property damage, and/or bodily injury; and lawsuits alleging AI-related privacy violations.
Specific to SCAs, one commentator observed that, in today’s economic environment, investors have often been willing to pay higher share prices for companies that are well-positioned to take advantage of AI technologies. But shareholder plaintiffs’ firms have not hesitated to file SCAs when the facts did not match public disclosures (including risk disclosures) or projections.[2]
Through H1 2025, the Stanford Law School Securities Class Action Clearinghouse (SCAC) has identified 53 SCAs related in some way to AI.[3] The lawsuits have arisen in a variety of contexts, such as ordinary public disclosures relating to operations, revenues, and forecasts; disclosures specific to IPOs, de-SPAC transactions, and other merger and acquisition transactions; and stock drops following the issuance of reports by analysts and short sellers.
Many of the lawsuits allege some form of AI-washing, i.e., a misrepresentation about the company’s AI-related capabilities or the potential revenues arising from AI, including but not limited to the following types of allegations:
But SCAs have arisen in other contexts, as well. For example:
The number of AI-related SCAs filed per year notably increased in 2024 and through the first half of 2025.
| Year | Number of lawsuits filed |
|---|---|
| 2020 | 5 |
| 2021 | 8 |
| 2022 | 6 |
| 2023 | 7 |
| 2024 | 15 |
| H1 2025 | 12 |
The majority of the SCAs have targeted companies in the technology sector (33), followed by services (10) and financial (4).
Courts in California (18) and New York (15) have been the most common venues. Only two lawsuits have been filed in Delaware.
According to the SCAC, 10 of the SCAs have been dismissed. It is premature to compare this figure to overall dismissal rates, as motions to dismiss remain pending in several cases, both AI-related and otherwise.
Seven of the lawsuits have been settled, but only one of those cases was filed after 2022.
The median settlement is $11.5 million. The average settlement is approximately $38.4 million; however, excluding the largest settlement ($189 million in a case involving issues relating to public safety[4]), the average settlement drops to $13.3 million. For context, the second-largest settlement was $39 million.
By comparison, using Cornerstone data, the average settlement for all SCAs from 2020 through 2024 is $39.6 million and the approximate median is $12 million.
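For readers who want to trace the arithmetic behind the outlier-adjusted figure, the short calculation below is purely illustrative and relies only on the aggregate figures cited above (seven settlements, an average of approximately $38.4 million, and a $189 million largest settlement); the individual settlement amounts are not assumed.

```python
# Illustrative check of the settlement averages cited above; it uses only the
# aggregate figures reported in this article, not individual settlement amounts.
num_settlements = 7
reported_average = 38.4     # approximate average settlement, in $ millions
largest_settlement = 189.0  # largest settlement, in $ millions

implied_total = reported_average * num_settlements  # ~268.8
adjusted_average = (implied_total - largest_settlement) / (num_settlements - 1)

print(f"Average excluding the largest settlement: ~${adjusted_average:.1f} million")  # ~13.3
```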
It is too early to draw conclusions from these figures, as the number of affected cases is still small. But the initial data suggests that SCAs alleging AI-related wrongdoing largely track the outcomes of SCAs more broadly. Stated differently, heightened discourse and bigger headlines surrounding the technology are not translating into larger SCA settlements; if anything, after excluding the one large settlement, average settlements have been somewhat lower.
D&O insurers track AI-related exposures closely, and their approach to the risks will no doubt evolve as actual losses become more measurable. As a result, insureds should anticipate that, during their renewal cycles, insurers may ask more questions about AI-related operations, use, disclosures, risks, controls, and board involvement in AI issues. The depth of the questioning and follow-on scrutiny may vary depending on the risk profile, including industry.
As to coverage specifically, AI-related SCAs do not appear to present novel coverage issues beyond those that may arise in any type of SCA. We would anticipate that most claims alleging AI-washing or breach of fiduciary duty will fall within the scope of most D&O policies’ coverage, subject to the policies’ terms and conditions.
Insureds should consider, however, whether common lane marker exclusions (such as exclusions for bodily injury, property damage, professional services, and privacy violations), which are intended to distinguish D&O claims from claims under other lines of coverage, include carve-outs clarifying that the exclusions do not apply to securities claims. In addition, private companies would be well-advised to consult with their broker on the possible impact of cyber exclusions on AI-related allegations, as may be the case, for example, if AI models were to cause a data breach, or if AI systems were hacked or manipulated or were to misuse personal data. Note: public company policies do not customarily include cyber exclusions.
AI-specific exclusions remain rare; however, for private companies, they do exist.[5]
Another issue to consider is whether those leading AI development, or similar officials within organizations, are “officers” under traditionally worded D&O policies. The issue arises mainly with public company policies, under which non-“officer” employees are covered only for “securities claims” or, in some cases, only on a co-defendant basis with directors and officers. It is not an issue under private company D&O policies that cover all organization employees without regard to claim type.[6]
As always, insureds should confer with their advisors and brokers on specific policy terms that may impact the availability of coverage. Policies vary widely.
While AI may be the latest catalyst for shareholder litigation, the underlying risk dynamics remain familiar for now. D&O policies should generally respond to AI-related SCAs as expected, but it is essential to pay careful attention to exclusions and other limitations, as well as to AI-specific wording, which remains rare but possible. As with any evolving risk exposure, informed oversight by the board and senior leadership could play an important role in determining whether these claims remain routine or evolve into something new and possibly more dangerous.
WTW hopes you found the general information provided here informative and helpful. The information contained herein is not intended to constitute legal or other professional advice and should not be relied upon in lieu of consultation with your own legal advisors. In the event you would like more information regarding your insurance coverage, please do not hesitate to reach out to us. In North America, WTW offers insurance products through licensed entities, including Willis Towers Watson Northeast, Inc. (in the United States) and Willis Canada Inc. (in Canada).