Article | FINEX Observer

Sarbanes Oxley and the AI governance gap: D&O insurance considerations

By John M. Orr | March 11, 2026

AI is advancing faster than SOX governance, creating oversight and control gaps that raise risks for financial reporting, directors, officers and D&O insurance exposure.
  1. Control design lag

     AI tools are sometimes deployed before controls are updated or documented to reflect how the tool works and how its outputs are reviewed.

  2. Explainability constraints

     Some AI models make it difficult for management or auditors to clearly demonstrate how a control operates or why a particular output was generated.

  3. Data quality and model drift

     AI systems depend heavily on consistent, accurate data. As real‑world conditions change over time, model drift can set in and the model's accuracy may decline.

Implications for directors and officers

AI can increase efficiency and strengthen certain elements of financial reporting, but it does not supplant the fiduciary oversight responsibilities of directors and officers.

Board members are expected to understand, at a high level, how AI tools influence financial processes. Technical expertise may not be required, but asking the right questions is. Examples include:

  • How reliable is the model?
  • How is data quality ensured?
  • What documentation supports the process?
  • What happens when the system produces unexpected or inconsistent outputs?

For officers signing SOX certifications, the use of AI may reset the bar for what constitutes a “reasonable basis” for asserting that internal controls are effective. If significant parts of the ICFR rely on AI tools that have not been validated or monitored, officers may face heightened scrutiny over the adequacy of their diligence.

As AI adoption grows, plaintiffs and regulators are likely to argue that failing to monitor these systems or disregarding early warning signs may constitute a lapse in oversight.

D&O insurance: Exposure considerations

AI‑related control failures may introduce new pathways for D&O exposure. If issues arise with the output of AI‑driven analytics or forecasts that lead to stock drops, securities class action plaintiffs might allege that management failed to adequately disclose risks associated with AI‑enabled processes.

In addition, derivative actions may target boards for alleged breaches of fiduciary duty by not establishing basic governance structures around AI or for ignoring red flags such as unreliable results, insufficient documentation, or evidence of model drift. Regulators also could scrutinize how companies oversee these types of technology‑driven risks.

As a consequence, companies should be mindful that underwriters may inquire more into a corporation’s AI governance practices, sometimes referred to as its “AI maturity,” during upcoming renewal cycles.

D&O coverage for AI‑related claims

To date, almost all AI‑related securities class actions have involved allegations of “AI‑washing,” that is, overstating a company’s use of AI or the sophistication of its AI capabilities. These claims typically fall within the scope of D&O coverage, subject to customary policy terms and conditions. Coverage challenges are most likely to arise with respect to fines and penalties and regulatory investigations into the company. We are not yet aware of securities litigation arising from the use of AI in the context of financial reporting or SOX compliance.

Insureds should, however, review whether standard exclusions, such as those for bodily injury, property damage, professional services, or privacy violations, could be triggered by AI‑related allegations, raising the stakes for securing exceptions for securities claims. In this regard, securities‑claim carve‑backs can help ensure that routine shareholder suits alleging investor losses remain covered as intended, even if the excluded exposure itself (e.g., bodily injury) is implicated.

Private companies should also consider how cyber exclusions might apply if allegations involve AI‑related data breaches, system manipulation, or misuse of personal data. Of note: public company D&O policies generally do not include cyber exclusions.

AI‑specific exclusions remain uncommon but have begun appearing in some private‑company policies.

Takeaways

While outcomes in AI‑related D&O securities claims to date have mostly been in line with traditional securities litigation (see our article, More buzz than sting: The state of AI-related securities litigation), deeper integration of AI into financial reporting may test that trend. In the meantime, it is essential to coordinate with company counsel on SOX compliance and with the company’s D&O insurance broker on coverage and policy wording considerations.

Disclaimer

WTW hopes you found the general information provided here informative and helpful. The information contained herein is not intended to constitute legal or other professional advice and should not be relied upon in lieu of consultation with your own legal advisors. In the event you would like more information regarding your insurance coverage, please do not hesitate to reach out to us. In North America, WTW offers insurance products through licensed entities, including Willis Towers Watson Northeast, Inc. (in the United States) and Willis Canada Inc. (in Canada).

Author


John M. Orr, D&O Liability Product Leader, FINEX NA