
Artificial Intelligence - Claims against law firms

By Lindsay Woolrych | December 3, 2025

How Generative Artificial Intelligence (AI) is playing a role in professional indemnity (PI) claims against law firms
Financial, Executive and Professional Risks (FINEX)

Across the legal sector, we are witnessing a marked increase in claimants using AI to bolster PI claims. AI-powered tools, many of which are free or low-cost, are enabling disgruntled clients to produce longer, more sophisticated letters of complaint and Letters of Claim under the Pre-Action Protocol for Professional Negligence. This shift has several consequences:

  1. Greater time pressure on law firms

     These AI-generated letters are often far more detailed than those drafted without technological assistance, which inevitably lengthens the time required for law firms to review and respond.

  2. Escalation from complaint to claim

     Some claimants who might once have limited their grievance to a simple complaint are now formulating allegations into a formal Letter of Claim. That triggers the need for a formal Letter of Response - driving up the number of instructions to panel firms and, in turn, increasing defence-cost spend shown on insureds’ claims summaries.

    Our own specialists at Willis can work with law firms to manage these exposures and help keep defence costs under control.

    It is worth noting, however, that not every AI-crafted argument stands up to scrutiny. We frequently see letters that look legally sophisticated at first glance but, on closer inspection, contain unsound legal reasoning, misquoted legislation or entirely fictitious case citations. Lay claimants rarely verify the legal accuracy of their AI-produced correspondence. While these weaknesses may ultimately assist in defending a claim, they nevertheless increase the workload for firms.

How law firms use AI to conduct their work and market trends in PI notifications

Feedback from our legal-sector clients suggests a cautious and measured approach to the use of AI.

  1. Many firms limit AI tools to non-fee-earning tasks, such as internal file notes or automatically generated summaries of Teams meetings.
  2. AI is also finding a place in drafting thought-leadership articles or marketing content, where the legal exposure is low.
  3. Where AI is used, firms typically insist on rigorous human review before anything is shared externally or relied upon in client work. Internal policies are being amended to require users to flag when AI has been used, so that a greater level of scrutiny can be applied to the review of the output.
  4. In terms of client transparency, some firms are amending their Terms of Business to reflect their use of AI on client matters and, in some instances, client consent may be required.

This careful stance reflects a recognition that while AI can improve efficiency, the risk of inaccurate or “hallucinated” outputs remains significant, particularly if AI is used to draft formal legal documents or conduct legal research.

So far, we are seeing very few notifications to insurers that relate directly to a law firm’s own use, or misuse, of AI, for example in drafting legal documents. For the moment, the focus is on PI notifications that relate to the use of “fake” cases and the implications for the legal profession:

  • The case of Harber v HMRC (2023) was one of the earlier decisions to highlight the risks of relying uncritically on AI outputs. A taxpayer appealed a Capital Gains Tax penalty by citing nine fake cases generated by AI. The tribunal rejected the taxpayer’s plea of reasonable excuse because the supporting legal precedents were fictitious - a reminder that AI can “hallucinate” cases and that AI-assisted legal research must be properly verified.
  • There have been further cases since Harber v HMRC in which the Courts have warned against the use of fake cases, noting that referrals can be made to the professional bodies - the Bar Standards Board and the Solicitors Regulation Authority. Professional misconduct issues were raised in the 2025 cases of Ayinde v London Borough of Haringey and MS (Bangladesh) v Secretary of State for the Home Department.

How do PI underwriters view the use of AI by law firms?

Not surprisingly, PI underwriters are watching developments closely and generally favour firms that adopt a carefully considered, risk-aware strategy.

Their concerns are clear:

  1. Heavy reliance on AI for legal research or drafting legal documents could lead to an uptick in negligence claims if inaccurate or fabricated material slips through.
  2. An increase in such claims could drive losses and increase premiums across the market.

The challenge for underwriters is that they typically use the past to predict the future, and because AI is still evolving, its risks and opportunities remain uncertain. Against this backdrop, underwriters generally approach AI in the same way as any change in working practice or new work stream: they consider which tools firms are using, how they are using them and why. It then comes down to simple risk management - what internal policies are in place, and does everyone know when they can use AI and for which clients? Supervision policies should be updated to reflect the new risk.

Key takeaways

  • AI is already changing the landscape for PI claims - primarily by empowering claimants.
  • For law firms and their insurers, early recognition and proactive management of this trend are essential.
  • Those who combine a measured adoption of AI internally with clear controls and processes will be best placed to protect both their clients and their PI programmes.

Author


Associate Director, Claims Advocate, FINEX Global
