Across the legal sector, we are witnessing a marked increase in claimants using AI to bolster PI claims. AI-powered tools, many of which are free or low-cost, are enabling disgruntled clients to produce longer, more sophisticated letters of complaint and Letters of Claim under the Pre-Action Protocol for Professional Negligence. This shift has several consequences:
1. These AI-generated letters are often far more detailed than those drafted without technological assistance, which inevitably lengthens the time required for law firms to review and respond.
2. Some claimants who might once have limited their grievance to a simple complaint are now formulating allegations into a formal Letter of Claim. That triggers the need for a formal Letter of Response, driving up the number of instructions to panel firms and, in turn, increasing the defence-cost spend shown on insureds’ claims summaries.
Our own specialists at Willis can work with law firms to manage these exposures and help keep defence costs under control.
It is worth noting, however, that not every AI-crafted argument stands up to scrutiny. We frequently see letters that look legally sophisticated at first glance but, on closer inspection, contain unsound legal reasoning, misquoted legislation or entirely fictitious case citations. Lay claimants rarely verify the legal accuracy of their AI-produced correspondence. While these weaknesses may ultimately assist in defending a claim, they nevertheless increase the workload for firms.
Feedback from our legal-sector clients suggests they are taking a cautious and measured approach to the use of AI.
This careful stance reflects a recognition that while AI can improve efficiency, the risk of inaccurate or “hallucinated” outputs remains significant, particularly if AI is used to draft formal legal documents or conduct legal research.
So far, we are seeing very few notifications to insurers that relate directly to a law firm’s own use, or misuse, of AI, for example in drafting legal documents. For the moment, the focus is on PI notifications that relate to the use of “fake” cases and the implications for the legal profession.
Not surprisingly, PI underwriters are watching developments closely and generally favour firms that adopt a carefully considered, risk-aware strategy.
Their concerns are clear: underwriters typically rely on past claims experience to predict what will happen in the future, and because AI is still evolving, its risks and opportunities remain largely unknown. Against this backdrop, underwriters generally approach AI in the same way as they would any change in working practice or new work stream. They consider which tools a firm is using, how it is using them and why. It then comes down to straightforward risk management: what internal policies are in place, and does everyone know when they can use AI and for which clients? Supervision policies should also be updated to reflect this new risk.