
The silent partner: AI’s growing role in law firms

By Genevieve Mathews, James Earwaker and Ben Di Marco | April 24, 2024

What are the risks for law firms using AI and large language model AI systems?

The dawn of generative artificial intelligence (AI) and large language model (LLM) systems in law firms is here. AI-based technologies will not just be an aid but a game-changer in the way legal services are delivered and consumed. The technology also ushers in new risks and exposures that must be managed. In the provision of professional services, law firms may face allegations of breach or negligence, and for these they may need to turn to their insurance programmes.

AI systems are transforming traditional legal practices, automating routine tasks, and are poised to create incredible efficiencies in the way firms operate. Despite the benefits, there are risks associated with the use of AI in legal practice, such as accuracy of information, hallucinations, confidentiality, bias, passing off, and accountability. WTW has observed law firms embracing the latest technology on the one hand, while on the other treading cautiously due to the potential professional risks and reputational damage associated with the use or misuse of AI.

Other “partners”

Modern AI systems are often delivered via a web of supply chain partners, ranging from the underlying LLM technology providers to middleware intermediaries, data processors and providers of connected services. Any claim arising from the use or misuse of AI may involve multiple parties, including the law firm, data provider, designer, manufacturer, programmer, developer, as well as third party suppliers in the AI provider’s supply chain. Many of these providers insist on broad hold harmless language, meaning the relevant contracts, including any indemnities, must be carefully considered.

Management of the corpus, the collection of data used to train the AI system, is a key exposure. Prior to undertaking an AI project, law firms should carefully examine how corpus data will be collected, accessed and shared, and what steps third parties will take to protect data within their care, custody or control. Firms will also want to ensure that third party providers have adequate insurance to meet any potential claim, including the breadth of such insurance and the application of exclusions such as contractual liability.

We are not aware of any claims against law firms involving the firm’s use of AI. While we cannot know how the Courts will view a law firm’s liability, it is possible that a failure to review or oversee, or an inadequate review or oversight of, the accuracy of work product or services that have relied on AI may lead to liability on the part of the law firm.

Managing AI Risk

There are many additional risks that AI introduces to law firms, especially when it is relied upon without proper oversight. Many commentators have already highlighted that submitting material to the Court that is incorrect due to AI output may result in failure to uphold a practitioner’s duties to the Court.

Globally, law societies and bar associations are also moving towards ethical frameworks which highlight the importance of AI system outputs being closely monitored and reviewed, similar to supervising a junior lawyer’s work. Agreements with AI providers should be subject to auditing and review processes, so that a firm can be satisfied that the model will perform to the degree of accuracy the firm expects.

Internal governance processes are also becoming increasingly important. These can involve establishing a supervisory committee in charge of designating which AI tools can be used in the firm, which matters those tools are appropriate for, and the user training provided to help employees use the tools effectively.

Regulating AI – EU, Australia and New Zealand

On 2 February 2024, the European Union (EU) became the first region globally to approve legislation governing AI[1]. The Act contains dedicated rules for general-purpose AI systems and these rules will come into effect in 2025, while the Act will come into force in 2026. The AI Act has extra-territorial application and will apply to any business that offers AI systems or services within the EU.

Australia

Australia, alongside the EU and 27 other countries including the US, UK, and China, has signed the Bletchley Declaration[2], signalling Australia’s commitment to work with the international community to ensure AI is developed with the right guardrails in place.

The Australian Government has issued its eight AI Ethics Principles, a voluntary principle-based framework for ethical AI practices. However, we are yet to see AI-specific legislation. In January 2024, following consultation, the Australian Government released its interim response, acknowledging concerns that existing laws do not adequately prevent AI-facilitated harms before they occur, and confirming that steps are being taken to strengthen existing laws. Other considerations include possible mandatory safeguards for those who develop or deploy AI systems and possible specific legislative vehicles.

New Zealand

New Zealand was not a signatory to the Bletchley Declaration; however, the previous government did not close the door on it and raised the possibility of signing in the future.

Rather than relying solely on legislation, the New Zealand government is prioritising adaptive regulations that govern the ethical deployment of AI tools. For example, the Privacy Commissioner recently released new guidance on the use of AI tools and how users can ensure that they comply with their obligations under the Privacy Act 2020.

The Artificial Intelligence Forum New Zealand, formed in 2017, is a not-for-profit membership-based association of 145 NZ-based companies across seven industry sectors, including the public sector, AI and technology, professional services, start-ups and small-to-medium enterprises, and research and education. This collaborative community promotes and supports the opportunities raised by AI whilst working to ensure that society can adapt to the changes AI is likely to bring.

WTW will continue to monitor the AI landscape, particularly regulatory developments, as these may influence insurers’ perception of a law firm’s PI risk.

What do insurers think of AI?

Professional Indemnity (PI) insurers recognise the benefits of AI but are currently monitoring the AI landscape, including the potential risks. The impact of AI on PI coverage is not yet known.

Insurers are increasingly focussed on this evolving risk, with targeted questions asked of law firms during insurance renewal discussions, such as on the implementation of policies and procedures governing the use of AI and on contractual arrangements with AI system providers. One PI insurer has affirmatively stated that it considers the utilisation of AI as falling within the breadth of professional services; however, we are yet to see other PI insurers adopt a similar stance.

At present, some PI policies contain computer-related exclusions, which may have been drafted to target cyber risk exposures. Depending on the breadth of such an exclusion, AI risks may inadvertently fall within the scope of its application.

As this risk evolves and if claims arise, we expect insurers to take a position on affirmative inclusion or exclusion of AI-related coverage for law firms. 

Watch this space - The road ahead for AI

There are unquestionable benefits to AI; however, it is crucial for law firms to understand and mitigate the associated risks, including taking steps to ensure that the standards expected of the legal profession are not compromised through its use. Any misuse or inappropriate use may expose a law firm to an action in negligence.

WTW is committed to supporting our clients in navigating emerging challenges and continues to provide deep insights across regulatory developments, exposure analysis, insurance needs, placement and claims advocacy support.

Speak to our experienced insurance brokers who can assist further and help you navigate this evolving risk.

Footnotes

  1. Artificial Intelligence Act 2024; see EU AI Act one step closer
  2. Australia signs the Bletchley Declaration at AI Safety Summit
Authors

Account Manager, FINEX

Manager - Wellington & Central, FINEX

Cyber and Technology Risk Specialist - ANZ, FINEX
