
The rise of artificial intelligence in the legal sector

By Dr. Joanne Cracknell | January 15, 2024

Continuing our series of articles on Artificial Intelligence (AI), our Professional Indemnity Legal Services team provides an overview of AI’s ever-growing role within the legal sector.
Financial, Executive and Professional Risks (FINEX)

The presence of Artificial Intelligence (AI) in law firms is increasing as the legal sector starts to embrace this technological advancement to enhance the provision of legal services and gain a competitive edge.

The Solicitors Regulation Authority’s (SRA) latest risk outlook report[1] explores the use of AI within the legal sector, which is rising across law firms of all sizes, with over 50% of lawyers using some form of AI[2].

The key concern around AI is how to use it safely and effectively. As with any type of risk, it is important that law firms are able to identify, understand and manage the consequences to their business. There is some uncertainty around the use of AI systems, whether it be identifying the most appropriate system to use, ensuring compliance with regulatory and legislative obligations, or broader concerns about AI’s association with criminal activity. It is this uncertainty that may be holding some firms back from adopting AI in their businesses.

AI usage in law firms

AI is arguably the most important technological development of recent years, but it is not a new phenomenon and has been used by law firms for some time. In 2018, law firms were using AI tools for e-disclosure, legal research, digital dictation, automating practice management systems and chatbot-style tools offering basic assistance to clients[3]. Research undertaken by the University of Oxford[4] into the use of AI within the legal profession was published as a white paper; the figure below shows how AI-assisted technology was being used by the profession at that time.

Figure one: use of AI-assisted technology, by organisation type within the legal sector (percentage of respondents)

*'Grand total' includes all complete responses, including respondents from ABS and legal technology solutions providers
Source: Sako, Armour, and Parnham (2020)

Today, the most common use of AI in the legal sector is automating routine risk and compliance tasks, such as client onboarding and anti-money laundering checks. It is also used for administrative tasks such as engaging with clients via chatbots, which can expedite responses to client enquiries. Other common uses are text generation for document drafting, contract analysis, and predicting the outcome of a client’s case, which can prove helpful when assessing litigation risk and conducting cost-benefit analysis.

AI opportunities

One of the benefits of adopting AI is increased accessibility of legal services, allowing law firms to reach a wider client base more affordably, effectively and efficiently. AI can help firms deal with administrative tasks more efficiently, freeing up valuable fee-earner time and enabling them to apply their technical expertise to the more complex and challenging aspects of a client’s matter.

Automating administrative tasks can also reduce costs, which benefits both the law firm and its clients. Using AI case prediction can help law firms demonstrate to clients the pros and cons of their matters, which may assist with resolving disputes more swiftly, again reducing costs for all parties concerned.

Once law firms and their clients become more comfortable and confident with AI systems, use of AI is expected to increase, as the practical and commercial benefits of adopting these tools are clear.

AI risks and challenges

The SRA’s Chief Executive, Paul Philip, recognises the opportunities AI can offer but also acknowledges the risks and challenges that it can bring[5].

The key risks facing the profession arise from concerns about:

  • Accountability: law firms are accountable to their clients for the legal services provided regardless of whether AI systems have been used in the provision of those services.
  • Accuracy: like humans, computers can make mistakes, and whilst AI can be useful for carrying out legal research, comparing contracts, and answering client queries, the output may not always be accurate. The use of AI does not remove the need for supervision and for checking the quality and accuracy of the work produced.
  • Bias: AI systems can contain biases by following incorrect patterns in the data. Such biases, which are often subtle and deeply embedded, can result in unfair or incorrect outcomes that may lead to miscarriages of justice or inadvertent discrimination, diminishing trust in AI systems and, more importantly, eroding public trust in the provision of legal services.
  • Client confidentiality: law firms handle large amounts of sensitive, confidential information. It is therefore crucial that all client information is protected both internally and externally when using AI systems. Law firms must ensure that any AI systems they use comply with the requisite data protection legislation and the SRA Codes of Conduct, and that adequate security measures are implemented.

How can law firms manage AI risks?

AI must operate effectively, accurately, and within the law. The UK Government has established a national strategy for managing AI and a framework to guide and inform the responsible development of AI, which is underpinned by the following five principles[6]:

  1. Principle 1: Safety, security, and robustness

    Select systems that meet the needs of the firm and ensure that they are tested prior to deployment. Staff must be trained on the system and be aware of what is deemed acceptable use. In addition, clear rules around the use of systems such as ChatGPT are needed, including supervision and guidance on using confidential information and keeping it safe.

  2. Principle 2: Appropriate transparency and explainability

    The design and running of chosen systems need to be documented so that law firms can explain how they are used. Clients need to be informed when AI will be used to handle their matter, how it works, and the impact it has on them.

  3. Principle 3: Fairness

    Any personal data processed using AI systems must be handled in accordance with the data protection legislation[7] including the use of personal data to train, test or deploy an AI system. Ensure client confidentiality is protected at all times, particularly when training a system. Any data output must be monitored to ensure that outcomes are not biased or inaccurate. Be aware that bias can develop as systems evolve.

  4. Principle 4: Accountability and governance

    AI does not currently have a concept of truth, so systems and the staff using them need to be supervised to ensure that they are working appropriately, and the accuracy of the information produced must be checked and verified to limit the risk of hallucinated output. All law firms are responsible for their activities; accountability cannot be delegated to the IT department or an outsourced external IT provider.

  5. Principle 5: Contestability and redress

    Ensure that there is a mechanism for clients to contest any decision made by AI involving the use of their personal data that they disagree with. This may form part of a law firm’s complaints handling procedure; make sure that procedure is updated to deal with any questions clients raise about the use of AI in their matters.


The aim of AI is to offer greater efficiency, automation and autonomy for law firms and their clients. Regardless of how we view AI, it is very much present in our everyday lives, and it is here to stay. Law firms have become more accepting of this and the use of AI systems in the provision of legal services is increasing. As long as the profession continues to understand and monitor the risks to their business, law firms and their clients should stand to benefit from the use of AI.


  1. Risk Outlook report: The use of artificial intelligence in the legal market.
  2. New research finds that AI is improving the way the legal sector operates.
  3. Six ways the legal sector is using AI right now.
  4. Parnham, R., Sako, M. and Armour, J. (2021). AI-assisted lawtech: its impact on law firms. Oxford: University of Oxford. December 2021.
  5. Report looks at pros and cons of AI in law firms.
  6. A pro-innovation approach to AI regulation.
  7. Explaining decisions made with AI.

Director - PI FINEX Legal Services
