Transforming the Legal Landscape? The Impact of LLMs


Large Language Models (LLMs) are a branch of artificial intelligence (AI) that can generate human-like text based on deep learning techniques. LLMs are trained on massive amounts of textual data, such as books, articles, and websites, and learn to recognise patterns, structures, and relationships within the data. By doing so, they develop the ability to produce text on various topics. 

The presence of LLMs in the legal sector is increasing and, with this, the ability of lawyers to understand and use LLMs in providing quality advice to their clients is becoming more important. CMS has a dedicated innovation team that specialises in the latest legal technology.

This article was partially drafted using LLMs. The author experimented with various prompts across multiple LLMs, including prompts to create a first draft and to refine a near-final draft. User involvement was required at each stage, including reviewing the content produced by the LLMs. The afterword to this article includes more details on this process.

How are LLMs impacting the review and analysis of legal documents? 

One of the areas where LLMs are having an impact is in the review and analysis of legal documents. Lawyers often have to deal with lengthy and complex contracts, agreements, and other legal documents, which can be time-consuming. LLMs can assist lawyers in this process by automatically extracting and summarising key information, identifying relevant clauses, and flagging (and even explaining) potential issues. This can help streamline the document review process and enable lawyers to focus on more complex tasks, such as legal analysis, strategy and negotiation. 

How are LLMs affecting legal research and drafting? 

LLMs can also assist with legal research; however, this is an area where they must be used with significant caution. LLMs can analyse and summarise complex legal texts, such as case reports and statutes, which can enhance research efficiency. However, LLMs often lack sufficient legal training data and may generate inaccurate or misleading text, commonly referred to as 'hallucinations'. These hallucinations may contain subtle inaccuracies, or the entire response may be false, yet they read just as convincingly as a genuine answer. For example, there have been cases of LLMs fabricating case law when lawyers have used them to draft pleadings. LLMs should therefore not be relied on to draft research or documents that require specific legal input and expertise, and lawyers should always verify the accuracy of information produced by LLMs.

What are the ethical and regulatory implications of LLMs? 

The rise of LLMs raises important ethical and regulatory questions. LLMs are trained on vast amounts of existing data, which may itself contain inherent biases. If these biases are not properly addressed and mitigated, LLMs can perpetuate existing disparities and inequalities. This underscores the importance of checking LLM output for accuracy and fairness. There are also intellectual property concerns around copyright infringement in training data and the ownership of LLM output. Courts around the world are starting to consider these issues, and governments are beginning to regulate, but it remains to be seen how this area will develop.

Key Takeaways

Overall, the use of LLMs must be carefully monitored to ensure not only accuracy but also fairness and accountability. Whilst LLMs are not replacing human lawyers, they are likely to change the lawyer's role significantly over the coming years. While LLMs can expedite certain tasks, they cannot replace the critical thinking, creativity, and intuition of lawyers. As LLMs become more widely used in the legal profession, it is therefore critical that lawyers stay informed, adapt to new technologies, and leverage the power of LLMs to enhance efficiency, while understanding and accounting for the technology's clear limitations by thoroughly reviewing its output.


Afterword

As an experiment, this article was partially drafted using LLMs. The author asked for various responses in their prompts, including the positive and negative impacts of LLMs on lawyers, and regenerated responses to view alternatives. Whilst the author requested that the word count be kept below 200 words, this instruction was often not picked up in the first response and had to be repeated in a further prompt. The author reviewed the responses, selected the most appropriate, and collated them into an article. The author then proofread the article as a whole, adapted the wording where required, and made some additions based on their own research on LLMs.

Once a near-final version of the article had been created, the author inserted it back into the same LLM and prompted it to improve the tone and grammar for a professional article. Whilst some changes to wording were made in this response, they were slight (although some suggested changes were incorporated). The author then inserted the updated article into a different LLM, which was slightly more successful at capturing the desired tone; however, edits were still made before finalising. As a last step, the author inserted the finalised article into the original LLM and asked it to suggest a title. Multiple suggestions were provided, and the author combined a few of them to create the title used in this article.

The above process demonstrates not only the usefulness of LLMs in creating content but also the need to experiment with and improve prompts, the importance of checking content produced by LLMs, and the advantages of having access to a range of LLMs.