Word of the Year: Hallucination

“Hallucinate” is the Cambridge Dictionary’s word of the year for 2023. That choice reflects not only how AI has entered popular consciousness this year, but also how one of its biggest challenges is its tendency to make things up.

It's an especially valid concern for the use of AI in the legal sector, where there can be absolutely no room for made-up facts. But as with any powerful technology, understanding the risks and applying the right safeguards and oversight can unlock the benefits while avoiding the pitfalls.

The example of a US lawyer using ChatGPT for legal research went viral earlier this year after his filing was found to contain made-up cases. The judge in the case demanded a written explanation, noting that “six of the submitted cases appear to be bogus judicial decisions with bogus internal citations.” In a first, a US federal appeals court recently proposed that lawyers certify either that they did not use AI tools in drafting their submissions, or that humans reviewed the accuracy of any AI-generated text in their briefs.

The fact is that the Large Language Models (LLMs) that power most generative AI tools have largely been trained on content that is publicly available on the internet. In the legal industry, however, much of the domain-specific knowledge, such as Westlaw, Practical Law and contracts, is not available on the public internet. So when, for example, a generalised LLM is tasked with writing a legal argument and has insufficient case law to pull from, it may start fabricating supporting evidence to continue building its case.

The consequences of this in the legal realm could be dire, and the risk should be front of mind for anyone thinking about using generative AI for legal work. The fact that some companies in this space claim their products are “free from hallucination” is troubling and may well result in complacency around what is a very real issue. Being upfront and transparent about the risks of hallucination, and then taking the necessary steps to minimise them, is crucial.

Firstly, how you use AI tools for legal work matters a great deal, and there are clearly use cases that present a lower risk from hallucinations. AI-powered document drafting and review tools, for example, can significantly reduce the time spent on routine legal work, such as contracts. AI tools can also be great for quickly reading and analysing large numbers of documents. Using AI to draft court filings or motions, or even to seek legal advice, presents a far higher risk.

The type of AI tool used matters too. A general prompt made to a general-purpose AI chatbot like ChatGPT will be far more prone to hallucinations than a specialised legal AI tool that has been trained to understand legal documents. Good prompt engineering built into the tool is critical to getting this right. A well-designed legal AI prompt will provide context and constraints to avoid randomness, as well as directives on sourcing and logic, as sketched below.
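To make that concrete, here is a minimal sketch in Python of the kind of prompt scaffolding such a tool might build in. Everything here, from the build_grounded_prompt function name to the template wording, is an illustrative assumption rather than any particular product's implementation.

```python
# Illustrative sketch only: shows how a legal AI tool might constrain a prompt
# so the model is grounded in supplied sources rather than free to improvise.
# The function name, fields and wording are hypothetical.

def build_grounded_prompt(question: str, sources: list[dict]) -> str:
    """Assemble a prompt that supplies context, constraints and sourcing rules."""
    # Context: the retrieved documents the model is allowed to rely on.
    source_block = "\n\n".join(
        f"[{i + 1}] {doc['title']}\n{doc['excerpt']}" for i, doc in enumerate(sources)
    )
    # Constraints and sourcing directives: answer only from the material above,
    # cite by bracketed number, and admit when the sources do not cover the question.
    return (
        "You are assisting with a legal research question.\n"
        "Use ONLY the numbered sources below. Cite each claim as [n].\n"
        "If the sources do not answer the question, reply: "
        "'The provided sources do not address this point.' Do not invent cases.\n\n"
        f"Sources:\n{source_block}\n\n"
        f"Question: {question}\n"
    )


if __name__ == "__main__":
    demo_sources = [
        {"title": "Example contract, clause 12 (termination)",
         "excerpt": "Either party may terminate on 30 days' written notice..."},
    ]
    print(build_grounded_prompt("What notice period applies to termination?", demo_sources))
```

The exact wording matters less than the structure: supplied context, explicit constraints, and an instruction to cite or abstain, which together narrow the model's room to fabricate.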

Ultimately, the key to mitigating the risks of hallucinations is human, not technical. These tools should not be used without the right oversight. While powerful, AI is not (yet, at any rate) a replacement for human qualities such as judgement. These tools can help to automate repetitive tasks, but their output should still be checked and not simply treated as a finished product. Lawyers should be transparent about the use of AI in their work, clearly citing AI-generated content. As usage of this technology becomes more widespread, clients will welcome the efficiency and cost savings that these tools can achieve.

Hallucinations may not capture the imagination as much as some other threats from AI, like deepfakes or Terminator-esque scenarios, but they will undoubtedly undermine the application of this technology in areas such as the legal industry. More cases like the US one mentioned earlier will not only dent confidence in this technology but also potentially restrict how it is used. That would be a shame, as the benefits of AI are clear and tangible. So we should not sweep this issue under the carpet or pretend it has been solved. Companies must be open and transparent that hallucinations are a genuine challenge, but one that can be mitigated with the right tools, used in the right ways, with the right oversight.