Why the ‘Word of the Year’ highlights the need for specialised AI

30 November 2023 | Luminance

Cambridge Dictionary recently named ‘hallucinate’ its ‘Word of the Year’ for 2023, after the term took on a brand-new meaning following several well-publicised instances of generative AI tools producing false information.

This accolade reflects the huge surge in interest in AI this year, showing how the technology has captured the public imagination and rapidly become part of everyday conversation. However, as the ‘Word of the Year’ headline demonstrates, certain AI models carry real risks. AI hallucinations can have serious repercussions, especially in spheres where accuracy is crucial.

The Fictitious Claims of ChatGPT

Let’s first take a look at why these hallucinations happen. In short, generative AI solutions like ChatGPT are generalists. They have ingested vast amounts of data from across the entire internet, meaning these systems have been exposed to all types of subject matter. What’s more, the goal of generative AI lies in the name: to generate a human-like, or rather plausible-sounding, output. So, whilst generalist AI models can be great for coming up with a catchy advertising jingle or your next travel itinerary, they can’t always be relied upon to provide the right answer. And the need for the right answer is non-negotiable in the legal world, where accuracy can determine entire cases.
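
To make that concrete, here is a deliberately minimal Python sketch of the sampling step at the heart of a generative model. Every token, case name and probability below is invented for illustration; real models choose from tens of thousands of tokens, but the core point is the same: the model picks whatever scores as most plausible, and nothing in the loop checks whether the output is true.

```python
import random

# Invented numbers for illustration only: a toy next-token distribution
# for the prompt "The claimant relied on Smith v Jones, decided in ...".
# The weights reflect how plausible each continuation looked in training
# data, not whether it is true of any real case.
next_token_probs = {
    "1998": 0.45,  # sounds plausible, may be wrong
    "2003": 0.40,  # sounds equally plausible, equally unverified
    "1742": 0.15,  # sounds implausible, so rarely chosen
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick a continuation in proportion to its plausibility score.
    Note that nothing in this function consults a source of truth."""
    r = random.random()
    cumulative = 0.0
    for token, p in probs.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # guard against floating-point rounding at the boundary

print(sample_next_token(next_token_probs))  # fluent output, not fact-checked
```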

AI ‘hallucinations’ have already proved their disastrous potential in a legal context. Earlier this year, a lawyer in New York was fined for citing fake cases generated by ChatGPT, putting not only his own reputation but also his firm and client at risk. Meanwhile, a chatbot falsely accused a US law professor of sexually harassing students, based on a non-existent newspaper article. The takeaway is clear: professionals need to be extremely discerning in how they use generalist AI in a legal environment.

Why Specialist Industries Need Specialist AI

As Wendalyn Nichols, Cambridge Dictionary’s publishing manager, explains, AI tools using Large Language Models (LLMs) “can only be as reliable as their training data”. And that’s why legally trained AIs are necessary for legal-specific tasks. In this sense, generalist AI models like ChatGPT can be compared to a well-read friend: capable of spouting believable and impressive-sounding information at a dinner party, but entirely incapable of matching up to a highly trained lawyer. This is the real difference between generalist and specialist AI.

So, how is specialised AI achieved? At Luminance, we use both generative AI and analytical AI. Our legal LLM has been exposed to over 150 million verified legal documents. This means that Luminance has been legally trained, having developed a close understanding of the legal domain and been fine-tuned for legal applications over a number of years in the market. Most importantly, Luminance is sophisticated enough to recognise when it doesn’t know the answer to a question and will hand the work back to the lawyer. This means that lawyers can benefit from the efficiency gains AI has to offer, without the fear of relying upon hallucinated information.
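
Luminance has not published how this hand-back works under the hood, so the following is only a sketch of the general pattern such behaviour implies: a confidence-thresholded abstention step. The `Answer` type, the `REVIEW_THRESHOLD` value and the `answer_or_escalate` function are all hypothetical names invented for this illustration.

```python
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    confidence: float  # the model's own estimate, between 0 and 1

# Hypothetical threshold: below this, the system abstains rather than guesses.
REVIEW_THRESHOLD = 0.85

def answer_or_escalate(answer: Answer) -> str:
    """Return the model's answer only when confidence is high enough;
    otherwise route the question to a human lawyer for review."""
    if answer.confidence >= REVIEW_THRESHOLD:
        return answer.text
    return "Escalated to lawyer: model confidence too low to rely on."

# Usage: a confident answer passes through; an uncertain one is escalated.
print(answer_or_escalate(Answer("The clause survives termination.", 0.93)))
print(answer_or_escalate(Answer("Damages are capped at £1m.", 0.41)))
```

The design choice this pattern captures is that a low-confidence answer is treated as work for a human, not as an output to be polished.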

Ultimately, the spotlight on the newfound meaning of ‘hallucination’ acts as a stern reminder of the need for caution when applying generalist AI models to scenarios where accuracy is paramount. The latest AI applications undoubtedly offer exciting possibilities, but they also highlight the need for specialised AI that can be trusted.