Generalist vs. Specialist AI: Discussing the Latest Developments in AI Live on Sky News

6 June 2023 | Jaeger Glucina

Last month, I had the pleasure of joining Ian King live on Sky News for a discussion about the latest developments in AI. With so much buzz around generative AI following chatbot launches from Microsoft, Google, and OpenAI, it was a great opportunity to inject a dose of reality into the current hype and distinguish between generalist and specialist AI programmes, particularly within the context of the legal sector.

Specialist Tools for Specialist Industries

The recent influx of chatbots on the market without doubt signals an exciting juncture in the development of generative AI capabilities. For the first time in history, AI-produced content has become so sophisticated that it is, at times, indistinguishable from human-generated output. Nonetheless, there are some key limitations to the application of this technology, particularly for an industry like legal, where accuracy is of critical importance.

Most of the chatbots we read about at the moment are trained to be generalist programmes, having ingested data from across the entire internet. In other words, they’re designed to provide an answer at all costs, without necessarily prioritising industry-specific knowledge. With this in mind, it’s easy to see why such technologies are not suitable in a legal context. A recent case in the US, in which a New York lawyer cited non-existent case law in front of a judge, illustrates the potentially disastrous repercussions of incorrect information generated by AI, not only for a lawyer’s reputation, but also for their firm, their client, and the court system.

Luminance’s latest product development, ‘Ask Lumi’, is the first chatbot to be underpinned by legal-grade AI. Unlike other GPT programmes, this AI has been exposed to over 150 million verified legal documents to develop and fine-tune the programme’s understanding of complex legal concepts. Moreover, legal-grade AI recognises when it doesn’t have enough information to answer a certain question and will flag this to the user for their review. In doing so, ‘Ask Lumi’ future-proofs organisations using the technology, enabling them to take advantage of the efficiency benefits that generative AI brings whilst eliminating the risk of so-called ‘hallucinations’ or AI-generated errors.

AI Won’t Replace Lawyers

Naturally, the conversation turned to whether these latest developments in AI will lead to widespread job losses in the legal sector. Put simply, the answer is no. Whilst AI can process and understand vast volumes of data, there remain uniquely human skills such as critical analysis, strategic thinking, and decision-making abilities which simply cannot be automated. In reality, we are not yet at a point where AI can truly replicate human thought and behaviour. Where this technology can be of use, however, is in the automation of repetitive, tedious tasks that plague our everyday workflows. Indeed, if AI can expedite 90% of routine work, humans are freed up to focus on the remaining 10%, applying the skills they trained so hard to acquire to higher-value, ‘human-required’ tasks.

In recent months, the discourse around AI has certainly been shifting as more people begin to understand how it can be deployed in their everyday workflows. It is becoming increasingly clear that anyone who doesn’t adopt this technology will eventually be left behind as we enter what some are calling a new age of innovation. Indeed, when used correctly, AI can be invaluable for clinching that all-important deal, finding the smoking gun in litigation, or identifying risks after unexpected global events, the sudden introduction of economic sanctions being one that springs to mind.

So, whilst we don’t expect to see robot lawyers anytime soon, AI can be extremely beneficial for automating manual processes, leaving legal professionals better placed to harness their uniquely human skills and provide critical analysis and quality counsel to their clients.