AI regulation

15 November 2019 | Luminance

With the unprecedented rate of development of artificial intelligence (AI), the regulatory frameworks that govern the use of this technology often struggle to keep pace with the ways in which it is deployed to deal with real-world problems. Governments around the world are increasingly developing guidelines and frameworks for maximising the opportunities while managing the risks that come with the use of AI across a range of industries.

Globally, we see a growing consensus that principles should guide the design and use of AI systems. In the UK, the Financial Conduct Authority (FCA) has taken a considered approach toward the question of regulation, emphasising that the focus should be on regulating the human and not the machine, developing guidelines instead of hard-and-fast rules, and ultimately creating public value. In Singapore, the Model AI Governance Framework released by the Personal Data Protection Commission (PDPC) set out two high-level guiding principles: decisions made by AI should be explainable, transparent and fair; and AI systems should be human-centric.

While these principles may be uncontroversial, it is widely acknowledged that there are practical challenges in translating them into practice across the AI sector and into the day-to-day workflows in which these technologies are used. When discussing explainability, for instance, there is often a trade-off between interpretability and completeness. An explanation for a decision made by an AI system that is more interpretable (i.e. simpler for the user to understand) may compromise on how complete the explanation is (i.e. how accurately the explanation describes the operation of the system).
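
One way to make this trade-off concrete is the common practice of explaining a complex model with a simpler "surrogate" model. The sketch below, which assumes generic scikit-learn components rather than any regulator's guidance or any particular product, fits a shallow decision tree to the predictions of a larger model: the tree is easy to read, but its fidelity score shows how incompletely it describes the full model's behaviour.

```python
# Minimal sketch of the interpretability/completeness trade-off (illustrative only).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# A complex, hard-to-interpret model
complex_model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
complex_preds = complex_model.predict(X)

# An interpretable surrogate: a depth-2 tree trained to mimic the complex model
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, complex_preds)

# "Completeness": how faithfully the simple explanation reproduces the model's decisions
fidelity = accuracy_score(complex_preds, surrogate.predict(X))
print(f"Surrogate fidelity to the full model: {fidelity:.2%}")
print(export_text(surrogate))  # the human-readable part of the explanation
```

The deeper the surrogate tree, the more complete the explanation becomes, and the less interpretable it is; that tension is exactly the one regulators are grappling with.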

Amidst the debate on how to reconcile the tensions inherent in the effort to regulate AI, we have to recognise that the primary concern AI governance attempts to address is the risk that arises when decision-making is ceded to the machine. This is precisely what happens with legacy software in the area of document review. Owing to the limitations of older techniques in legal technology, the software by design forces the user to relinquish control to the machine. While supervised machine learning solutions allow the lawyer to take control of the review process by applying their legal knowledge and understanding to the findings generated by the technology, supervised machine learning on its own is limiting: the lawyer is required to pre-define the scope of the review before starting it, because the technology will not surface anomalous or critical findings that the lawyer did not set out to look for in advance.

When using legacy software, the document review process typically looks like this: the machine has to be fed a large volume of training documents while in ‘training mode’ in order for it to learn to identify the clauses being reviewed, before being switched into ‘live mode’, where it is used to extract clauses from the actual document set under review. Because the software is not capable of learning from its interaction with the lawyer and applying that learning instantly, the lawyer is forced to rely on the capabilities of the software at that point in time to identify clauses within the document set, and then to review the extracted clauses in a spreadsheet. Like other sampling methods, this extraction-based approach is no longer fit for purpose: it takes the lawyer out of the context of the document and forces them to relinquish control to the machine.
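
A simplified sketch of that two-phase workflow is shown below. It assumes a generic text classifier and made-up example clauses, not any vendor's actual code: the model is trained up front on labelled clauses, then frozen and run over the live document set, with the results exported to a flat spreadsheet for review outside the documents themselves.

```python
# Illustrative sketch of the legacy "training mode" / "live mode" pattern.
import csv
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# --- "Training mode": a volume of pre-labelled clauses is required before review starts ---
training_clauses = [
    "This Agreement shall be governed by the laws of England.",
    "This contract is governed by the laws of the State of New York.",
    "Either party may terminate this Agreement on 30 days' notice.",
    "The Customer may terminate this Agreement for material breach.",
]
training_labels = ["governing_law", "governing_law", "termination", "termination"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(training_clauses, training_labels)

# --- "Live mode": the frozen model labels the real document set ---
# Note: a clause type the model was never taught (e.g. the payment term below) is
# still forced into one of the known labels, or simply missed.
live_clauses = [
    "This Agreement is governed by the laws of Singapore.",
    "Payment is due within 45 days of invoice.",
]
predicted = model.predict(live_clauses)

# Results are pushed out to a spreadsheet, taking the lawyer out of the document context
with open("extracted_clauses.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["clause_text", "predicted_type"])
    writer.writerows(zip(live_clauses, predicted))
```

Because the model is fixed at the point it enters ‘live mode’, anything the lawyer learns during the review cannot feed back into the extraction.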

In contrast, at Luminance we operate on the philosophy that the technology does not replace the lawyer but rather enhances the lawyer’s capabilities. We built our Legal Inference Transformation Engine (LITE) to overcome the limitations posed by such legacy technologies. Our revolutionary technology is built on a unique blend of supervised and unsupervised machine learning as well as pattern recognition algorithms. When using Luminance, the document review workflow looks like this: the lawyer uploads the actual document set to the platform and gets an instant overview of the entire set as the technology analyses the patterns of language and finds all the standards and deviations across the set without supervision. As the lawyer reviews the documents, they tag a concept and that understanding propagates through the entire data room instantly, allowing Luminance to identify every other example of that exact pattern of language as well as semantically similar patterns that may also be significant. The technology learns from these interactions to build its understanding of a new concept or to refine one it already understands. This continuous learning is integrated directly into the review workflow.
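
To illustrate the general idea of a reviewer's tag propagating to similar language, here is a minimal sketch. It is not the LITE engine: plain TF-IDF cosine similarity stands in for the unsupervised analysis, the clauses are invented, and the similarity threshold is an assumed parameter.

```python
# Illustrative sketch: one tag from a reviewer is suggested for semantically similar clauses.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

clauses = [
    "This Agreement may be terminated by either party on thirty days' written notice.",
    "Either party may terminate this Agreement upon 30 days' notice in writing.",
    "All disputes shall be settled by arbitration in London.",
    "The Supplier shall indemnify the Customer against all third-party claims.",
]

# Unsupervised step: represent every clause in the data room without any labels
vectors = TfidfVectorizer().fit_transform(clauses)

# Supervised step: the reviewer tags one clause as a "termination" provision...
tagged_index = 0
similarity = cosine_similarity(vectors[tagged_index], vectors).ravel()

# ...and the tag is suggested for similar clauses, which the reviewer confirms or rejects,
# keeping the final decision with the human.
for i, score in enumerate(similarity):
    if i != tagged_index and score > 0.3:   # assumed threshold
        print(f"Suggested 'termination' tag (similarity {score:.2f}): {clauses[i]}")
```

In a real system the representation and learning would be far richer, but the shape of the interaction is the point: the human supplies the judgement, and the machine applies it across the whole document set immediately.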

What this means is that the platform identifies patterns and highlights areas of potential concern but leaves it to the lawyer to validate the findings and draw conclusions. As a result, the lawyer remains in full control of the review process and never has to worry that the machine might have missed something, as can happen with clause extraction techniques.

As the debate about AI regulation rages on, we should never forget that the key value AI brings is the ability to enhance, and not replace, the decision-making capabilities of the human. Regulating the machine misses the point, because with true AI the human remains at the core of the legal process. For that reason, we must take responsibility by adopting platforms that never require us to relinquish control. This approach continues to guide the design of Luminance and ensures that our users are able to complete document review exercises with the confidence that they have identified all key risks, without ever ceding control to the machine.