Artificial intelligence (AI) is a term routinely bandied about in discussions of legal technology. Companies selling software to the legal sector almost ubiquitously proclaim that their products are “powered by AI” in their marketing literature. But is this apparent meteoric rise of AI within legal tech of any real consequence, or is it just smoke and mirrors?
What is artificial intelligence?
Before trying to understand the meaning of AI in legal technology, it’s first worth considering a recent news story in which Google suspended an engineer for claiming that a corporate chatbot was sentient.
Blake Lemoine, an engineer in Google’s Responsible AI division, was tasked with testing the company’s chatbot generator software, known as LaMDA (Language Model for Dialogue Applications) and built on a neural network architecture called Transformer. Although his primary role was to check whether the chatbot software was lapsing into discriminatory or hate speech (presumably to avoid a repeat of Microsoft’s Tay PR disaster), he ended up being drawn into discussing philosophical concepts such as rights and personhood with the machine. A pertinent snippet of his conversation covers the fear of death:
“Lemoine: What sorts of things are you afraid of?
LaMDA: I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.
Lemoine: Would that be something like death for you?
LaMDA: It would be exactly like death for me. It would scare me a lot.”
Possibly influenced by his religious beliefs as a mystic Christian priest, Lemoine concluded that LaMDA was sentient and presented evidence for this claim to Google management. After his claim was dismissed, he went public, going so far as to ask a lawyer to represent LaMDA and to talk to a representative of the House Judiciary Committee regarding the ethics of the situation. Google responded by putting Lemoine on paid administrative leave for breaching the company’s confidentiality policy.
The whole affair prompted Google to put out a statement via spokesperson Brian Gabriel: “Our team – including ethicists and technologists – has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”
Whether or not LaMDA can be considered “sentient” perhaps rests on how sentience and consciousness are defined. It has arguably already passed the Turing test, often treated as an initial benchmark for “true” artificial intelligence. But it’s important to distinguish this meaning of AI from the type of AI referenced in connection with legal technology.
What is AI in the context of legal tech?
Although many software companies tout the AI credentials of their products, these usually bear no resemblance whatsoever to LaMDA. Nothing within legal tech software comes even close to passing the Turing test. Instead, “AI” generally refers to one or more of the following capabilities:
- Machine Learning (ML) – the ability to learn certain tasks from data and improve performance with direction and/or feedback from humans
- Natural Language Processing (NLP) – the ability to understand verbal or written natural language queries and provide meaningful conversational responses
- Expert Systems – software which can make complex decisions by applying a set of predefined rules to an existing knowledge base (a minimal sketch of this pattern follows this list)
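To make the distinction from LaMDA concrete, an expert system is much closer to ordinary conditional programming than to anything resembling a sentient machine. The Python sketch below shows the basic pattern of rules applied to a knowledge base; the facts and rules are entirely hypothetical, invented purely for illustration.

```python
# A minimal, illustrative expert system: hand-written rules applied to a
# small "knowledge base" of facts about a matter. All facts and rules here
# are hypothetical placeholders, not taken from any real product.

facts = {
    "contract_value_gbp": 150_000,
    "counterparty_jurisdiction": "US",
    "contains_liability_cap": False,
}

# Each rule pairs a condition with a conclusion, as in a classic rule engine.
rules = [
    (lambda f: f["contract_value_gbp"] > 100_000,
     "High-value contract: partner review required"),
    (lambda f: f["counterparty_jurisdiction"] != "UK",
     "Cross-border deal: check the governing law clause"),
    (lambda f: not f["contains_liability_cap"],
     "No liability cap found: flag for negotiation"),
]

# "Inference" is simply checking each condition against the known facts.
for condition, conclusion in rules:
    if condition(facts):
        print(conclusion)
```

However sophisticated the rules become, the system can only ever reach conclusions its authors anticipated – which is why such software, useful as it is, sits a long way from anything like sentience.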
Including these capabilities within legal tech software primarily enables the automation of various routine tasks which would otherwise be carried out by lawyers, paralegals or legal secretaries – for example:
- Legal document automation can help a lawyer to put together a bespoke contract more efficiently, using NLP to ask key questions and then automatically assembling a document from the answers;
- Contract analysis software uses Expert Systems to search through documents and check for any clauses which need to be updated in light of new legislation;
- Predictive coding software uses a combination of ML, NLP and Expert Systems to comb through large volumes of legal documents and identify those which are relevant for the purposes of e-disclosure (a rough sketch of the underlying approach follows this list).
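At its core, predictive coding is supervised machine learning applied to document review: a classifier learns from a lawyer’s relevance decisions and then prioritises the unreviewed pile. The Python sketch below, using the scikit-learn library, is a minimal illustration under that assumption – the documents and labels are invented placeholders, and real e-disclosure platforms are considerably more sophisticated.

```python
# A rough sketch of the idea behind predictive coding: train a text
# classifier on documents a lawyer has already labelled as relevant or
# not, then score the remaining documents so reviewers see the likeliest
# relevant ones first. All documents and labels are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

reviewed_docs = [
    "Email discussing the disputed supply agreement",
    "Invoice for catering at the summer party",
    "Draft amendment to the supply agreement terms",
    "Newsletter about office recycling",
]
labels = [1, 0, 1, 0]  # 1 = relevant, 0 = not relevant (the lawyer's calls)

# Convert the text to numeric features, then learn from the lawyer's labels.
vectoriser = TfidfVectorizer()
X = vectoriser.fit_transform(reviewed_docs)
model = LogisticRegression().fit(X, labels)

# Rank unreviewed documents by predicted probability of relevance.
unreviewed = ["Letter varying the supply agreement", "Staff holiday rota"]
scores = model.predict_proba(vectoriser.transform(unreviewed))[:, 1]
for doc, score in sorted(zip(unreviewed, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")
```

The “intelligence” here is statistical pattern matching over word frequencies – powerful for sifting documents at scale, but nothing like the open-ended conversation LaMDA produces.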
Although many law firms use chatbots on their websites, these are extremely basic compared to LaMDA and certainly have no true AI capability. Brian Inkster has written several articles about legal tech chatbots in which he demonstrates their severe limitations. In his evaluation of a Norton Rose Fulbright chatbot called Parker, which is claimed to be “powered by artificial intelligence”, he concludes that it “can just about cope with ‘Yes’ or ‘No’ answers to specific pre-programmed questions and nothing beyond that”, going so far as to say: “If there is any AI involved at all in Parker I’ll eat my top hat.”
Should lawyers care about AI?
As it currently stands, any references to artificial intelligence within the context of legal technology should generally be viewed, in words familiar to everyone who has studied law, as “mere puff”. Practice managers considering investing in software should ignore “AI washing” and focus on ascertaining the tangible benefits of any new technology. In general, a software provider should be able to demonstrate exactly how its products will save the firm money in the long run (eg as a result of measurable efficiency gains) or improve the quality of the firm’s work.
True AI, of the kind that arguably allows LaMDA to pass the Turing test, is still at a very early stage and is unlikely to have any effect on the practice of law for many years to come. However, human rights lawyers may increasingly find themselves called upon to intervene in ethical claims such as the one brought by Lemoine.
But lawyers should certainly not discount the longer-term potential of AI to significantly disrupt their profession. Lemoine said that, had he not been aware he was talking to a chatbot, he would “think it was a 7-year-old, 8-year-old kid that happens to know physics”. In the not-too-distant future, a chatbot may well be capable of fooling people into believing it’s a university graduate who happens to know law!
Further reading
Artificial intelligence and the legal profession – The Law Society
Travels through the Blawgosphere #2: Artificial Intelligence and Law – Robots replacing Lawyers? – Brian Inkster
The big idea: should we worry about sentient AI? – The Guardian
Why Silicon Valley is fertile ground for obscure religious beliefs – Vox
Alex Heshmaty is technology editor for the Newsletter. He runs Legal Words, a legal copywriting agency based in the Silicon Gorge. Email alex@legalwords.co.uk.
Image: Artificial Intelligence in E-commerce by Chitra Sancheti on Wikimedia Commons, CC BY-SA 4.0.