The Potential Influence of AI in Court Proceedings

On November 30, 2022, OpenAI launched a chatbot called ChatGPT, and since its release it has taken the internet by storm. The natural language processing tool, driven by AI technology, allows an individual to hold a human-like conversation with the chatbot, and the underlying language model can also answer questions and assist with tasks such as composing emails and essays.

For many years now there has been an increase in digitalisation, with a move towards more efficient, technology-friendly ways of working in both business and personal life. However, with this shift towards a more digital workspace come risks and unprecedented influences. With the launch of AI-driven technology such as ChatGPT, and its widely documented use, the potential influence of AI in court proceedings has become a topic of live debate.

There have been a number of reports of the use of ChatGPT within legal proceedings that call into question the future role of such programs. In a civil case heard in Manchester in May, a litigant in person (an individual without legal representation from a solicitor or barrister) presented four case citations to the court in support of their argument, having asked ChatGPT to produce such sources. After close examination by the barrister on the opposing side, it was identified that one case name had been entirely fabricated, while the other three were real case names but with cited passages completely different from the actual judgements in each. It was revealed that for all four citations, the paragraphs quoted by ChatGPT, and subsequently by the litigant in person, were entirely fictitious, although they appeared legitimate at first glance. This case highlights the dangers of unrepresented individuals relying on AI-generated responses in legal proceedings, with the potential for false information to be provided to the courts. ChatGPT draws on text from across the internet to generate its responses, but it has also been shown to give different answers to the same question and, on occasion, to fabricate information in the process. Its credibility as a source is therefore undermined, and this represents an unprecedented problem for its potential use within the legal system.

Furthermore, there have been reports from Colombia and India this year of judges using ChatGPT to help make rulings on cases. For example, a judge (named Juan Manuel Padilla) put questions to the tool, alongside precedents from previous rulings, when deciding whether an autistic child's insurance should cover all the costs of his medical treatment. The legal documents show Padilla asked ChatGPT the precise legal matter at hand: "Is an autistic minor exonerated from paying fees for their therapies?", to which he received the response: "Yes, this is correct. According to the regulations in Colombia, minors diagnosed with autism are exempt from paying fees for their therapies." This generated response therefore appears to correspond with the judge's final decision, which was that the entirety of the child's medical expenses and transport costs should be paid by his medical plan. The discovery of this approach has undoubtedly led to intense scrutiny by academics and legal professionals. Padilla has defended his use of the technology, suggesting that it could make Colombia's legal system more efficient.

The incorporation of technology within the operation of the legal system raises a number of concerns. For one, the workload of solicitors and barristers, and the pricing of that work, will need to be reassessed: either the use of such tools will increase efficiency and reduce caseloads, or responses generated by them and presented in court by litigants in person will need to be re-evaluated by legal professionals, increasing the workload and time spent.

Additionally, such technology is incapable of producing the same answers as humans: it lacks emotion, and its reasoning and critical thinking are limited by its programming, so it should not act as a substitute for the knowledge, expertise and judgement of a human judge. These human factors have a huge impact on the decisions and approaches of legal professionals, and ultimately have a knock-on effect on the rulings produced.

The development of AI has caused immense moral panic in law, with people fearing robots would replace lawyers and judges. However, it has been argued that the tool will more than likely soon become accepted and commonplace, with such programs being used to "facilitate the drafting of texts" but "not with the aim of replacing" judges. Padilla drew an analogy, arguing that ChatGPT performs services previously provided by a secretary, and did so "in an organised, simple and structured manner" that could "improve response times" in the justice system. A statement by Tejeiro, with which I agree, is that "It must be seen as an instrument that serves the judge to improve his judgement. We cannot allow the tool to become more important than the person." For this reason, the incorporation of AI-driven technology in the legal industry requires continual research and the production of guidelines, with digital literacy training also being provided to assist its implementation and maintain trust within the system.

In a speech delivered last week at Lincoln's Inn, Sir Geoffrey Vos stated that "GPT-4 and other advanced machine learning is likely to transform the work that lawyers need to do and possibly even...the business of judging." This statement follows the revelation that a previous version of GPT-4 scored in the bottom 10% on the US bar exam when it first took it, while the most recent version scored in the top 10%, demonstrating the speed at which generative AI is developing in the digital sphere. Sir Geoffrey Vos reiterated this point by stating that "it perhaps makes the point that there is a real possibility that AI may become more intelligent and capable than humans."

With this in mind, he goes on to predict that "AI will be used within digital justice systems and may, at some stage, be used to take some (at first, very minor) decisions." The controls that will be required in such circumstances are "(a) for the parties to know what decisions are taken by judges and what by machines, and (b) for there always to be the option of an appeal to a human judge." The limiting factor on such a future lies in the confidence felt in the judicial system. Therefore, "it is likely that some decisions, for example intensely personal decisions relating to the welfare of children, that humans are unlikely to ever accept being decided by machines, however, in the commercial field, the controls that will be necessary on automated decision-making will rapidly become very complex indeed because of [...] the speed with which AI is being developed."

Sir Geoffrey's comments mark one of the first instances in which a senior legal figure has acknowledged the impact of disruptive AI tools and provided insight into their potential future position in the judicial system. Despite this, it is worth remembering that AI is currently only an assistant, not a replacement for important decision-making. The fact that a lawyer can look something up does not remove the need to understand the law: skills are still required to apply that knowledge usefully, from problem solving to building a rapport with a client and understanding their needs with empathy and critical thinking.