
Senior judge reveals AI could make court decisions ‘in minutes’

Artificial intelligence (AI) is being used “for all purposes” in the legal sector, one of the most senior judges in the country has revealed, as he assessed the technology’s decision-making power in court.

The Master of the Rolls, Sir Geoffrey Vos, the second most senior judge in England and Wales, told a legal conference that AI was like a “chainsaw”: useful in the right hands but “super dangerous” in the wrong ones.

Sir Geoffrey said the technology could and should be used to draft contracts and research legal questions, noting in particular how the summarisation abilities of large language models (LLMs) could save time and drudgery when used carefully.

Addressing the possibility of AI making judicial decisions, the senior judge acknowledged that the technology, in its current form, could resolve in two minutes a case that would take two years of human labour. However, he said the legal sector should avoid letting this happen in practice.

Master of the Rolls Sir Geoffrey Vos (right) pictured with then justice secretary Shabana Mahmood (centre) and Lady Chief Justice Sue Carr (left) (Getty)

This is because a judge’s decision is a last resort and often irreversible, he explained, and because AI is incapable of ever replicating human “emotion, idiosyncrasies, empathy and insight”.

Sir Geoffrey added that machine learning is “generated from the state of intelligence at a particular point in time”, meaning that over the long term its outputs can quickly fall out of step with the development of human thought.

The growing use of AI in the legal sector came under scrutiny earlier this year when the High Court told senior lawyers to take urgent action to ensure the technology is not misused.

The intervention in June came after two legal cases were found to have been tainted by the misuse of artificial intelligence, with dozens of fake case-law citations put before the courts.

One concerned an £89m damages claim against Qatar National Bank, in which the claimant cited 45 authorities, 18 of which were fictitious and many of which contained false citations. The claimant admitted using publicly available AI tools, and his lawyer admitted that he had not checked the research his client had carried out.

In the other case, which involved a challenge to a regulatory decision, a lawyer from Haringey Law Centre cited non-existent case law five times. The junior lawyer denied using AI deliberately, but said he may have done so inadvertently by relying on AI-generated summaries served up by Google or Safari.

Dame Victoria Sharp, President of the King’s Bench Division, said in her ruling that misuse of AI would have “serious consequences for the administration of justice and public confidence in the justice system”, and that lawyers caught doing so could face public admonishment, contempt of court proceedings or even referral to the police.

“Such tools can produce apparently coherent and plausible responses to prompts, but those coherent and plausible responses may turn out to be entirely incorrect,” she wrote.

Tahir Khan, a lawyer specialising in civil litigation and an expert in the use of AI in the legal sector, says most such errors occur when lawyers use publicly available AI tools such as ChatGPT rather than tools built for the legal industry, like those offered by legal information provider LexisNexis.

But “if you’re using a tool that is predominantly artificial intelligence, you still need to check it”, he says. “You can’t exonerate yourself by saying ‘it’s the tool’s fault’… the responsibility remains with you.”
