
Judge blasts lawyer for using AI after he cited ‘entirely fictitious’ cases in asylum appeal

An immigration lawyer could face disciplinary action after a judge ruled that he had apparently used AI tools such as ChatGPT to prepare his legal research.

A tribunal heard that Chowdhury Rahman had cited cases that were either “completely imaginary” or “completely irrelevant.”

A judge found Mr Rahman had tried to “hide” this when questioned and had “wasted” the court’s time.

The incident occurred while Mr Rahman was representing two Honduran sisters seeking asylum in the UK because they were being targeted by a violent criminal gang called Mara Salvatrucha (MS-13).

They claimed asylum after arriving at Heathrow airport in June 2022 and during screening interviews said the gang wanted them to be “their women”.

They also claimed that gang members had threatened to kill their families and had been searching for them since they left the country.

One of the authorities cited in support of his case had previously been cited incorrectly by ChatGPT (AP).

In November 2023, the Home Office rejected their asylum claim, stating that their accounts were “inconsistent and unsupported by documentary evidence”.

They appealed to the First-tier Tribunal, but the appeal was dismissed by a judge who “did not accept that the appellants were the target of adverse attention” from MS-13.

The case was then referred to the Upper Tribunal, where Mr Rahman acted as counsel. During the hearing, he argued that the judge failed to adequately assess credibility, made a legal error in assessing documentary evidence, and failed to take into account the impact of internal displacement.

However, these claims were similarly rejected by Judge Mark Blundell, who dismissed the appeal and ruled that “nothing Mr Rahman said, orally or in writing, constituted an error of law on the part of the judge”.

But in a postscript at the bottom of the judgment, Judge Blundell referred to “significant issues” arising from the appeal in relation to Mr Rahman’s legal research.

On reading the grounds of appeal, the judge discovered that some of the 12 authorities cited did not exist, while others “did not support the propositions of law for which they were cited”.

When questioned about this, Mr Rahman appeared “unfamiliar” with legal search engines and was “consistently unable” to say where, in the cases he had cited, the judge should look.

Mr Rahman said he had used “various websites” to conduct his research, and the judge noted that one of the cases cited had recently been cited incorrectly by ChatGPT in another legal case.

Judge Blundell stated that given Mr Rahman “knew nothing” about any of the authorities he cited, some of which did not exist, his submissions had been “misleading”.

“In my judgment it is highly likely that Mr Rahman used generative AI to formulate his grounds of appeal in this case and attempted to conceal this fact from me during the hearing,” Judge Blundell said.

“He has been called to the Bar of England and Wales and it is simply impossible that he misunderstood all the authorities named in his grounds of appeal to the extent I have set out above.”

He concluded that he was now considering reporting Mr Rahman to the Bar Standards Board.
