NY.- Two lawyers expressed their repentance before an angry judge in a federal court in Manhattan and blamed ChatGPT for having misled them in such a way that they included a fictitious legal investigation in a document filed with the court.
Lawyers Steven A. Schwartz and Peter LoDuca could be penalized over a filing in a lawsuit against an airline that cited previous court cases Schwartz believed were real but had in fact been made up by the AI-powered chatbot.
Schwartz explained that he used the innovative program to seek legal precedents to support a client’s case against Colombian airline Avianca for an injury suffered during a flight in 2019.
The chatbot, which has fascinated the world with its essay-like responses to user requests, suggested a number of aviation incident cases that Schwartz had been unable to find through the usual search methods used by his law firm.
The problem was that several of those cases either never happened or involved airlines that didn’t exist.
Schwartz told Judge P. Kevin Castel that he “acted on the misconception … that this website was getting cases from some source that I did not have access to.”
Schwartz said he “failed miserably” to do follow-up research to ensure the references were correct.
“I didn’t understand that ChatGPT could make up cases,” he added.
Microsoft has invested around $1 billion in OpenAI, the company behind ChatGPT.
The success of ChatGPT, which shows that artificial intelligence could change the way humans act and learn, has raised fears among some. Hundreds of industry leaders signed a letter in May warning that “reducing extinction risk from AI should be a global priority on par with other societal risks such as pandemics and nuclear war.”
Judge Castel seemed both puzzled and upset by the unusual incident, and disappointed that the lawyers had not moved quickly to correct the false legal references when Avianca's lawyers and the court first brought them to their attention. Avianca exposed the fake case law in a document filed with the court in March.
The judge confronted Schwartz with a legal case made up by the computer program. The matter was initially described as a wrongful death case brought by a woman against an airline, but morphed into a lawsuit involving a man who missed a flight to New York and incurred additional expenses.
“Can we agree that this is legal nonsense?” Castel asked.
Schwartz said he mistakenly believed that the confusing presentation had resulted from extracts obtained from different parts of the case.
When Castel was done with his questioning, he asked Schwartz if he had anything else to add.
“I want to sincerely apologize,” Schwartz stated.
The lawyer said he had suffered personally and professionally for this blunder and that he felt “ashamed, humiliated and extremely sorry.”
He claimed that he and the firm where he worked—Levidow, Levidow & Oberman—had put in place safeguards to ensure something similar never happened again.
LoDuca, the other attorney working on the case, said he trusted Schwartz and failed to properly review what his partner had compiled.
After the judge read aloud parts of one of the cited cases to show how easy it was to spot its inconsistencies, LoDuca said, "I never thought it was a false case."
LoDuca stated that the result “saddens me greatly.”
Ronald Minkoff, a lawyer for the law firm, told the judge that the delivery of the document “was due to carelessness, not bad faith,” and that it should not lead to penalties.
He noted that lawyers have historically struggled with technology, particularly modern technology, “and it’s not getting any easier.”
“Mr. Schwartz, who does very little federal research, decided to use this new technology. He thought he was using an ordinary search engine,” Minkoff claimed. “What he was doing was playing with live ammunition.”
Daniel Shin, an adjunct professor and deputy director of research at the Center for Legal and Judicial Technology at William & Mary School of Law, said he presented the Avianca case during a conference last week that drew dozens of in-person and online participants from state and federal courts in the United States, including the federal courthouse in Manhattan.
He said the issue caused shock and bewilderment during the conference.
"We're talking about the Southern District of New York, the federal district that handles big cases, from the September 11 attacks to major financial crimes," Shin said. "This was the first documented case of possible professional misconduct by a lawyer using generative AI."
He said the case showed that lawyers might not understand how ChatGPT works because it tends to hallucinate and talk about fictional things in such a way that they seem real but aren’t.
“It highlights the dangers of using promising AI technologies without being clear about the risks,” Shin said.
The judge said he will rule on the sanctions at a later date.