By Sara Merken
Dec 27 (Reuters) - Generative artificial intelligence technology made its mark on the world in 2023, and courts of law were no exception.
U.S. judges grappled with the use of evolving AI tools in their courtrooms, particularly after lawyers made headlines for submitting legal briefs with fictitious case citations that were generated by tools like OpenAI's ChatGPT.
Experts said the courts – and the broader legal industry – would feel even greater impacts from generative AI in 2024 and beyond.
"I think we are just seeing the beginnings of it right now. And I think that the changes are going to accelerate in the years ahead," said Andrew Perlman, the dean of Suffolk University Law School.
Two New York lawyers were sanctioned in June after a judge found they filed a brief with six fake, AI-generated case citations and later misled the court. A Colorado lawyer was temporarily suspended from practicing law in November over a similar episode.
The lawyers in those cases said they misunderstood the technology. But judges have said that's no excuse.
A growing number of judges have issued orders since the spring governing how attorneys with cases before them can use AI tools, which are prone to making things up.
The orders fall into several categories, said Shannon Capone Kirk, global head of advanced e-discovery and AI strategy at law firm Ropes & Gray. Some seek to educate, while others prohibit the use of AI altogether. Most require disclosure of the use of AI or verification of the information, she said.
Kirk's team has identified 17 such federal or specialty court orders and one state court order so far. She said she expects that in 2024, "we will see a multitude of these orders coming out, either by specific judges or whole courts."
U.S. District Judge Brantley Starr of the Northern District of Texas in May became one of the first U.S. judges to require lawyers to certify they did not use AI to draft their filings without a human checking their accuracy.
Judges in other jurisdictions across the country followed with their own guidance or mandates.
The U.S. District Court for the Eastern District of Texas introduced a rule that took effect this month requiring lawyers using AI programs to "review and verify any computer-generated content."
The 5th U.S. Circuit Court of Appeals, which covers federal courts in Texas, Mississippi and Louisiana, proposed a similar certification requirement last month and is accepting public comment through Jan. 4. Lawyers who misrepresent their compliance with the rule could be sanctioned and have their filings stricken.
Bar associations are also weighing the use of generative AI in legal practice. The American Bar Association in August formed a group to assess AI's impact and to probe ethical questions that the technology poses.
The California Bar last month approved new guidance for lawyers using the technology, and the Florida Bar's professional ethics committee has issued a proposed advisory opinion and is accepting public comment.
Suffolk University's Perlman said explicit rules on lawyers' use of generative AI are "ill-advised." Existing professional conduct and other rules for lawyers already cover their obligations to vet documents for accuracy, he said.
"Lawyers regularly use AI and generative AI without even realizing it," such as in legal research tools or in Microsoft Word, he said.
"I think generative AI is going to be the most transformative technology that the legal profession has ever seen," he said.
(Reporting by Sara Merken)