A federal appeals court in New Orleans is weighing a proposal that would require lawyers to certify whether they used artificial intelligence programs to draft briefs, attesting either that a human independently reviewed the accuracy of any AI-generated text or that they did not rely on AI in their court filings.
In a notice issued Nov. 21, the 5th U.S. Circuit Court of Appeals unveiled what appears to be the first proposed rule among the nation’s 13 federal appeals courts aimed at governing the use of generative AI tools, including OpenAI’s ChatGPT, by lawyers appearing before the court.
The proposed rule would apply to lawyers and to litigants appearing before the court without counsel, requiring them to certify that, if an AI program was used to generate a filing, its citations and legal analysis were reviewed for accuracy. Under the proposal, attorneys who misrepresent their compliance could have their filings stricken and face sanctions. The 5th Circuit is accepting public comment on the proposal until Jan. 4.
The proposed rule comes as judges across the country grapple with the rapid spread of generative AI programs such as ChatGPT and weigh what safeguards are needed as the evolving technology enters their courtrooms. The risks of lawyers using AI drew national attention in June, when two New York attorneys were sanctioned for submitting a legal brief containing six fictitious case citations generated by ChatGPT.
In October, the U.S. District Court for the Eastern District of Texas adopted a rule, effective Dec. 1, requiring lawyers who use AI programs to “evaluate and authenticate any computer-generated content.”
In notes accompanying the rule change, the court cautioned that “frequently, the output of such tools might be factually or legally incorrect” and stressed that AI technology “should never substitute for the abstract thinking and problem-solving capabilities of lawyers.”