In a landmark case underscoring the ethical implications of artificial intelligence (AI) use, U.S. District Judge P. Kevin Castel recently levied a $5,000 fine on attorneys Steven Schwartz and Peter LoDuca. The duo, from the law firm Levidow, Levidow & Oberman, were penalized for submitting AI-generated fake case citations in a court filing, conduct the judge found to have been in bad faith.

The attorneys admitted to using the AI chatbot ChatGPT to generate the fake citations. While they were not ordered to apologize, they were directed to notify each judge falsely identified as the author of one of the fabricated opinions, as well as their client, of the sanctions.

In response to the order, the law firm said it would comply but disputed the finding of bad faith, maintaining that the mistake was made in good faith and stemmed from an over-reliance on AI technology.

Judge Castel’s 34-page opinion took into account the significant publicity the case had generated and the lawyers’ expressed remorse. He noted that the fake cases were not submitted for financial gain or out of personal animus, and he acknowledged that the lawyers had no history of disciplinary violations and were unlikely to repeat their conduct.

This case serves as a stark reminder of the importance of ethical AI use and the necessity of human oversight in legal proceedings. Much like the Greek myth of Pandora’s box, in which curiosity unaccompanied by caution led to unforeseen consequences, the lawyers’ reliance on AI without understanding its limitations resulted in a significant professional and financial penalty.

The incident underscores the need for legal professionals to stay abreast of AI advancements while exercising due diligence and ethical judgment in applying them. AI can be a powerful tool, but it cannot replace human judgment and oversight, and as this case illustrates, failing to respect that boundary can carry serious consequences. Like Pandora’s box, once opened, AI’s effects can be far-reaching and difficult to undo if not handled with care.