A recent incident involving Sullivan & Cromwell, one of the nation’s most prominent law firms, underscores the growing legal and reputational risks associated with the use of artificial intelligence. In a filing before the U.S. Bankruptcy Court for the Southern District of New York, the firm submitted a motion containing numerous errors generated by A.I., including fabricated case citations and inaccurate quotations from real decisions.
These errors were attributed to “A.I. hallucinations,” in which A.I. systems generate plausible but incorrect legal information. Courts have increasingly confronted filings containing A.I.-generated inaccuracies, including a widely reported 2023 case in which attorneys were sanctioned for submitting fictitious authorities produced by ChatGPT. In response, organizations such as the American Bar Association have issued guidance emphasizing that attorneys must independently verify all A.I.-assisted research and drafting.
Key Takeaways for Businesses and In-House Teams
While A.I. tools can improve efficiency, this incident highlights several important considerations:
- Accuracy remains a legal obligation: Courts expect filings to be grounded in verified authority, regardless of the tools used to prepare them.
- Reputational and strategic risk: Errors of this nature can undermine credibility with courts and opposing parties.
- Internal controls are critical: Organizations using A.I. in legal workflows should implement clear review protocols and accountability measures.
- Oversight is important: Businesses should understand how staff are integrating A.I. into their practice and establish appropriate safeguards.
As A.I. adoption accelerates, companies should ensure that staff are using these tools responsibly and with appropriate oversight.
Our firm continues to monitor developments in this area. If you have questions about implementing A.I. tools in your legal or compliance functions, contact us at info@tristancervantes.com.