Artificial intelligence has begun reshaping courtrooms across the United States, empowering ordinary citizens to challenge rulings while ensnaring even seasoned lawyers in scandals of fabricated evidence. As AI tools like ChatGPT enter the legal arena, they are exposing both the promise—and peril—of automation in the pursuit of justice.
When Justice Meets the Algorithm
In a California courtroom earlier this year, Lynn White, a tenant facing eviction, defied the odds. Without a lawyer and months behind on rent, she turned to an unlikely ally—ChatGPT. The AI chatbot, along with the search engine Perplexity, helped her identify procedural errors in a judge’s decision, draft court responses, and ultimately overturn her eviction.
“I never, ever could have won this appeal without AI,” White says, recalling the months of litigation that had left her financially and emotionally exhausted.
The Rise of the Pro Se Revolution
Paralegals and attorneys across the country are noticing a marked rise in self-representation, or “pro se” litigation, as individuals use generative AI to navigate complex legal systems. “I’ve seen more and more pro se litigants in the last year than I have in probably my entire career,” said Meagan Holmes of Thorpe Shwer LLP.
For people without access to legal counsel, AI can seem like a lifeline—one that drafts pleadings, explains case law, and simulates legal arguments in seconds. Yet while tools like ChatGPT may appear to level the playing field, experts warn they can also deliver dangerously inaccurate information, including fabricated citations and nonexistent precedents.
“What I can’t understand,” said attorney Robert Freund, “is an attorney betraying the most fundamental parts of our responsibilities to our clients—and making arguments based on total fabrication.”
When Lawyers Fall Into the Same Trap
The pitfalls of AI are not confined to amateurs. Even professional lawyers—trained in evidence and ethics—have fallen prey to the allure of automated assistance.
In one widely reported case, a California attorney was fined $10,000 after submitting a court appeal riddled with fake case citations—21 of the 23 quotes were entirely fabricated. The judge’s scathing decision described it as “yet another unfortunate chapter in the story of artificial intelligence misuse in the legal profession.”
Other examples abound. Energy drink magnate Jack Owoc was sanctioned after filing a motion filled with hallucinated citations, while a New York attorney caught submitting AI-generated text in court was later discovered to have used AI again in his attempt to explain the mistake.
Even tech companies are distancing themselves. Google and xAI both explicitly warn users not to rely on AI for legal advice, cautioning against “high-stakes automated decisions that affect a person’s safety, legal, or material rights.”
Between Access and Accountability
For every cautionary tale, there are also stories of triumph. In New Mexico, fitness entrepreneur Staci Dennett used AI to negotiate a settlement over unpaid debt. Like White, she credited generative AI with giving her the confidence to stand her ground.
Yet the divide between empowerment and peril remains stark. While AI has opened new paths for citizens priced out of justice, it has also created a minefield of misinformation that can derail even legitimate claims.