Why Lawyers Should Be Careful with AI
A recent article by Julie Sobowale at The Walrus highlights the plight of Canadian lawyer Chong Ke, a lawyer with the Westside Family Law Clinic in Vancouver.
Ke represented a father applying to get more time with his kids. However, when lawyers representing the mother examined the application, they noticed that two of the cases cited didn’t exist.
That’s because Ke used ChatGPT to draft the application. The AI “hallucinated” the cases, causing the judge to deny the application. The Law Society of British Columbia is now investigating Ke and weighing possible disciplinary action.
However, while Ke is the first Canadian lawyer to face consequences for using AI, she's far from the first lawyer to do so. In Colorado, a lawyer received a 90-day suspension for submitting an AI-generated document with false citations. In New York, there have been two separate cases of lawyers submitting AI-generated documents with hallucinated citations.
That said, the biggest wave of AI use may not come from lawyers. Instead, it may come from laypeople using AI systems to represent themselves. As a recent study by Allens pointed out, AI systems, especially publicly available ones, struggle with accuracy when dealing with legal issues.
However, these issues haven’t stopped some AI products from being marketed to pro se litigants, even if some of those efforts have not gone to plan.
This raises a simple question: Why is combining AI and the legal field such a dangerous but tempting mix? The answer comes down to one thing: time.
Why You Shouldn’t Trust AI
The problem with AI systems is simple: In their current state, AIs are prone to hallucinations. Essentially, they make information up, repeat incorrect information, or mistake parody for fact.
While this is bad enough when a student uses an AI to generate an essay, it is much worse in the legal field. Lawyers have an obligation to ensure that their filings are accurate, and failure to do so can have severe consequences for both them and their clients.
As such, submitting AI-generated documents to the court is risky. David Wong, the chief product officer at Thomson Reuters, said in an interview with the San Francisco Chronicle that he encourages lawyers to treat AI like a “smart intern” whose work needs to be thoroughly vetted.
This is true even of AI systems built explicitly for legal use. As The Walrus article noted, Lexum, an AI created by CanLII, a Canadian legal research website, has an error rate of 5 percent. While that is impressive, it still means roughly one in every twenty outputs will contain an error.
Because of that, submitting AI output without extensive human verification is deeply problematic. So, while there are ways AI may be able to help lawyers, it’s going to be a long time before it’s wise to submit an AI-generated filing.
Why It’s Tempting
In many ways, the legal field seems like the last one that should be turning to AI for shortcuts. It is highly specialized, and mistakes can have dire consequences. AI seems like a high-risk, low-reward gamble.
However, as The Walrus reported, many lawyers struggle with heavy caseloads and tight deadlines. According to the Federation of Law Societies of Canada, more than half of all lawyers experience burnout.
Just like a student struggling to finish an essay, a lawyer facing too many deadlines can be tempted to generate a document. However, where a student risks a bad grade or school disciplinary action, lawyers who take this shortcut put both their careers and their clients at risk.
This can be especially difficult for new and inexperienced lawyers. Not only do they often get a lot of the “grunt” work, but they may also lack expertise or interest in the areas in which they are working.
Put these factors together, and it’s easy to see why AI is tempting. Though it is a bad idea, I can understand why some lawyers might turn to it, even when it’s inappropriate.
Better Uses of AI
None of this is to say that AI has absolutely no place in the legal field. Instead, as with other fields, it’s best as a tool to aid human work, not replace it.
In the San Francisco Chronicle article, litigation consultant Greg McCullough discussed using Everlaw to parse tens of thousands of documents to find the most important ones to review.
The legal field routinely requires this of attorneys and those who work with them. Parsing thousands of documents by hand is time-consuming and can be frustrating when only a handful are relevant. While AI doesn’t replace human analysis, it can aid it.
However, even that is risky. What if the AI misses a vital document or fundamentally misunderstands the case? Relying on AI summaries and document analysis may not bring as much trouble as filing false documents with the court, but it isn’t risk-free either.
If AI always needs to be checked by humans, is it worthwhile? Does it save time?
Probably not. But it does offer something else: humans make mistakes, too, and having an AI check behind them can provide an extra layer of protection.
Simply put, AI is not good enough to replace humans, especially in the legal field. However, it can sometimes support humans, which may be very helpful to some lawyers and their clients.
Bottom Line
The legal field needs to be cautious about when and how it uses AI. Obviously, using ChatGPT to generate a legal filing without human oversight is reckless. However, as we’ve seen, lawyers have done precisely that.
The rush to embrace AI needs to be tempered by the risk involved. AI is not a magic bullet in any profession, and the legal field presents unique challenges to its adoption.
I’d imagine a slow-to-no-adoption approach is best for most legal professionals. I am not using AI in my work as an expert witness and have no plans to adopt it.
That said, it makes sense to seek ways AI can supplement human work. Rather than trying to use AI to save time, lawyers and other legal professionals should use AI to do better work. Have it be an extra pair of “eyes” on a crush of documents or help proofread your writing.
It may not be the shortcut that so many want, but if it improves outcomes, it can still be very useful.