
Once upon a time, I formally taught legal research and writing at a variety of institutions. For the most part, students did what they were told given enough instruction and examples, with a few exceptions. I remember two times students went off the beaten path and did things not quite as intended.
Instance 1: Students were to write an objective memorandum (legal research 101 stuff). While students were told they could collaborate with each other, they were explicitly told to submit their own unique papers. Out of a class of 21, two students submitted papers that were identical. I mean, same periods, same commas, word for word the same. Because neither would say whose paper it was (i.e., who actually wrote it), they both got zeros for that assignment.
Instance 2: For a homework assignment, students were to derive search queries and locate information using legal databases. One enterprising student copied an entire question and pasted it into Google. In academic circles, this is considered cheating, since the student didn't derive a search query; he just used Google's algorithm to locate information (wrong information, but somehow he got an answer).
In both instances, the students in question insisted they had done nothing wrong. In instance 1, the students saw nothing wrong with turning in the same paper and figured I was supposed to sort out who did what. In instance 2, the student argued that he was being creative and took the issue up with the ethics board (which actually held a hearing). People can get kicked out of law school for things like that. He wasn't, but it was a possibility.
Of course, all of this came to mind when I read an article about the use of artificial intelligence ("AI") in court documents. At issue were two lawyers who used online programs like ChatGPT to locate caselaw to help draft legal documents. The problem is that information cranked out by AI resources often looks right but isn't actually correct. In other words, the cases looked official (correct citation format), but they didn't even exist!
According to the article, a federal court in New York fined a law firm $5,000 for citing case law in a motion that didn't exist. When the attorneys ran their search, the AI service returned several cases. The problem was that while the cases were in proper format and came complete with full decisions, they didn't exist; the AI service (i.e., ChatGPT) made them up. Happens all the time.
The thing most people don't realize about AI is that it isn't infallible. It can give you information quickly (much like Google), but that doesn't mean the information is accurate.
In this case, the lawyers who wrote the motion just presumed that the case law spat out by the AI was the real deal. However, they clearly didn't do their due diligence and Shepardize those cases or otherwise check to see whether they were real.
Much like many students these days, these lawyers just went ahead and used those bogus cases with nary a care in the world, figuring that if the AI said the cases were real, that was good enough for them. Besides, computers never lie, right?
What was most disturbing was that the law firm at which these lawyers were employed didn't realize it had screwed up (i.e., no one checked their work) AND insisted it had done nothing wrong. In its response to the court, the law firm stated:
We respectfully disagree with the finding that anyone at our firm acted in bad faith. We have already apologized to the Court and our client. We continue to believe that in the face of what even the Court acknowledged was an unprecedented situation, we made a good faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth.
First, these lawyers did act in bad faith by not checking the validity of the cases they used (i.e., whether the cases were real).
Second, what the law firm essentially said was: we're a bunch of bohemians who use AI with reckless abandon; we think AI is infallible and would never spit out results that are wrong, incorrect, or misleading; and even if the cases we used don't actually exist, the court was wrong to insist that we, in fact, use cases that do exist (because AI is infallible and is never wrong. Never ever, ever, ever, ever, ever, forever).
Uh huh.
The thing that is lost on this law firm is that AI "lies" all the time. A tool like ChatGPT doesn't look cases up in a database; it generates text based on its training data and the query it is given, which means it can produce convincing-looking citations out of thin air.
But what amazes me is that the attorneys were not disbarred (or at the very least suspended) for recklessly misleading the court and the client. Ignore the fact that the law firm says it didn't act in bad faith. It did, by failing to do its due diligence in researching and correctly preparing court documents, and in doing so it misled the court (and its client) into believing that the cases cited were real.
I mean, come on! This is first-year law student stuff.
But I guess a lack of integrity is what passes for regular practice among lawyers in New York.
Sad, that.