
ChatGPT and the Law: How AI Opens “Pandora’s Box” for Attorneys and Clients Alike

August 14, 2023

Love it or hate it, AI is coming. Once considered a far-off technology that might not arrive for another 50 years, if at all, it now appears that a fully functioning “general AI” may be just around the corner. And while it is currently impossible to imagine how such a disruptive technology would affect American society, we are already seeing how its little brother, the “large language model,” is creating headaches for people in every walk of life.


A large language model is not quite what anyone would call a genuine cognitive AI. Rather, it is a type of AI that is fed an enormous stream of existing content and trained to reproduce that content in plain language when prompted with a question. That alone makes it a massively powerful tool (or potential replacement) for researchers, writers, students, and anyone else who needs to generate written content or pull answers out of huge swaths of text.


The big issue with current AI products like ChatGPT, though, comes down to the quality of the information used to train them and the method by which they produce their “answers.” Like any process, if you feed it inaccurate or contradictory information, the output quickly becomes suspect. It’s the age-old mantra of “garbage in, garbage out.” And because most current AIs are trained on huge buffets of written content scraped from the public internet, serious problems arise when they are asked to weigh in on disciplines that typically require a human practitioner with advanced degrees and research credentials.


Combine that with the fact that AIs have no concept of “truth” in any strict sense, and that they will always attempt to generate an answer for a given prompt, and you run into situations where the AI confidently presents a result that sounds good to the reader but is partially or entirely inaccurate or fabricated. It is something like the software equivalent of the “one-upper” who can never admit to not knowing a topic well, and will grasp at straws to appear competent in a conversation.


If this sounds like a recipe for disaster in the legal field, that’s because it is. The most obvious problem is the average layperson relying on ChatGPT for what amounts to legal advice. In the internet age, people are used to researching topics like law and medicine online, exhaustively googling their issue until they have enough information to feel informed. But with web articles and forum discussions, the context of the information tends to tell the reader how much trust to place in it.


With ChatGPT, there are no context clues as to how legitimate the information it provides might be. The material that trained it on the question at hand could have come from anywhere, and as noted above, the model will deliver its answer in a well-composed, authoritative-sounding way regardless of the quality of the underlying data. For the average person who just wants answers, this end product seems borderline miraculous, and many will likely trust it outright without doing any supplementary research.


For ordinary people with legal needs, this is a tragedy waiting to happen. But what about legal practitioners? If you practice law, you would likely think there is no way any good attorney would rely on something like ChatGPT for legal research. But you would be wrong. There are already countless stories around the web from lawyers who have begun using ChatGPT to help with casework. While these attorneys by and large use the software only to surface initial information before drilling down with proper research, the horror stories have already started hitting the newswire.


Writing motions and other legal documents can be tedious, but it’s part of the job. Unless, that is, you get a bit over-eager, let your professional ethics slip, and decide to let ChatGPT write those motions for you. In a recent aviation liability case, an attorney did just that, with disastrous results: ChatGPT cited multiple cases that it fabricated from whole cloth and cobbled together an otherwise nonsensical motion, which the attorney submitted without any fact-checking. Catch part two of this article series at the end of the month for an in-depth look at this case and others with “interesting” results.
