This article, written by attorney Brian Bouchard, was originally published by NHBR and seacoastonline.com.
Recent news stories about artificial intelligence and the revolutionary breakthroughs showcased by OpenAI’s ChatGPT made me curious as an employment lawyer: Can AI accurately answer legal questions and draft employment documents, like non-solicitation agreements? It cannot — at least not with consistent accuracy.
AI churns out confident-sounding, personalized answers, but those answers often miss the mark. Here’s my experience so far. I went for broke on the first question: I asked the software to write an employee non-solicitation clause enforceable in New Hampshire. To my amazement, it wrote one.
Looks passable, right? It even includes buzz phrases like “directly or indirectly.” But looks are one thing; substantive compliance is another. Most problematically, the AI-generated clause impliedly covers all prospective and potential clients of the company, which is verboten in New Hampshire. While prohibiting solicitation of active prospects the employee was courting while employed with the company might fly, it is unlikely a New Hampshire court would enforce the broad language generated here. The clause is also too broadly written because it is not tailored to the legitimate needs of the company. The advice at the end about consulting legal counsel turns out to be good advice.
I next asked the software for the differences between Federal and New Hampshire tip pooling laws. Here’s the response:
Much of this is either misleading or wrong. For example, the response says the “employer is not allowed to keep any portion of the tip pool” but doesn’t explain that the term “employer” includes “managers” and “supervisors.” It also totally fumbled New Hampshire law. In New Hampshire, employers cannot require employees to participate in tip pools: NH RSA 279:26-b requires that any tip pooling or tip sharing be completely voluntary and without coercion.
Changing the search inputs generates different results. Indeed, it seems that the more legal jargon used, the more accurate the results become. Here’s an exchange about ADA (Americans with Disabilities Act) protection:
This answer mostly tracks the ADA’s definition of a “qualified individual with a disability” and isn’t wrong per se. The fact that it personalizes the answer to “Mark” is impressive — but could also just be a fun parlor trick.
The takeaway is that to receive a correct answer from AI, the user often has to be knowledgeable enough to know what to ask and how to ask it. It took me three or four tries to get the ADA answer above, and even then much was left unsaid about what constitutes an essential function of the job. The answer also lacks nuance about when the individual may be entitled to job reassignment or preferential treatment in the interview process as an accommodation. Perhaps most frustratingly, I rarely received the same answer for the same inputs.
Using AI for legal compliance is like playing information roulette. The AI technology is very clever and sometimes will provide the right answers (and very quickly) but often it will not. The user must ultimately be competent enough to know the difference.
None of this is surprising. OpenAI’s website discloses that “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.” Confirmed. Users may be tempted to trust the software anyway. Unlike a Google search that simply directs the user to potentially relevant websites, ChatGPT provides a unique and personalized narrative response in real time. Users can watch the AI type its response. That personal touch creates an alluring but false air of credibility.
Other problems lurk beneath the surface. If anyone thinks that questions posed to ChatGPT are confidential or privileged, think again. Savvy plaintiff’s attorneys might soon ask for ChatGPT inputs and results in their discovery requests. Legal woes could await the HR professional who asked a “hypothetical” question about how to terminate “Mark” despite his alleged disability.
The OpenAI technology is breathtaking and at times a little spooky. At one point in my experiment, it drafted an official-looking legal complaint based on a simple timber trespass inquiry. Technology like ChatGPT could one day provide legal compliance information with consistent accuracy and disrupt our entire white-collar service economy. Today is not that day, however. Until then, businesses still need qualified employment counsel to draft their employment agreements and to help manage all manner of legal compliance.
But if you think information roulette is more your speed, I encourage you to look at the U.S. Department of Labor’s press releases from 2022 showing aggressive (and public) enforcement efforts and huge sums of money collected; you may feel differently about trusting your organization’s legal compliance to AI.