Wading into the unknown: Important generative AI policy considerations

This article, written by attorney Cassandra Rodgers, was originally published by Seacoastonline.

While “artificial intelligence” or “AI” is not a new term, it has emerged as the buzzword of the last several years.  With OpenAI’s launch of ChatGPT on Nov. 30, 2022, and the barrage of other generative AI models since, no business or industry is immune from the question: how can we appropriately take advantage of generative AI?

This question recognizes two truths of generative AI use: (1) generative AI offers valuable efficiencies; and (2) it comes with risks.  Those weighing these conflicting realities of generative AI use find themselves wading into the unknown—desiring guidance, but coming up virtually empty.  Without clear federal or state guidance by way of judicial decisions or legislation, businesses and industries are forced to draft their best guess at effective generative AI policies.

The remainder of this article identifies the overarching themes, derived from bar association guidance published to date, that businesses or industries should consider when drafting and implementing generative AI policies or guidance:

  1. Review Inherent Risks.  Conducting a cost-benefit analysis is commonplace in running any business and businesses’ use of AI should be no exception.  In order to effectively weigh the use of AI and any particular platform, businesses need to be aware of the risks associated with generative AI.  While the following is not an all-inclusive list, characteristic risks associated with generative AI include: (1) false information; (2) biased responses; (3) third-party review or use of data; (4) inadvertent disclosure of confidential information; and (5) security breaches.  Though some of these risks can be addressed through training or informed selection of AI platforms, the first step of risk management is learning of the risk.
  2. Scrutinize AI Platforms.  There is no “one-size-fits-all” with AI platforms.  In fact, not every generative AI service offered by a particular company is the same.  It is critical that you know your options and scrutinize potential generative AI providers’ terms of use.  Ask yourself questions such as:
    • What security measures are in place?
    • What is the provider’s policy on retention or use of data?
    • Who has access to the data?
    • What obligations are imposed to ensure compliance with the terms of use?
    • Is there an obligation to notify your business upon a breach of protocol or security?

In doing this diligence, consider the reputation of the AI platform and review any grievances raised by current users.  Significantly, recognizing your own knowledge limitations is crucial to this analysis: while you don’t have to be the AI expert, you should rely upon those who are (such as IT or cybersecurity professionals).

  3. Provide and Enforce Policies.  It is not enough simply to have a policy; it must also be updated and enforced.  Those in managerial or supervisory positions should: (1) clarify policies surrounding permissible use; (2) make reasonable efforts to ensure compliance; and (3) promote and monitor regular training.
  4. Education and Training.  As with any tool, effective use of generative AI takes practice and requires education.  Individuals must learn not only how to use AI, but also about platform variety, providers’ policies, and your business’s policies on use.  The rapid evolution of these tools also necessitates continuing education.
  5. Acknowledge User Responsibility.  Generative AI use is a fact-intensive venture.  Accordingly, it requires a level of independent verification.  Professional judgment should not (or, in the case of the lawyer bound by ethical obligations, cannot) be substituted.  These AI platforms are constructed tools; they are not infallible.
  6. Filter Out Confidential Information.  Again, for the lawyer subject to ethical rules, safeguarding client confidences is nonnegotiable.  For those in the business sphere, the desire to protect both your and your clients’ confidential information may be driven by legal compliance or best practice.  Businesses should carefully consider generative AI inputs.  For those using self-learning platforms, inputs may be saved and later used in a response to others not privy to the information or, worse, to your client’s competitors.  AI platforms are third parties, and you should exercise the same degree of diligence in protecting confidences as you would with any other third party.
  7. Disclosure.  While this article concentrates on the risks associated with generative AI, your business is, nevertheless, likely to adopt it in some capacity.  Irrespective of the underlying motivation for adoption, AI policies should incorporate a degree of disclosure and client informed consent.
  8. Stay Current.  In the time it has taken to write this article, a new AI-based feature has likely emerged.  Stay vigilant in your assessment of AI-based services, scrutiny of terms of use, cost-benefit analysis in adopting a platform, training on generative AI use, and, of course, policy revision and enforcement.

No one can say for certain where the next AI wave will lead, but don’t let your business or industry be left adrift by a failure to recognize and adopt these trending policy safeguards.