Ask the Expert: Governing the Supernatural: Building Your AI Business Strategy

This article, written by attorney Brian Bouchard, was originally published by the NH Tech Alliance and Seacoastonline.com.


AI is here, and its use is inevitable. For small businesses and startups, AI is practically a competitive requirement.

Imagine starting a business with full agentic AI at your disposal. This is AI that can create (i.e., generative AI) but also reason, process, respond to environments and act within the digital universe. Need a marketing plan with ad word targets? Done. A financial plan with populated loan application forms? Done. Reviewing resumes and cover letters for new employees? Done. An executive assistant to answer phone calls, manage calendars, bill clients, and book travel? Done. Tech companies are rapidly developing suites of apps to handle these administrative business functions—at least on an elemental level.

Before unleashing AI, however, it must be tamed and trained—not just by large tech companies, which are spending millions to out-program AI’s demons, but by individual businesses. For small businesses and startups, taming AI involves understanding its risks and setting parameters for its use through a comprehensive business plan.

AI Legal Risks

The risks of using AI are extensive. They include misinformation and hallucinations; copyright infringement for AI-created content; cybersecurity and data privacy breaches; defamation (particularly if AI is used to draft published content); disclosure of proprietary or privileged information; algorithmic discrimination (including in the workplace, in reviewing renter or loan applications, and in evaluating medical records); and one-sided indemnification provisions in provider contracts (the recent consideration of Indeed as an “employment agency” is an alarming bellwether for this). It takes only one data breach or discriminatory event to wreak havoc on any organization, so caution is paramount.

Governance Best Practices

Succeeding with AI starts with proper business governance. Every business should adopt an AI plan that covers when AI will be deployed, how it will be used, and how it will be constrained and monitored. Waiting to develop formal governance is unwise. Businesses should scale with AI, addressing its implications proactively, rather than waiting until widespread use and destructive habits have already formed.

  • Identify Accepted AI Tools and Uses: Every AI plan should identify the permitted tools, providers, and uses of AI. Welcoming AI into your business shouldn’t be an all-or-nothing approach. A business might decide that some tasks, such as using AI to write client letters, are acceptable, while using AI to write publishable copy is not. Keep an inventory of AI tools and uses.
  • Identify Prohibited Uses: Perhaps more important than identifying accepted uses, every AI plan should identify what uses are strictly prohibited. These include any use that might disclose proprietary, confidential, privileged, or personally identifiable information.
  • Address Data Use: Every plan must also address how AI tools will comply with privacy laws in the United States and the European Union governing data collection, processing, and storage.
  • Identify Allowed Users: Just as businesses should intentionally select specific AI tools, they should also limit which employees or agents use those tools. Higher-risk roles, such as product development, marketing, and recruiting, may have limited AI access or receive additional scrutiny regarding their AI use.
  • Include an Audit and Monitoring Procedure: The most important provision of any AI plan is its audit procedure. This provision must identify how and when the business will monitor AI to ensure legal risks are avoided, mitigated, and contained. This includes receiving periodic vendor audit reports about data usage, storage, and bias.
  • Reinforce Human Responsibility: Trusting AI to perform correctly—particularly generative models—is frighteningly easy, but the technology is not infallible. Users must know they are ultimately responsible and must carefully verify all results. AI is still prone to hallucination and discrimination. Saying “the algorithm did it” is never a defense.
  • Provide for Mandatory Reporting: Users must report any known violation of the plan, including instances of discrimination and the unauthorized disclosure of confidential information.
  • Training: As your business expands, training will be paramount. The risks of corporate AI use are nuanced, complex, and far-reaching. Owners, managers, and frontline employees should receive training about how to use AI safely.
  • Regulatory Compliance: The AI regulatory landscape is evolving rapidly. Colorado and New York City are among the first jurisdictions to enact AI-specific laws, and this patchwork will grow significantly in the next year or two. Businesses must be aware of emerging laws and ready to comply.

Most businesses like AI for its cost savings, productivity, and innovation benefits. But, in my experience, few know where to start. My recommendation: start with an AI business plan and good governance practices. Only 26% of companies have them, but an AI plan will force you to answer key questions about how AI can be used safely, securely, and effectively.