BrianOnAI

Generative AI Policy Addendum

Supplement to AI Acceptable Use Policy addressing ChatGPT, Copilot, and LLM-specific risks. Covers approved/prohibited tools, data classification for GenAI, output verification requirements, disclosure rules, and code generation guidelines. Includes employee acknowledgment.


Key Insights

Generative AI tools—ChatGPT, Claude, Copilot, DALL-E, and similar technologies—require specific policy guidance beyond general AI acceptable use. Employees need to know which tools are approved, what data can be entered, how to verify outputs, when to disclose AI assistance, and how to handle intellectual property concerns.

This addendum supplements your AI Acceptable Use Policy with GenAI-specific guidance. It covers the tools employees are actually using today and the risks specific to generative AI: training data exposure, output quality issues, copyright concerns, and disclosure requirements.

Overview

Generative AI is already in your organization—whether you've approved it or not. Employees are using ChatGPT for drafting, Copilot for coding, and image generators for content. The question isn't whether to allow GenAI, but how to govern it effectively.

This policy addendum provides the specific guidance employees need for generative AI. Customize the bracketed fields for your organization, get legal review, and deploy to bring GenAI under governance.

What's Inside

1. Purpose & Scope

  • Coverage of all generative AI tools: LLMs, chatbots, image generators, code assistants, voice/audio AI
  • Application to employees, contractors, third parties
  • Relationship to main AI Acceptable Use Policy

2. Approved & Prohibited Tools

  • Approved tools table: tool name, approved uses, data restrictions
  • Prohibited tools and uses:
    • Consumer/free-tier tools for business (unless approved)
    • Entering customer PII/PHI
    • Uploading confidential documents
    • Sharing proprietary code
    • Creating deepfakes or deceptive content
    • Using GenAI for high-stakes final decisions

3. Data Protection Requirements

  • Data classification matrix for GenAI:
    • Public: Any approved tool ✓
    • Internal: Enterprise tools only ⚠
    • Confidential: Not permitted ✗
    • Restricted/PII: Never permitted ✗
  • Training data concerns and vendor data protection terms
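Organizations that enforce the matrix programmatically (for example, in a DLP gateway or browser plugin) can treat it as a simple lookup. The sketch below is illustrative only; the tier names and function are examples, not part of the policy template:

```python
# Illustrative sketch of the GenAI data-classification matrix as a lookup.
# Classification tiers and tool tiers mirror the matrix above; the names
# here are placeholders, not terms defined by the policy template.

ALLOWED_TOOL_TIERS = {
    "public": {"approved_consumer", "enterprise"},  # Any approved tool
    "internal": {"enterprise"},                     # Enterprise tools only
    "confidential": set(),                          # Not permitted
    "restricted": set(),                            # Never permitted (PII/PHI)
}

def genai_use_permitted(data_classification: str, tool_tier: str) -> bool:
    """Return True if data of this classification may enter a tool of this tier."""
    allowed = ALLOWED_TOOL_TIERS.get(data_classification.lower())
    if allowed is None:
        # Unknown classification: fail closed, as most policies require.
        return False
    return tool_tier in allowed
```

Note the fail-closed default: data with an unrecognized classification is blocked rather than allowed, which matches the conservative posture most GenAI policies take.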

4. Output Quality & Verification

  • Mandatory human review requirement
  • Verification responsibilities
  • Fact-checking for factual claims
  • Code review for generated code
  • Legal review for legal content

5. Disclosure Requirements

  • When AI use must be disclosed
  • Customer-facing content requirements
  • Regulatory submission requirements
  • Marketing and advertising requirements

6. Intellectual Property

  • Ownership of AI-assisted work
  • Copyright considerations
  • Confidential information protection
  • Respect for third-party intellectual property

7. Incident Reporting

  • What constitutes a GenAI incident
  • Reporting procedures
  • Examples: data exposure, harmful output, copyright claims

8. Policy Compliance

  • Acknowledgment requirement
  • Violation consequences
  • Questions and guidance contacts

Who This Is For

  • AI Governance Teams developing GenAI policies
  • HR/Legal implementing employee guidelines
  • Compliance Officers managing GenAI risks
  • IT governing approved tools
  • All Employees understanding GenAI expectations

Why This Resource

GenAI moves too fast for policies that take months to develop. This addendum provides a ready-to-customize framework—fill in the bracketed fields, customize for your approved tools and data classification, get legal review, and deploy. You can have GenAI governance in place within days.

The focus on practical guidance (approved tools, data classification, verification requirements) makes the policy actionable for employees.

FAQ

Q: Should GenAI be a separate policy or addendum?

A: An addendum to your existing AI Acceptable Use Policy is typically faster to implement and easier to update as GenAI evolves. A separate policy works better if your main policy doesn't exist yet or needs major revision.

Q: How do we determine which tools to approve?

A: Evaluate data handling practices (does the vendor train on your data?), security certifications, enterprise features (audit logs, admin controls), and contract terms. The approved tools table lets you specify restrictions for each approved tool.
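An approved-tools table like the one the addendum describes can also be kept as structured data, so the per-tool restrictions are checkable mechanically. This is a hypothetical sketch; the tool names, fields, and values below are placeholders for illustration, not vendor facts:

```python
# Hypothetical approved-tools table expressed as data. Tool names, fields,
# and values are placeholders; a real table would come from the completed
# policy addendum after vendor evaluation.

APPROVED_TOOLS = {
    "ExampleChat Enterprise": {
        "approved_uses": ["drafting", "summarization"],
        "max_data_classification": "internal",
        "vendor_trains_on_data": False,  # a key evaluation criterion
        "audit_logs": True,
    },
    "ExampleCode Assistant": {
        "approved_uses": ["code_generation", "code_review"],
        "max_data_classification": "internal",
        "vendor_trains_on_data": False,
        "audit_logs": True,
    },
}

def is_approved(tool: str, use: str) -> bool:
    """Check whether a tool is on the approved list for a given use."""
    entry = APPROVED_TOOLS.get(tool)
    return entry is not None and use in entry["approved_uses"]
```

Keeping the table as data rather than prose makes it easy to audit and to wire into tooling (e.g. an intake form or proxy allowlist) as the approved list changes.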

Q: What about personal GenAI use on company devices?

A: The policy should clarify whether personal use is permitted on company devices/networks. Many organizations prohibit it to prevent inadvertent business data exposure.

Ready to Get Started?

Sign up for a free Explorer account to download this resource and access more AI governance tools.

Create Free Account