Module 9 of 10

Your Team’s AI Use Policy

What Happens Without a Policy

Without shared guidelines, every person on your team improvises. One colleague enters client names into a public AI tool. Another refuses to use AI at all because it feels risky. A third uses AI for everything and reviews nothing. Three people, three different risk profiles, zero coordination. A policy doesn’t require everyone to use AI the same way — it just sets the floor that everyone agrees to.

Why a Policy Is a Tool, Not a Rule

Quick note on scope: In AI Made Simple, you designed a personal workflow for how you use AI — when to use it, when to review, when to hold back. This lesson scales that thinking to a team document. The audience for what you build here is everyone you work with, not just yourself — and the goal is shared clarity, not individual discipline.

The word “policy” makes people think of legal documents and compliance obligations. That’s not what this is. A communications team AI use policy is a practical document that answers three questions: what is AI approved for, what requires additional review, and what is off-limits. It exists to replace individual improvisation with shared expectations — and to protect both the team and the organization when something goes wrong.

A good policy is short enough that people actually read it, specific enough to be useful, and realistic enough that people follow it. A ten-page policy that no one reads is worse than a one-page policy that becomes part of how the team works.

Key Insight

The best AI policies don’t try to predict every scenario. They establish principles clearly enough that people can reason about new situations on their own. “Don’t input confidential client information” applies to every tool, every situation, every year — regardless of how the technology changes.

The Four Sections Every Policy Needs

Approved uses. What is AI explicitly authorized to help with on your team? Internal drafts, research summaries, social copy for review, email templates, talking point generation. Be specific so people don’t have to guess. If it’s on this list, they can use AI without asking each time.

Required review steps. What level of review is required before AI-assisted content goes external, gets attributed to a named person, or enters a regulated context? Map this to your content types. “Any AI-assisted press release must be reviewed for factual accuracy before distribution” is a reviewable, enforceable standard.

Prohibited uses. What is explicitly off-limits? Confidential client information in public tools. Unreleased financial data. Content that will be represented as entirely human-written when it isn’t, in contexts where that matters. Regulated content without sign-off. Keep this list short and clear — the items on it should be genuinely non-negotiable.

Version and review date. AI tools and organizational needs change faster than most policies. Include the date the policy was written and a planned review date. A policy that’s eighteen months old in a field that moves this fast may no longer reflect your team’s actual practice.

Getting It Adopted

A policy no one knows about doesn’t work. Share it in a team meeting. Make it findable. Connect it to the work people are already doing rather than presenting it as a new compliance requirement. The framing matters: this is a shared framework that makes everyone’s AI use more defensible, not a restriction on how people work.

AI does well at…

  • Drafting a policy framework from your bullet-point inputs
  • Suggesting clauses or sections you may have missed
  • Rewriting a draft in plain language for non-technical audiences
  • Generating an FAQ based on common team questions
  • Producing a shorter summary version for quick onboarding

AI doesn’t replace…

  • Deciding what your organization actually allows — that requires leadership input
  • Legal or HR review if the policy has formal standing
  • Getting team buy-in — that happens through conversation, not documentation
  • Updating the policy when tools or circumstances change
  • The judgment calls people will face that no policy fully anticipates

Today’s Activity

Draft a one-page AI use policy for your communications team. You’ll use AI to generate the initial framework and then customize it to reflect your organization’s actual context.

Step 1

Write a brief for AI: describe your team size, the types of content you produce, your industry, and any known constraints (regulated sector, client confidentiality requirements, leadership sign-off requirements). Be specific.

Step 2

Ask AI to draft a one-page AI use policy for your communications team using your brief. Request four sections: approved uses, required review steps, prohibited uses, and version date.

Step 3

Review the draft against your content risk map from Module 5. Align the policy’s categories with the risk tiers you already defined. The policy should feel like a natural extension of your existing frameworks, not a separate document.

Step 4

Edit for plain language. Remove any clause that sounds like a legal document but isn’t actually necessary. If a colleague who doesn’t think about AI daily would find something confusing, rewrite it.

Step 5

Save your policy draft as Module 9’s output. Even if you can’t formally adopt it without leadership input, having a draft ready makes that conversation easier to start.

✏️ Quiz

Test Your Knowledge

Take a short quiz to reinforce today’s key ideas.
