The conversation you’ve probably had
You mention you used AI to draft something. A colleague raises an eyebrow. A manager asks if that’s really appropriate. A client says they prefer to know everything is human-written. These conversations are not going away — and winning them matters as much as using AI well.
The most important thing to understand about AI skeptics is that most of their concerns are legitimate. AI does produce hallucinations. It can homogenize voice. It raises real questions about data privacy, attribution, and authorship. Dismissing these concerns as technophobia doesn’t build trust — it confirms the skeptic’s suspicion that AI enthusiasts aren’t thinking carefully.
The most effective case for AI isn’t “AI is great and you’re wrong to worry.” It’s “here’s exactly how we use it, what we check, and where the human judgment stays in the loop.” That argument is harder to object to, because it doesn’t require the skeptic to trust AI — it only requires them to trust you.
Key Insight
The strongest case for AI use is a demonstration of control, not enthusiasm. When you can show someone your verification workflow, your QC checklist, and your voice brief, you’re not selling AI — you’re showing them a disciplined professional process that happens to include AI.
“How do we know it’s accurate?” This is the hallucination concern. Answer it with your verification workflow: “Every piece of AI-assisted content with specific claims goes through a three-tier verification process before it leaves my desk. Here’s what that looks like.” Show the process, not just the assurance.
“Will it sound like us?” This is the voice concern. Answer it with your brand voice brief: “We give AI a specific voice brief that defines our tone, vocabulary, and style. AI output gets edited against that standard before it’s used.” Having the brief in hand makes this concrete.
“What about confidential information?” This is the data privacy concern. Answer it with your content risk map: “Our policy is that confidential client information, unreleased data, and legally sensitive content don’t go into AI tools. Here’s how we define those categories.”
“Isn’t this just cutting corners?” This is the quality concern. Answer it by reframing: “AI handles the structural scaffolding faster. That frees up more time for the editorial judgment, relationship context, and review that AI can’t do. The final quality standard hasn’t dropped — the path to get there is more efficient.”
Don’t oversell. Claims like “AI does everything in seconds” or “it’s basically perfect” invite skepticism and set you up for credibility problems when something goes wrong. Don’t minimize legitimate concerns. And don’t hide AI use when it’s relevant — if someone asks directly, honest disclosure is always the right answer.
Build your skeptic response playbook. You’ll identify the AI skeptics in your professional world, map their concerns, and draft responses you can use in real conversations.
Step 1: Identify two or three people in your professional life who are skeptical of AI use in communications — a manager, a client, a colleague. For each person, write their most likely concern in one sentence.
Step 2: For each concern, draft a one-paragraph response that acknowledges the concern, explains your actual process, and points to a specific safeguard (your verification workflow, voice brief, or content risk map).
Step 3: Ask AI to role-play the skeptic for each person. Give it the concern you wrote in Step 1 and have it push back on your response. Refine until the response holds up.
Step 4: Edit AI’s suggestions so the final language sounds like you, not like a prepared statement. The response needs to be natural enough to say out loud in a real conversation.
Save your skeptic response playbook as Module 8’s output. Two to three pages covering the most common objections you’re likely to face, with responses you’ve already rehearsed.