The consistency problem
You reviewed Monday’s AI draft carefully and caught three issues. On Thursday, under deadline pressure, your “review” was a thirty-second read. The issues you would have caught on Monday went out on Thursday. A written checklist doesn’t compress under pressure the way a mental habit does.
Most communicators who use AI review their output before publishing. The problem isn’t the intention — it’s the inconsistency. Without a written process, the review expands and contracts based on time available, confidence level, and how polished the output looks. A convincing-looking draft gets less scrutiny than a rougher one, even though both carry the same underlying risk.
A quality control checklist removes that variability. It defines the minimum standard of review for AI-assisted content, regardless of deadline pressure or how good the draft looks on first read. It’s the difference between a process and a habit — and processes are more reliable.
Key Insight
A QC checklist isn’t meant to slow you down — it’s meant to make you faster by removing the cognitive load of deciding what to check each time. Once the checklist exists, the only question is whether you ran it.
The review itself breaks down into four categories.
Accuracy. Are the facts, statistics, dates, and named references correct? Were they in your original brief, or did AI generate them? This maps directly to your verification workflow from Module 3. Any Tier 1 item on your verification checklist belongs here.
Voice. Does the output sound like your organization? Have you run it against your voice brief from Module 4? Did the AI tells from Module 6 get edited out? Read it aloud. If you wouldn’t say it that way, rewrite it.
Legal and compliance. Does the content contain any specific claims, regulated language, or confidential information? Does it fall into any of the categories on your content risk map from Module 5 that require review before external use?
Format and completeness. Is everything that should be there, there? Are required disclosures included? Is the structure correct for the format (press release, internal memo, social post)? Is anything missing that your brief asked for?
A checklist that takes twenty minutes to run will be skipped when you’re busy. Design yours to be completable in under five minutes on a typical piece of AI-assisted content. That means fewer, more specific items — not a comprehensive audit, but a targeted scan of the highest-risk elements. The goal is a checklist you actually run every time, not an ideal one you run when you have time.
One practical format: a numbered list of ten to twelve items, organized under the four category headers, with a single yes/no decision for each. If everything is yes, it’s ready to go. If anything is no, you know exactly what to fix.
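For illustration, a partial skeleton in that format might look like the following. Every item below is placeholder wording; your real items should come from your own Module 3, 4, 5, and 6 outputs, and you would expand this to the full ten to twelve.
Accuracy
1. Are all statistics, dates, and named references sourced from my brief or verified independently?
2. Has every fact AI introduced been checked against a source?
Voice
3. Does the draft match our voice brief in tone, vocabulary, and rhythm?
4. Have the AI tells on my Module 6 list been edited out?
Legal/Compliance
5. Is the content free of regulated language, specific claims, and confidential information, or has it cleared the required review?
6. Is it clear of every risk-map category that needs sign-off before external use?
Format
7. Are all required disclosures, and everything the brief asked for, present?
8. Does the structure match the target format (press release, internal memo, social post)?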
Build your QC checklist. This module’s output is a written, ready-to-use document that consolidates your work from Modules 3, 4, 5, and 6 into one review process.
1. Pull out your outputs from Modules 3 (verification checklist), 4 (voice brief), 5 (content risk map), and 6 (AI tells list). These are the raw materials for your QC checklist.
2. Draft ten to twelve checklist items organized under four headers: Accuracy, Voice, Legal/Compliance, Format. Each item should be a yes/no question: “Are all statistics sourced from my brief or verified independently?” “Does the opening sentence sound like our org, not like AI?”
3. Ask AI to review your checklist draft and suggest any items you may have missed for your specific content types. Add anything useful. Remove anything redundant.
4. Test your checklist on two pieces of AI-generated content: one you’re confident in, one you’re not. Does the checklist catch the issues in the second piece? Does it clear the first without adding unnecessary friction?
5. Save your QC checklist as Module 7’s output. Keep it somewhere accessible: your desktop, your notes app, your email drafts. You should be able to open it in under ten seconds.