Course → Module 8: The Pipeline
Session 5 of 10

The Gate That Separates Production From Slop

This is the stage most people skip. They generate a draft, skim it, think "looks fine," and publish. That is how slop gets published under real names.

Human review means a person reads every word. Not skims. Reads. With a checklist. With attention. With the willingness to reject the entire draft and regenerate if it does not meet standards. There are no exceptions. There is no "it's probably fine." Automating this stage is the single most common mistake in AI content production.

Why "AI Checks AI" Does Not Work

The tempting shortcut is to have another AI model review the draft. The problem is that AI cannot reliably detect its own failure modes. A model that hallucinated a statistic will not flag that statistic as hallucinated, because within its generation context, the statistic was plausible enough to produce. AI detects surface errors (grammar, formatting) well. It detects deep errors (factual accuracy, voice authenticity, missing nuance) poorly.

AI-assisted review tools have a role. They can flag potential issues for human attention. But the human makes the final call. Always.

Human review is expensive in time. It is non-negotiable in quality. The question is not whether to do it. The question is how to do it systematically so nothing slips through.

The Five-Point Review Checklist

Every piece of content that reaches Stage 4 gets evaluated against five dimensions. Each dimension has specific, observable criteria.

```mermaid
flowchart TD
    A["Draft Arrives"] --> B{"1. Factual Accuracy"}
    B -- Pass --> C{"2. Voice Consistency"}
    B -- Fail --> X["Regenerate"]
    C -- Pass --> D{"3. Structural Integrity"}
    C -- Fail --> Y["Rework"]
    D -- Pass --> E{"4. AI Artifact Check"}
    D -- Fail --> Y
    E -- Pass --> F{"5. Publication Test"}
    E -- Fail --> Y
    F -- Pass --> G["Advance to Stage 5"]
    F -- Fail --> Y
    style A fill:#222221,stroke:#c8a882,color:#ede9e3
    style B fill:#222221,stroke:#c47a5a,color:#ede9e3
    style C fill:#222221,stroke:#6b8f71,color:#ede9e3
    style D fill:#222221,stroke:#8a8478,color:#ede9e3
    style E fill:#222221,stroke:#c47a5a,color:#ede9e3
    style F fill:#222221,stroke:#c8a882,color:#ede9e3
    style G fill:#222221,stroke:#6b8f71,color:#ede9e3
    style X fill:#222221,stroke:#c47a5a,color:#ede9e3
    style Y fill:#222221,stroke:#c47a5a,color:#ede9e3
```
| Dimension | What You Check | How to Check | Failure Threshold |
| --- | --- | --- | --- |
| 1. Factual accuracy | Every verifiable claim, statistic, date, name | Cross-reference against research brief; spot-check 3+ claims via search | Any unverifiable claim presented as fact = fail |
| 2. Voice consistency | Sentence rhythm, vocabulary, tone markers | Read aloud; mark every sentence that sounds "off" | More than 20% voice breaks = rework |
| 3. Structural integrity | Outline compliance, argument flow, transitions | Compare draft to outline section by section | Missing sections or reordered argument = rework |
| 4. AI artifacts | The 15 forensic markers from Module 1 | Scan for hedging, tricolons, false bridges, enthusiasm spikes | More than 5 artifacts per 1000 words = rework |
| 5. Publication test | "Would I publish this under my name?" | Honest gut check after passing all four checks above | Any hesitation = rework |
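The gate logic above can be sketched as data plus a short function. This is a minimal sketch, not automation of the review itself: every pass/fail value is entered by a human after performing the check, and all names here are illustrative.

```python
# Sketch of the five-point review gate. The checks are performed by a
# human; this code only records outcomes in checklist order and stops
# at the first failure, mirroring the flowchart. Illustrative names.

CHECKS = [
    "factual_accuracy",      # any unverifiable claim presented as fact = fail
    "voice_consistency",     # more than 20% voice breaks = rework
    "structural_integrity",  # missing or reordered sections = rework
    "ai_artifacts",          # more than 5 artifacts per 1000 words = rework
    "publication_test",      # any hesitation = rework
]

def review_verdict(results: dict[str, bool]) -> str:
    """Return the pipeline action given human-entered pass/fail results."""
    for check in CHECKS:
        if not results.get(check, False):
            # A factual failure sends the draft back for regeneration;
            # every other failure sends it to rework.
            return "regenerate" if check == "factual_accuracy" else "rework"
    return "advance_to_stage_5"

# Example: draft passed everything except the artifact scan.
print(review_verdict({
    "factual_accuracy": True,
    "voice_consistency": True,
    "structural_integrity": True,
    "ai_artifacts": False,
    "publication_test": True,
}))  # rework
```

Encoding the order matters: because the loop stops at the first failure, a draft with hallucinated facts is regenerated before anyone spends time polishing its voice.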

The Read-Aloud Test

Reading aloud is not optional. It is the single most effective detection method for voice breaks and AI artifacts. Your ear catches what your eye skips. When you read silently, your brain autocorrects awkward phrasing. When you read aloud, the awkwardness becomes audible.

Read the entire draft aloud from start to finish. Every time you stumble, pause, or feel the urge to rephrase, mark that sentence. Those marks are your editing targets for Stage 5.

Annotation Protocol

Review produces an annotated draft, not a verdict. Each issue gets a specific annotation, one tag per dimension:

  [FACT]: a claim that failed verification or cannot be verified as written.
  [VOICE]: a sentence that breaks your voice fingerprint.
  [STRUCTURE]: a section that deviates from the outline or a broken transition.
  [ARTIFACT]: one of the 15 forensic markers from Module 1.
  [MISSING]: a point from the outline or research brief that the draft omits.

This annotation system serves two purposes. First, it gives Stage 5 (Editing) specific, actionable targets. Second, it builds a pattern database. If every draft comes back with [VOICE] tags on the opening paragraph, your system prompt needs adjustment. If [FACT] tags cluster around statistics, your research brief needs more data points.
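Building that pattern database is easy to mechanize once the tags are inline. A minimal sketch, assuming annotations are written directly into the draft text as [FACT], [VOICE], [STRUCTURE], [ARTIFACT], and [MISSING]; the function names are illustrative:

```python
# Tally inline review annotations and compute artifact density.
# Tag names and the 5-per-1000-words threshold come from the checklist;
# the helper functions themselves are an illustrative sketch.

import re
from collections import Counter

TAGS = {"FACT", "VOICE", "STRUCTURE", "ARTIFACT", "MISSING"}

def tally_annotations(annotated_draft: str) -> Counter:
    """Count each annotation tag; feeds the pattern database."""
    found = re.findall(r"\[([A-Z]+)\]", annotated_draft)
    return Counter(tag for tag in found if tag in TAGS)

def artifact_density(annotated_draft: str) -> float:
    """AI artifacts per 1000 words (checklist threshold: 5)."""
    words = len(re.sub(r"\[[A-Z]+\]", "", annotated_draft).split())
    artifacts = tally_annotations(annotated_draft)["ARTIFACT"]
    return artifacts / words * 1000 if words else 0.0

draft = "The study [FACT] claims 90% growth. Delve into [ARTIFACT] the data."
print(tally_annotations(draft))  # Counter({'FACT': 1, 'ARTIFACT': 1})
```

Run this over every reviewed draft and the clusters become visible: a pile of [VOICE] tags points at the system prompt, a pile of [FACT] tags points at the research brief.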

Review is not editing. Review identifies problems. Editing fixes them. Separating these stages prevents the common trap of fixing issues while reading, which splits your attention and causes you to miss other issues.

Time Budget

A thorough review of a 1,000-word piece takes 15 to 25 minutes. That feels slow. It is the correct speed. Faster review means missed issues. Missed issues mean published slop. If you are producing 10 pieces per day, that is 2.5 to 4 hours of review. Plan for it. Budget for it. It is the most valuable time in your entire pipeline.
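The arithmetic behind that budget is worth making explicit. A small sketch using the 15-to-25-minute range quoted above (the function itself is illustrative):

```python
# Daily review time budget from the per-piece range in this session.
# The 15 and 25 minute defaults are the session's figures for a
# 1,000-word piece; the function name is illustrative.

def review_hours(pieces_per_day: int,
                 minutes_low: int = 15,
                 minutes_high: int = 25) -> tuple[float, float]:
    """Return the (low, high) daily review time range in hours."""
    return (pieces_per_day * minutes_low / 60,
            pieces_per_day * minutes_high / 60)

low, high = review_hours(10)
print(f"{low:.1f} to {high:.1f} hours")  # 2.5 to 4.2 hours
```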

Assignment

Review the draft you generated in Session 8.4. Use the five-point checklist:

  1. Factual accuracy: verify at least 3 specific claims against your research brief or a search engine.
  2. Voice consistency: read aloud and mark every sentence that does not sound like your voice fingerprint.
  3. Structural integrity: compare the draft to your outline section by section.
  4. AI artifacts: use the 15-marker checklist from Module 1. Count every instance.
  5. Publication test: would you put your name on this?

Annotate the draft using the [FACT], [VOICE], [STRUCTURE], [ARTIFACT], and [MISSING] tags. Count the total annotations. This number tells you how much work Stage 5 needs to do.