The Review Process
Session 11.6 · ~5 min read
No Single Pass Catches Everything
You have a quality rubric from Session 11.5. Now you need a process to apply it. A single reviewer doing a single pass will miss things. Not because of laziness, but because different types of problems require different types of attention. Reading for voice is a different cognitive task than reading for factual accuracy. Doing both simultaneously means doing both poorly.
The solution is layered review. Three layers, each targeting a different class of problem, each using a different method. Self-review, peer review, tool-assisted review. In that order.
Layered Review: A multi-pass review process where each layer targets a specific category of problem using a specific method. No layer is redundant because each catches what the others miss.
The Three Layers
```mermaid
flowchart TD
    B["Layer 1: Self-Review<br/>Voice, structure, factual errors you recognize"] --> C["Layer 2: Peer Review<br/>Clarity, assumptions, blind spots"]
    C --> D["Layer 3: Tool-Assisted Review<br/>Grammar, readability, AI markers, formatting"]
    D --> E{Rubric Score?}
    E -->|"40+"| F[Publish]
    E -->|"30-39"| G[Fix and Re-review Layer 1]
    E -->|"Below 30"| H[Regenerate]
    style B fill:#c8a882,color:#111
    style C fill:#6b8f71,color:#111
    style D fill:#8a8478,color:#ede9e3
    style F fill:#6b8f71,color:#111
    style H fill:#c47a5a,color:#111
```
Layer 1: Self-Review
You read the content yourself. You are checking for three things:
- Voice breaks. Places where the text stops sounding like you (or your target voice) and starts sounding like generic AI. You can detect this because you know what your voice sounds like. No tool can do this as well as you can.
- Factual errors you recognize. Claims that contradict what you know from direct experience. You do not need a search engine for these. Your domain knowledge is the detector.
- Structural problems. Sections in the wrong order. Arguments that do not build. Conclusions that do not follow from the evidence presented. These are visible when you read for logic rather than surface quality.
Self-review should take 10-15 minutes for a 1,000-word piece. If it takes longer, the content probably needs regeneration rather than repair.
Layer 2: Peer Review
A second person reads the content. They are checking for three things you cannot check yourself:
- Clarity. You understand your own text because you know what you meant. A peer tests whether the text communicates without that background knowledge.
- Hidden assumptions. You make assumptions without realizing it. A peer spots the places where you assumed shared context that does not exist.
- Blind spots. After self-review, you have editorial fatigue for this specific piece. A fresh reader sees the problems you have stopped noticing.
Peer review does not require a professional editor. It requires a reader from your target audience who will tell you the truth. One honest reader is more valuable than three polite ones.
Layer 3: Tool-Assisted Review
Tools catch the problems human reviewers miss through inattention:
| Tool Category | What It Catches | Example Tools |
|---|---|---|
| Grammar checker | Mechanical errors, punctuation, agreement | Grammarly, LanguageTool, ProWritingAid |
| Readability scorer | Sentence complexity, passive voice, word choice | Hemingway Editor, readable.com |
| AI marker scanner | Patterns from the 15 forensic markers | Custom AI review prompt (from Session 11.3) |
| Format checker | Heading consistency, link validity, image alt text | Custom script or CMS validation |
| Fact-check pipeline | Verifiable claims against search results | Tavily-based workflow (from Session 11.2) |
Tool-assisted review is the fastest layer. Most tools process a 1,000-word piece in under a minute. The value is not in replacing human judgment but in catching the mechanical issues that human reviewers skip because they are focused on higher-order problems.
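The format-checker row above can be as simple as a short script. Here is a minimal sketch of one, assuming markdown input; the specific checks (heading-level jumps, images without alt text, links with empty targets) are illustrative, not a complete ruleset:

```python
import re

def check_format(markdown: str) -> list[str]:
    """Flag common markdown format problems: heading-level jumps,
    images missing alt text, and links with empty targets."""
    issues = []
    prev_level = 0
    for lineno, line in enumerate(markdown.splitlines(), start=1):
        heading = re.match(r"^(#{1,6})\s", line)
        if heading:
            level = len(heading.group(1))
            # An h1 followed directly by an h3 skips a level.
            if prev_level and level > prev_level + 1:
                issues.append(f"line {lineno}: heading jumps from h{prev_level} to h{level}")
            prev_level = level
        # Images: ![alt](url) -- flag empty alt text.
        for alt, url in re.findall(r"!\[([^\]]*)\]\(([^)]*)\)", line):
            if not alt.strip():
                issues.append(f"line {lineno}: image missing alt text")
        # Links: [text](url) -- flag empty targets (lookbehind skips images).
        for text, url in re.findall(r"(?<!!)\[([^\]]+)\]\(([^)]*)\)", line):
            if not url.strip():
                issues.append(f"line {lineno}: link '{text}' has no URL")
    return issues
```

A real pipeline would also validate that links resolve (an HTTP request per URL), which is why CMS-side validation is often the better home for this check.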
The Review Protocol
A protocol makes the process repeatable. Write yours down. Print it. Follow it for every piece of content. Here is a template:
| Step | Action | Time Budget | Pass/Fail Criteria |
|---|---|---|---|
| 1 | Self-review: read aloud, check voice and structure | 15 min | Voice consistent, no recognized factual errors, logical flow |
| 2 | Peer review: send to reader, collect feedback | 24 hr turnaround | No clarity issues, no hidden assumptions flagged |
| 3 | Tool review: grammar, readability, AI markers, facts | 10 min | Readability grade on target, zero grammar errors, fewer than 3 AI markers |
| 4 | Final rubric score | 5 min | Score 40+ to publish |
The total time investment for a 1,000-word piece: about 30 minutes of active review time, plus the peer review turnaround. This is the cost of quality. Pay it or accept lower standards.
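The protocol's decision gate is mechanical enough to encode directly. A minimal sketch, using the thresholds from the flowchart and rubric table above (40+ publish, 30-39 fix, below 30 regenerate):

```python
def next_step(rubric_score: int) -> str:
    """Map a final rubric score (Session 11.5) to the protocol's decision."""
    if rubric_score >= 40:
        return "publish"
    if rubric_score >= 30:
        return "fix and re-review from Layer 1"
    return "regenerate"
```

Encoding the gate this way removes the temptation to round a 38 up to "good enough" at the end of a long review session.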
Further Reading
- Inside The New York Times's A.I. Toolkit, Investigative Reporters and Editors (2025)
- AI Guidelines for Researchers, Wiley Publishing (2025)
- AI Policies in Academic Publishing 2025: Guide & Checklist, Thesify
- Is AI Raising Content Quality Standards?, Jasmine Directory
Assignment
Put one piece of content through all three review layers. Self-review first: mark every issue you find. Then send to a peer and collect their issues. Then run through your tool stack. Compile all issues found by each layer into a single table with columns: Issue, Layer That Found It, Severity (high/medium/low). Which layer caught the most critical issues? Which caught the most trivial? Use this data to design your review protocol.
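The compilation step of the assignment can be done with a few lines of Python. The issue rows below are invented sample data, purely to show the shape of the table and the tallying:

```python
from collections import Counter

# Each row: (issue, layer_that_found_it, severity). Sample data only --
# replace with the issues your own three layers actually surfaced.
issues = [
    ("claim about release date wrong", "self", "high"),
    ("assumes reader knows the acronym", "peer", "high"),
    ("unclear intro paragraph", "peer", "medium"),
    ("comma splice", "tool", "low"),
    ("passive voice cluster", "tool", "low"),
]

by_layer = Counter(layer for _, layer, _ in issues)
critical_by_layer = Counter(layer for _, layer, sev in issues if sev == "high")

print("issues per layer:", dict(by_layer))
print("critical issues per layer:", dict(critical_by_layer))
```

Run this against your real data and the two counters answer the assignment's questions directly: total volume per layer versus critical finds per layer.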