Course → Module 1: What Makes Slop, Slop
Session 8 of 10

Most people read passively. They scan a headline, skim the content, absorb fragments, and move on. This worked when the majority of published content was produced by humans with at least minimal editorial oversight. In an environment where the majority of new content is AI-generated, passive reading is a liability.

Critical reading is not suspicion. It is not assuming everything is fake. It is a systematic method for evaluating whether a piece of content is worth your time and trust.

The Five-Question Framework

You can evaluate any piece of content in under sixty seconds using five questions. Each question targets a specific quality signal.

| # | Question | What It Tests | Red Flag Answer |
|---|----------|---------------|-----------------|
| 1 | Who wrote this? | Accountability and authorship | No author name, generic byline, or "staff writer" |
| 2 | What specific experience informs it? | E-E-A-T Experience signal | No personal details, no case studies, no firsthand knowledge |
| 3 | Where do the claims come from? | Evidence and sourcing | "Studies show" with no links, "experts say" with no names |
| 4 | Does any sentence contain information only the author could know? | Originality and expertise | Every fact could be found by searching the same query |
| 5 | Would this text change if a different person wrote it? | Voice and perspective | The text is interchangeable; anyone (or any AI) could have produced it |

The critical reading question is not "Is this AI-generated?" The question is "Does this contain information, experience, or perspective that required a specific human to produce?"

Applying the Framework

The framework is fast because each question has a binary outcome: either the content passes or it does not. You do not need to read the entire article. Scan the byline (Question 1), read the first three paragraphs (Questions 2-4), and check whether the perspective is unique (Question 5).

```mermaid
graph TD
    A["Encounter content"] --> B["Q1: Who wrote this?"]
    B -->|"Named author with bio"| C["Q2: What experience<br/>informs this?"]
    B -->|"No author / generic"| Z["Low trust signal<br/>Proceed with caution"]
    C -->|"Specific experiences cited"| D["Q3: Where do<br/>claims come from?"]
    C -->|"No personal experience"| Z
    D -->|"Named sources, links"| E["Q4: Unique information<br/>present?"]
    D -->|"'Studies show' / no citations"| Z
    E -->|"Yes: author knows<br/>something I don't"| F["Q5: Would text change<br/>with different author?"]
    E -->|"No: generic knowledge"| Z
    F -->|"Yes: distinct voice"| G["High trust:<br/>Worth reading carefully"]
    F -->|"No: interchangeable"| Z
```

Most AI-generated content fails at Question 2 or Question 4. It lacks specific experience and contains no information that could not be assembled from a basic web search. This is not because AI is incapable of producing useful content. It is because most AI content is produced without the inputs (real experience, original data, expert review) that would make it pass these tests.
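The sequential gate in the flowchart can be sketched in code. This is a minimal illustration, not part of the course materials: the `Article` fields and question wordings are assumptions that map one-to-one onto the five questions, and the first failed check short-circuits to a low-trust verdict, just as every "no" branch in the diagram routes to the same node.

```python
# Hypothetical sketch of the five-question gate as a sequential filter.
# The Article structure and field names are illustrative, not prescribed
# by the course.
from dataclasses import dataclass


@dataclass
class Article:
    named_author: bool        # Q1: byline with a real, accountable name
    cites_experience: bool    # Q2: firsthand details, case studies
    names_sources: bool       # Q3: linked studies, named experts
    unique_information: bool  # Q4: facts a search would not surface
    distinct_voice: bool      # Q5: text would change with another author


def evaluate(article: Article) -> str:
    checks = [
        ("Q1", article.named_author),
        ("Q2", article.cites_experience),
        ("Q3", article.names_sources),
        ("Q4", article.unique_information),
        ("Q5", article.distinct_voice),
    ]
    # The first failed question ends the evaluation, mirroring the
    # flowchart's shared "low trust" node.
    for label, passed in checks:
        if not passed:
            return f"Low trust: failed {label}"
    return "High trust: worth reading carefully"
```

For example, an article with a named author but no firsthand experience stops at Question 2: `evaluate(Article(True, False, True, True, True))` returns `"Low trust: failed Q2"`.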

The Source Verification Layer

When content passes the first five questions, add a verification layer for any claims that inform decisions.

| Claim Type | Verification Method | Time Required |
|------------|---------------------|---------------|
| Statistical claims ("40% of...") | Search for the original study or data source | 2-5 minutes |
| Expert quotes | Verify the person exists and said what's attributed | 1-3 minutes |
| Product/tool recommendations | Check if the product exists and does what's claimed | 1-2 minutes |
| Historical claims | Cross-reference with a second source | 2-3 minutes |
| Process/method descriptions | Test whether the described process actually works | Variable |
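The table above can be carried as a small lookup that turns a list of claim types into a verification plan with a worst-case time budget. This is an illustrative sketch only; the dictionary keys and the upper-bound minute figures are assumptions drawn from the table, not an official checklist.

```python
# Illustrative lookup for the verification layer. Keys are assumed
# labels; minute values are the upper bounds from the table above.
VERIFICATION = {
    "statistic":     ("Search for the original study or data source", 5),
    "expert_quote":  ("Verify the person exists and said what's attributed", 3),
    "product_claim": ("Check the product exists and does what's claimed", 2),
    "historical":    ("Cross-reference with a second source", 3),
    "process":       ("Test whether the described process actually works", None),
}


def verification_plan(claim_types):
    """Return the checks to run and a worst-case time budget in minutes.

    Claim types with variable cost (None) are listed but excluded
    from the budget.
    """
    steps = [VERIFICATION[c] for c in claim_types]
    methods = [method for method, _ in steps]
    budget = sum(t for _, t in steps if t is not None)
    return methods, budget
```

An article making one statistical claim and quoting one expert would need at most eight minutes of verification: `verification_plan(["statistic", "expert_quote"])` returns the two methods and a budget of `8`.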

AI hallucination makes source verification more important than it has ever been. AI generates plausible-sounding citations to non-existent papers, attributes quotes to people who never said them, and describes products with features they do not have. These are not lies in the human sense. They are pattern completions: the model predicts what a citation would look like and generates one, without checking whether it corresponds to reality.

Calibrating Your Filter

Critical reading is a skill that improves with practice. The initial tendency is to be either too trusting (accepting everything at face value) or too suspicious (dismissing everything that looks like it might be AI-generated). Neither extreme serves you.

The goal is calibrated skepticism: applying the right amount of scrutiny to the right content. A peer-reviewed paper in a recognized journal requires less initial scrutiny than an anonymous blog post. A detailed case study with named clients and specific outcomes deserves more trust than a listicle with generic advice. The framework helps you allocate your attention where it matters.

Over time, the five questions become automatic. You will scan content and register the quality signals without conscious effort. The framework moves from deliberate practice to intuitive assessment. That intuition, the ability to sense quality and its absence quickly and accurately, is one of the most valuable skills in an information environment flooded with generated text.

Assignment

  1. Develop your own version of the 5-question evaluation framework. You may modify the questions above or create entirely new ones based on what matters most for your domain.
  2. Test your framework on 10 articles: aim for a mix of AI-generated and human-written content. For each article, apply your 5 questions and record pass/fail for each.
  3. Document your accuracy rate: how often did your framework correctly identify the origin (or at least correctly identify quality)?
  4. Refine the framework based on your results. Which questions were most diagnostic? Which need revision?
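Step 3 of the assignment reduces to a simple calculation once results are recorded. A minimal sketch, assuming each test article is logged as a pair of booleans (your framework's verdict, the known ground truth); the record format is an assumption, not prescribed by the course.

```python
# Sketch for step 3: accuracy of your framework's verdicts against the
# known origin (or quality) of each test article. The record format
# (verdict, truth) is an assumed convention for illustration.

def accuracy(records):
    """records: list of (framework_says_human: bool, actually_human: bool)."""
    correct = sum(1 for verdict, truth in records if verdict == truth)
    return correct / len(records)


# Hypothetical results from the ten-article test in step 2.
results = [
    (True, True), (False, False), (True, False), (False, False),
    (True, True), (False, True), (False, False), (True, True),
    (False, False), (True, True),
]
print(f"Accuracy: {accuracy(results):.0%}")  # prints "Accuracy: 80%"
```

An accuracy well above chance suggests your questions are diagnostic; questions that fail on the same articles your framework misclassifies are the ones to revise in step 4.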