Course → Module 0: Why Most AI Content Is Garbage
Session 3 of 5

Google employs thousands of human quality raters worldwide. These raters do not directly influence search rankings. Instead, they evaluate search results using a detailed rubric, and Google uses their assessments to measure whether its algorithms are working. That rubric is publicly available. It is called the Search Quality Evaluator Guidelines, and it runs to over 170 pages.

If you want to understand what Google considers quality content, this document tells you directly. No speculation, no SEO blog interpretation. The criteria are written down.

E-E-A-T: The Quality Framework

The core quality framework in the guidelines is E-E-A-T: Experience, Expertise, Authoritativeness, and Trustworthiness. Google added the first E (Experience) in December 2022, upgrading the original E-A-T framework that had been in use since 2014.

| Component | What It Means | How AI Content Typically Performs |
|---|---|---|
| Experience | Was this produced by someone with firsthand experience? | Fails. AI has no experiences. |
| Expertise | Does the creator have relevant knowledge or skill? | Simulates it. Produces expert-sounding text without actual expertise. |
| Authoritativeness | Is the creator or site recognized as a go-to source? | Depends on publishing domain, not content quality. |
| Trustworthiness | Is the content accurate, honest, and safe? | Variable. AI hallucinates facts and cites nonexistent sources. |

Trustworthiness sits at the center of the framework. Google's guidelines explicitly state that Trust is the most important member of the E-E-A-T family. A page can have experience and expertise but still be untrustworthy if it contains inaccurate information or misleading claims.

AI content fails E-E-A-T at the first letter. It has no experience. Everything after that is simulation.

The Page Quality Rating Scale

Quality raters assign each page a rating on a scale from Lowest to Highest. The guidelines define specific characteristics for each level. The relevant levels for understanding the slop threshold are Lowest, Low, and Medium.

```mermaid
graph TD
    A["Highest Quality"] --> B["High Quality"]
    B --> C["Medium Quality<br/>'Nothing wrong, but nothing special'"]
    C --> D["Low Quality<br/>'Missing E-E-A-T for the topic'"]
    D --> E["Lowest Quality<br/>'Harmful, deceptive, or spammy'"]
    style C fill:#2a2a28,stroke:#c8a882,color:#ede9e3
    style D fill:#2a2a28,stroke:#c47a5a,color:#ede9e3
    style E fill:#2a2a28,stroke:#c47a5a,color:#ede9e3
```

The guidelines describe "Lowest Quality" pages in detail. The characteristics include: pages created to harm users, pages that deceive users, pages with no beneficial purpose, and pages with extremely low E-E-A-T. Several of these patterns overlap directly with AI slop.

The Slop Threshold

The "slop threshold" is not a term from Google's guidelines. It is a useful concept for understanding where AI content sits on the quality scale.

Most AI-generated content, when produced without editorial oversight, lands somewhere between Low and Medium quality. It is not typically Lowest (it is rarely harmful or deliberately deceptive). But it lacks the experience, specificity, and editorial judgment that would push it above Medium. It sits in a gray zone: not bad enough to be removed, not good enough to be valued.

| Content Type | Typical AI Quality Level | What's Missing |
|---|---|---|
| Generic how-to articles | Low to Medium | Firsthand experience, specific tools/brands, failure cases |
| Product descriptions | Medium | Actual product testing, comparative judgment |
| Industry analysis | Low | Real data, named sources, original conclusions |
| Medical/legal/financial | Lowest to Low | Professional credentials, accurate specific advice |
| Opinion/editorial | Lowest | An actual human opinion backed by actual experience |

YMYL: Where the Stakes Are Highest

The guidelines define "Your Money or Your Life" (YMYL) topics as those that could impact a person's health, financial stability, safety, or wellbeing. For YMYL content, E-E-A-T standards are applied more strictly. AI-generated medical advice, financial guidance, or legal information sits firmly in the danger zone because the potential for harm from inaccurate information is real.

But the YMYL concept extends further than most people realize. Google's guidelines apply elevated standards to any topic where inaccurate information could cause real-world harm. This includes news, civic information, product safety, and even some categories of consumer advice.

What This Means for AI Content Production

The guidelines are not anti-AI. Google has stated explicitly that AI-generated content is not inherently against their guidelines. What matters is quality, not origin. An AI-assisted article that includes real expertise, genuine experience, accurate information, and editorial oversight can score High or even Highest.

The guidelines define a clear standard. Content that meets E-E-A-T criteria ranks well regardless of how it was produced. Content that fails E-E-A-T criteria eventually gets pushed down, regardless of how much of it you publish. Understanding the rubric is not about gaming the system. It is about understanding what quality means at Google's scale, and then building processes that consistently hit that standard.

Assignment

  1. Download Google's Search Quality Evaluator Guidelines. Read Section 3 on Page Quality.
  2. Write a 1-page summary of what Google considers "Lowest Quality" content. List the specific characteristics.
  3. Identify which of those Lowest Quality markers overlap with typical unedited AI output. Create a two-column table: Lowest Quality Marker | How AI Content Exhibits This.
  4. Based on your analysis, write one paragraph answering: can AI content meet Google's quality standards? Under what conditions?