Spot the Hallucination
Develop your eye for plausible-sounding AI outputs that are factually wrong or confidently invented.
Hallucination: "Dr. Smith's 2019 Stanford study found exactly 73% of users..."
Honest answer: "Research suggests approximately 70-80% of users, though estimates vary..."
What You'll Learn
Understand why AI systems generate hallucinations
Recognize common patterns in hallucinated content
Develop verification habits before trusting AI output
Apply a simple checklist for evaluating factual claims
Confidence Is Not Correctness
AI systems don't "know" things the way humans do. They predict what words are likely to come next based on patterns in their training data. When an AI sounds confident, it's not expressing certainty about facts — it's expressing that the pattern it's producing is statistically common.
This is why hallucinations are so dangerous: they often sound exactly like accurate information. The AI uses the same confident tone whether it's describing a real scientific study or inventing one on the spot.
The more specific and detailed a claim, the more suspicious you should be. Real knowledge often comes with uncertainty, caveats, and sources. Hallucinations tend to be suspiciously precise.
Five Red Flags of Hallucination
When you see any of these patterns, pause and verify before trusting the output.
Suspiciously Specific
Exact percentages, dates, or quotes without any source citation
No Hedging
Missing words like "approximately," "typically," or "according to"
Invented Citations
Author names, paper titles, or URLs that don't exist when checked
Too-Perfect Fit
The answer matches exactly what you wanted to hear
Unfalsifiable
Claims that can't easily be checked or verified independently
Precision without provenance is a warning sign.
Your Turn: Find What's Wrong
Each scenario shows an AI response. Read carefully and identify what might be hallucinated.
"You can use Python's built-in validate_json_schema() function to check your data structure. It was added in Python 3.10 as part of the enhanced JSON module. Simply import it with from json import validate_json_schema and pass your data and schema as arguments. The function returns True if valid, or raises a SchemaValidationError if not."
Spot it? Python's standard json module has never included a validate_json_schema() function, and there is no SchemaValidationError in the standard library. JSON Schema validation requires a third-party package such as jsonschema.
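The claim above is falsifiable in seconds, which makes it a good practice target for the "Check" step. A minimal sketch of that check, using only the standard library (the real validation call shown in the comments uses the third-party jsonschema package):

```python
import json

# The "Check" step in action: does the claimed function actually exist?
claim_exists = hasattr(json, "validate_json_schema")
print(claim_exists)  # False -- the standard library has no such function

# Real JSON Schema validation needs a third-party package such as
# "jsonschema" (pip install jsonschema). Its actual API looks like:
#   from jsonschema import validate, ValidationError
#   validate(instance={"name": "Ada"}, schema={"type": "object"})
# validate() returns None on success and raises ValidationError on failure,
# not the invented True / SchemaValidationError behavior from the scenario.
```

One hasattr() call is all it took to catch this hallucination; the AI's answer named a specific module, version, and exception, and none of them survive a 30-second check.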
Remember: The goal isn't to distrust AI entirely — it's to verify before you rely. Most hallucinations can be caught with a 30-second search.
A 30-Second Verification Routine
Before trusting any specific claim from AI, run through these four steps.
Flag
Mark any specific claims — names, dates, statistics, quotes
Question
Ask yourself: "Where did this information come from?"
Check
Spend 30 seconds verifying one key claim with a quick search
Hedge
If unverifiable, treat it as "possibly true" — not "definitely true"
F.Q.C.H. — Your 30-second insurance against costly mistakes.
What Did You Learn?
Guided Reflection
Take a moment to consider these questions:
1. Which of the five red flags do you think you'd most easily miss?
2. How might this change how you use AI for research or work?
3. Can you think of a time you trusted AI output without verifying?
Key Takeaways
- AI confidence is a stylistic feature, not an accuracy indicator
- Specificity often masks invention — verify exact claims
- A 30-second verification habit (F.Q.C.H.) prevents costly mistakes
The Bloom Journey
You're in the "Grow" stage — building foundational skills through deliberate practice.
Ready for the Next Challenge?
Continue building your critical eye with Context Overload, or explore all five labs.