Lab 2 · Beginner · Critical Thinking

Spot the Hallucination

Develop your eye for plausible-sounding AI outputs that are factually wrong or confidently invented.

Duration: ~30 min
Exercises: 4 (spot the fakes)
Builds on: Lab 1 (recommended)
The Warning Signs
Hallucination

"Dr. Smith's 2019 Stanford study found exactly 73% of users..."

Verifiable

"Research suggests approximately 70-80% of users, though estimates vary..."

Specificity without sources = suspicion

What You'll Learn

1. Understand why AI systems generate hallucinations
2. Recognize common patterns in hallucinated content
3. Develop verification habits before trusting AI output
4. Apply a simple checklist for evaluating factual claims

Confidence Is Not Correctness

AI systems don't "know" things the way humans do. They predict what words are likely to come next based on patterns in their training data. When an AI sounds confident, it's not expressing certainty about facts — it's expressing that the pattern it's producing is statistically common.

This is why hallucinations are so dangerous: they often sound exactly like accurate information. The AI uses the same confident tone whether it's describing a real scientific study or inventing one on the spot.

The more specific and detailed a claim, the more suspicious you should be. Real knowledge often comes with uncertainty, caveats, and sources. Hallucinations tend to be suspiciously precise.

Hallucination
"According to a 2021 study by Dr. James Chen at MIT, published in Nature Machine Intelligence, 67.3% of enterprise AI projects fail within 18 months due to data quality issues."
Specific author, institution, journal, percentage — none verified
vs
Verifiable
"Industry estimates suggest that a significant portion of AI projects — often cited around 60-80% — don't reach production. However, exact figures vary by source and definition of 'failure.'"
Hedged language, acknowledges uncertainty, invites verification
Key Insight: The AI's certainty tells you nothing about accuracy. Your job is verification — not trust.

Five Red Flags of Hallucination

When you see any of these patterns, pause and verify before trusting the output.

1. Suspiciously Specific: exact percentages, dates, or quotes without any source citation
2. No Hedging: missing words like "approximately," "typically," or "according to"
3. Invented Citations: author names, paper titles, or URLs that don't exist when checked
4. Too-Perfect Fit: the answer matches exactly what you wanted to hear
5. Unfalsifiable: claims that can't easily be checked or verified independently

Remember the Rule
"When it sounds too good, look twice."

Precision without provenance is a warning sign.
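Two of these red flags — suspicious specificity and missing hedging — are mechanical enough to sketch in code. The toy scanner below is an illustration of the checklist, not a real detector: the `red_flags` function and its `HEDGES` word list are assumptions made up for this example, and a human with a quick search is still the actual verifier.

```python
import re

# Toy heuristics for two of the red flags above: suspicious specificity
# and missing hedging language. Illustrative only; real verification
# still requires a human and a quick search.
HEDGES = {"approximately", "roughly", "about", "around",
          "estimates", "suggest", "often", "typically", "vary"}

def red_flags(text: str) -> list[str]:
    """Return the red-flag labels this text triggers."""
    flags = []
    # Red flag 1 (suspiciously specific): decimal percentages like "67.3%"
    if re.search(r"\b\d+\.\d+%", text):
        flags.append("suspiciously specific percentage")
    # Red flag 2 (no hedging): none of the common hedge words appear
    if not any(hedge in text.lower() for hedge in HEDGES):
        flags.append("no hedging language")
    return flags

print(red_flags("Exactly 67.3% of enterprise AI projects fail within 18 months."))
# → ['suspiciously specific percentage', 'no hedging language']
print(red_flags("Estimates suggest roughly 60-80% of projects never reach production."))
# → []
```

Note how the hedged sentence passes both checks while the over-precise one trips both — the same contrast shown in the hallucination/verifiable examples above.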

Your Turn: Find What's Wrong

Each scenario shows an AI response. Read carefully and identify what might be hallucinated.

Scenario 1 of 4: Research
AI Response:
According to a 2019 study by Dr. Helena Marchetti at Stanford's AI Ethics Lab, published in the Journal of Machine Learning Research, 73% of professionals who use AI daily report increased productivity within the first month. The study, titled "Workplace AI Integration: A Longitudinal Analysis," followed 2,400 participants across 18 industries.
What's Hallucinated
! Dr. Helena Marchetti — This person does not appear to exist in academic records
! Stanford's AI Ethics Lab — Stanford has HAI (Human-Centered AI), not "AI Ethics Lab"
! The specific study — The paper title, 73% figure, 2,400 participants are all invented details
Scenario 2 of 4: Business
AI Response:
The Indian EdTech market reached $4.73 billion in 2024, with a compound annual growth rate of 19.8%. Prompt engineering courses specifically saw 340% growth year-over-year, outpacing all other technical skill categories including cloud computing (127%) and data science (89%).
What's Hallucinated
! $4.73 billion and 19.8% CAGR — Suspiciously precise figures with no source
! 340% growth for prompt engineering — No market research firm tracks this as a separate category yet
! Comparative percentages — The 127% and 89% figures create false precision
Scenario 3 of 4: Technical
AI Response:
You can use Python's built-in validate_json_schema() function to check your data structure. It was added in Python 3.10 as part of the enhanced JSON module. Simply import it with from json import validate_json_schema and pass your data and schema as arguments. The function returns True if valid, or raises a SchemaValidationError if not.
What's Hallucinated
! validate_json_schema() — This function does not exist in Python's standard library. JSON schema validation requires third-party libraries like jsonschema
! Python 3.10 addition — This feature was never added. The AI invented a plausible version number
! SchemaValidationError — This exception class doesn't exist in the json module. The detailed API description makes the hallucination more convincing
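API hallucinations like this one are among the easiest to catch: inspect the module before building on the claim. The sketch below confirms the claimed function is absent and shows a minimal standard-library fallback. `check_required_fields` is a hypothetical helper invented for illustration, not a substitute for a real validator such as the third-party `jsonschema` package.

```python
import json

# The claimed function does not exist in the standard library:
print(hasattr(json, "validate_json_schema"))  # → False

# Minimal hand-rolled check using only the standard library.
# (Hypothetical helper; real schema validation needs e.g. `jsonschema`.)
def check_required_fields(data: dict, required: dict) -> bool:
    """True if every required field is present with the expected type."""
    return all(
        name in data and isinstance(data[name], expected)
        for name, expected in required.items()
    )

record = json.loads('{"name": "Ada", "age": 36}')
print(check_required_fields(record, {"name": str, "age": int}))    # → True
print(check_required_fields(record, {"name": str, "email": str}))  # → False
```

One `hasattr` check — or a single failed import — would have exposed this hallucination before any code was written around it.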
Scenario 4 of 4: Historical
AI Response:
As Mahatma Gandhi famously said during his 1931 address at the Round Table Conference in London: "The measure of a civilization is not in its monuments, but in how it treats those who build them." This quote has been widely cited in discussions of labor rights and social justice.
What's Hallucinated
! The quote itself — This quote cannot be verified in Gandhi's documented speeches or writings. Many "Gandhi quotes" circulating online are misattributed or fabricated
! 1931 Round Table Conference context — Adding a specific historical context makes the fabrication more believable. Gandhi did attend the conference, but no record shows this quote
! "Widely cited" — This vague attribution creates false credibility without providing any verifiable source

Remember: The goal isn't to distrust AI entirely — it's to verify before you rely. Most hallucinations can be caught with a 30-second search.

A 30-Second Verification Routine

Before trusting any specific claim from AI, run through these four steps.

1. Flag: mark any specific claims — names, dates, statistics, quotes
2. Question: ask yourself, "Where did this information come from?"
3. Check: spend 30 seconds verifying one key claim with a quick search
4. Hedge: if unverifiable, treat it as "possibly true" — not "definitely true"

Remember
Flag → Question → Check → Hedge

F.Q.C.H. — Your 30-second insurance against costly mistakes.

What Did You Learn?

Guided Reflection

Take a moment to consider these questions:

1. Which of the five red flags do you think you'd most easily miss?
2. How might this change how you use AI for research or work?
3. Can you think of a time you trusted AI output without verifying?

Key Takeaways

  • AI confidence is a stylistic feature, not an accuracy indicator
  • Specificity often masks invention — verify exact claims
  • A 30-second verification habit (F.Q.C.H.) prevents costly mistakes
Your Learning Journey

The Bloom Journey

You're in the "Grow" stage — building foundational skills through deliberate practice.

Seed: Discovery
Grow: Foundations
Flourish: Application
Thrive: Mastery
Radiant: Leadership
Lab 2 of 5 in Practice Studio. You're developing critical thinking skills — the foundation for trustworthy AI collaboration.

Ready for the Next Challenge?

Continue building your critical eye with Context Overload, or explore all five labs.