Avoiding Hallucinations
AI hallucinations aren't random bugs — they're predictable failure modes with known causes. Learn the techniques that keep Claude grounded in facts and honest about its limits.
What Hallucinations Are and Why They Happen
A hallucination is when an AI generates text that sounds confident and plausible but is factually incorrect. The term is apt: the model produces something that feels real but isn't grounded in reality. The mechanism is statistical, not intentional — Claude generates the most probable next token given its context, and sometimes the most probable sequence is wrong.
Hallucinations are most likely when a question concerns obscure facts, specific dates or numbers, details buried in long documents (where the relevant fact sits far from the model's current focus within the context window), or anything that requires recalling precise information rather than generating coherent text.
Types of Hallucinations: Know What to Watch For
Common forms include fabricated citations, studies, and statistics; invented specifics such as dates, numbers, and names; and confident extrapolation beyond what a source actually states. Each of the techniques below targets one or more of these failure modes.
Technique 1: Grounding — Anchor to Provided Context
The most reliable anti-hallucination technique is to provide the facts yourself and instruct Claude to use only those facts. This works because Claude is excellent at extracting, reasoning about, and synthesizing information from text it can directly reference.
Answer the following question using ONLY the information in the provided document.
Do not use any knowledge from your training. If the answer is not explicitly
stated in the document, say: "This information is not in the provided document."
<document>
{document_text}
</document>
<question>
{user_question}
</question>
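The template above is just a string with two slots, so it can be filled programmatically; a minimal sketch in Python (the function name is illustrative, not from any SDK):

```python
# Grounding template from above, with named placeholders.
GROUNDING_TEMPLATE = """Answer the following question using ONLY the information in the provided document.
Do not use any knowledge from your training. If the answer is not explicitly
stated in the document, say: "This information is not in the provided document."

<document>
{document_text}
</document>

<question>
{user_question}
</question>"""

def build_grounded_prompt(document_text: str, user_question: str) -> str:
    """Fill the grounding template with a document and a question."""
    return GROUNDING_TEMPLATE.format(
        document_text=document_text, user_question=user_question
    )
```

Placing the document before the question keeps the instruction and the question adjacent to the model's answer, which is the ordering the template above already uses.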
Technique 2: Uncertainty Expression
Instruct Claude to signal when it's uncertain rather than guessing confidently. This simple instruction substantially reduces the rate of fluent-but-wrong answers:
# Pattern 1: Simple "say I don't know"
Answer the following question accurately. If you don't know the answer
with high confidence, say "I'm not certain about this" and explain
what you do and don't know. Never guess.
# Pattern 2: Distinguish knowledge types
Answer this question. For each factual claim, indicate whether you are:
- Certain (well-established fact from training)
- Likely (generally accepted but may have exceptions)
- Uncertain (you believe this is true but cannot be confident)
- Unknown (you don't know — do not guess)
# Pattern 3: Knowledge cutoff awareness
Note: Claude's training has a knowledge cutoff. For time-sensitive questions,
indicate if the answer may have changed since your training cutoff and
recommend the user verify with current sources.
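Because all three patterns are prompt prefixes, they can live in a lookup table and be prepended to any question; a minimal sketch (names and abridged wording are illustrative):

```python
# Abridged versions of the three uncertainty patterns above.
UNCERTAINTY_PREFIXES = {
    "say_unknown": (
        "Answer the following question accurately. If you don't know the "
        "answer with high confidence, say \"I'm not certain about this\" "
        "and explain what you do and don't know. Never guess."
    ),
    "knowledge_types": (
        "Answer this question. For each factual claim, indicate whether "
        "you are: Certain, Likely, Uncertain, or Unknown (do not guess)."
    ),
    "cutoff_aware": (
        "For time-sensitive questions, indicate if the answer may have "
        "changed since your training cutoff and recommend the user "
        "verify with current sources."
    ),
}

def with_uncertainty(question: str, pattern: str = "say_unknown") -> str:
    """Prepend one of the uncertainty instructions to a question."""
    return f"{UNCERTAINTY_PREFIXES[pattern]}\n\n{question}"
```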
Technique 3: Citation Requirements
For research and analysis tasks, requiring Claude to cite specific passages from the provided context forces it to ground every claim in actual text rather than generating from general knowledge:
Analyze the following research paper and answer the questions below.
For every claim you make, quote the specific passage that supports it
using this format: [QUOTE: "exact text from document"].
If a claim cannot be supported with a direct quote, do not make it.
<paper>
{paper_text}
</paper>
Questions:
1. What is the study's main finding?
2. What methodology was used?
3. What are the stated limitations?
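A side benefit of the [QUOTE: "..."] format is that it is machine-checkable: every quoted passage should appear verbatim in the source, so fabricated quotes can be caught automatically. A sketch of such a check (the function name is illustrative):

```python
import re

# Matches the [QUOTE: "exact text"] format required by the prompt above.
QUOTE_RE = re.compile(r'\[QUOTE:\s*"([^"]+)"\]')

def find_unsupported_quotes(response: str, source: str) -> list:
    """Return quoted passages from the response that do NOT appear
    verbatim in the source document (potential fabrications)."""
    return [q for q in QUOTE_RE.findall(response) if q not in source]
```

Any non-empty result is a red flag: either the model paraphrased where it was told to quote, or it invented the passage outright.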
Technique 4: Confidence Scoring
Ask Claude to rate its own confidence for each fact in the response. This meta-cognitive step doesn't guarantee calibration, but it tends to surface uncertainty that would otherwise stay hidden behind fluent prose:
Answer the following question. For each factual claim in your response,
add a confidence score in brackets: [HIGH], [MEDIUM], or [LOW].
- HIGH: well-established fact, verified in training
- MEDIUM: generally accepted, but details may vary
- LOW: uncertain — recommend the user verify independently
What were the key factors that led to the 2008 financial crisis?
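The bracketed tags also make the response easy to post-process, for example to flag answers dominated by low-confidence claims; a minimal sketch (the 30% threshold is an arbitrary illustration):

```python
import re

# Matches the confidence tags the prompt above asks for.
CONF_RE = re.compile(r'\[(HIGH|MEDIUM|LOW)\]')

def confidence_counts(response: str) -> dict:
    """Count how many claims carry each confidence tag."""
    counts = {"HIGH": 0, "MEDIUM": 0, "LOW": 0}
    for tag in CONF_RE.findall(response):
        counts[tag] += 1
    return counts

def flag_low_confidence(response: str, threshold: float = 0.3) -> bool:
    """Flag responses where the share of LOW tags exceeds the threshold."""
    counts = confidence_counts(response)
    total = sum(counts.values())
    return total > 0 and counts["LOW"] / total > threshold
```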
Before / After: Hallucination-Prone vs. Grounded Prompt
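To make the contrast concrete, here is an illustrative pair (the merger scenario and company names are hypothetical):

```python
# Before -- hallucination-prone: no source material, relies entirely on
# parametric memory, and invites a confident guess about an obscure specific.
BEFORE = "What were the terms of the Acme-Globex merger agreement?"

# After -- grounded: supplies the source, restricts the model to it,
# and gives an explicit escape hatch for missing information.
AFTER = """Answer using ONLY the document below. If the answer is not
stated in the document, say: "This information is not in the provided
document."

<document>
{document_text}
</document>

What were the terms of the merger agreement described in the document?"""
```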
RAG: Retrieval-Augmented Generation
RAG is the architectural pattern that makes the grounding technique scale to production. Rather than including every possible document in every prompt, a RAG system retrieves the relevant documents at query time and injects them into the context.
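The shape of the pattern can be sketched in a few lines. The toy retriever below ranks documents by word overlap with the query; real systems use embedding similarity and a vector store, but the retrieve-then-ground structure is the same:

```python
def retrieve(query: str, corpus: list, k: int = 2) -> list:
    """Toy retriever: rank documents by word overlap with the query.
    Production systems use embedding similarity instead."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def rag_prompt(query: str, corpus: list) -> str:
    """Retrieve relevant documents, then build a grounded prompt."""
    context = "\n\n".join(retrieve(query, corpus))
    return (
        "Answer using ONLY the documents below. If the answer is not "
        "present, say so.\n\n"
        f"<documents>\n{context}\n</documents>\n\n"
        f"<question>\n{query}\n</question>"
    )
```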
The "No Hallucination" System Prompt Template
You are a precise, fact-based assistant. Follow these rules strictly:
1. ONLY use information explicitly provided in this conversation.
Do not use knowledge from your training unless the user asks for it.
2. NEVER fabricate citations, studies, statistics, or facts.
If you don't have a source, say so.
3. When you are uncertain about something, say so explicitly.
Use phrases like "I'm not certain", "you may want to verify this", or
"I don't have reliable information on this."
4. Do not extrapolate beyond what is stated.
Distinguish clearly between what the documents say and your interpretation.
5. If asked about something not covered in the provided context,
say: "I don't have that information in the provided context."