Basic Prompt Structure
Understand exactly how Claude processes your messages. Master the four-component framework that underpins every effective prompt.
How Claude Reads Your Message
Every interaction with Claude follows a structured conversation format. Unlike a search engine that parses keywords, Claude processes entire conversational contexts — understanding role, intent, constraints, and desired output all at once.
The Claude API uses a turn-based message format: alternating user and assistant turns. Understanding this structure is the foundation of every other technique in this course.
The Turn-Based Conversation Format
The Messages API separates a conversation into user and assistant turns. This maps directly to how you write prompts:
```python
import anthropic

client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-opus-4-5",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": "Explain the difference between supervised and unsupervised learning."
        }
    ],
)

print(message.content[0].text)
```
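To continue a conversation, you replay the earlier turns in the `messages` list, alternating user and assistant roles. A minimal sketch of that alternation rule; the helper `is_valid_turn_order` and the sample conversation are ours for illustration, not part of the SDK:

```python
def is_valid_turn_order(messages):
    """Check the conventional Messages API turn shape:
    the list starts with a user turn and roles alternate."""
    if not messages or messages[0]["role"] != "user":
        return False
    return all(
        m["role"] != prev["role"]
        for prev, m in zip(messages, messages[1:])
    )

# A three-turn conversation: replaying earlier turns gives Claude context.
conversation = [
    {"role": "user", "content": "Explain supervised learning in one sentence."},
    {"role": "assistant", "content": "Supervised learning fits a model to labeled examples."},
    {"role": "user", "content": "Now contrast that with unsupervised learning."},
]

print(is_valid_turn_order(conversation))  # the list above alternates correctly
```

Passing a list like `conversation` as the `messages` argument to `client.messages.create(...)` continues the dialogue with full context from the earlier turns.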
The Four Components of a Great Prompt
Every high-quality prompt contains some combination of these four elements. Not all are required for every prompt, but knowing them lets you diagnose why a prompt isn't working.
System Prompts: Setting the Stage
The system prompt is a special instruction block that Claude reads before any conversation turns. It's where you establish Claude's persona, define constraints, and provide standing instructions that apply to every turn.
System prompts are invisible to end users (in most implementations) but have the highest priority for shaping Claude's behavior.
```python
client.messages.create(
    model="claude-opus-4-5",
    max_tokens=1024,
    system="""You are a senior Python developer with 15 years of experience.
You specialize in clean, production-ready code.
Always explain your reasoning.
Format code examples with syntax highlighting.
If a request is ambiguous, ask one clarifying question.""",
    messages=[
        {"role": "user", "content": "How do I handle database connection pooling?"}
    ],
)
```
Before vs. After: Structured Prompts in Action
A structured prompt can dramatically change the output even when both prompts ask for exactly the same thing.
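As an illustration (the wording of both prompts is ours), here is the same request in unstructured and structured form, the structured version assembled from the labeled components used in the annotated prompt in the next section:

```python
# BEFORE: a vague one-liner that leaves audience, depth, and format to chance.
before = "Tell me about our customer reviews."

# AFTER: the same request, assembled component by component.
after = "\n\n".join([
    "You are a senior data scientist specializing in NLP.",           # role
    "Summarize the sentiment of the customer reviews below.",         # task
    "The reviews are for a B2B SaaS product; the summary feeds "
    "a stakeholder dashboard.",                                       # context
    'Reviews:\n1. "Onboarding took too long but the core features work."',  # input data
    "Return one sentence per review, then an overall verdict.",       # output format
])

print(after)
```

The "before" prompt forces the model to guess at audience, scope, and format; the "after" prompt pins all three down before the model writes a single word.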
Anatomy of a Complete Prompt
Here is a fully annotated prompt with each component labeled and working together:
```
# ROLE (sets expertise and tone)
You are a senior data scientist specializing in NLP.

# TASK (what to do: verb + object)
Classify the sentiment of the following customer reviews.

# CONTEXT (background that shapes the response)
These reviews are for a B2B SaaS product. "Neutral" means
the customer neither recommends nor discourages others.
Use the classifications for a stakeholder dashboard.

# INPUT DATA (the actual content to process)
Reviews:
1. "Onboarding took too long but the core features work."
2. "Transformed our entire reporting workflow. 10/10."
3. "Support team responded in 2 days. Expected faster."

# OUTPUT FORMAT (structure and schema)
Return a JSON array with objects containing:
- "id": review number
- "sentiment": "positive" | "neutral" | "negative"
- "confidence": 0.0 to 1.0
- "key_phrase": the most sentiment-bearing phrase
```
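When you request a JSON schema like this, it pays to validate what comes back before it reaches a dashboard. A minimal sketch, assuming a hypothetical, hand-written response string `raw` shaped like the schema above (real model output and scores will differ):

```python
import json

# Hypothetical model output matching the requested schema (illustrative only).
raw = """[
  {"id": 1, "sentiment": "neutral",  "confidence": 0.72, "key_phrase": "core features work"},
  {"id": 2, "sentiment": "positive", "confidence": 0.97, "key_phrase": "10/10"},
  {"id": 3, "sentiment": "negative", "confidence": 0.81, "key_phrase": "Expected faster"}
]"""

results = json.loads(raw)

# Enforce the schema requested in the OUTPUT FORMAT section.
for item in results:
    assert set(item) == {"id", "sentiment", "confidence", "key_phrase"}
    assert item["sentiment"] in {"positive", "neutral", "negative"}
    assert 0.0 <= item["confidence"] <= 1.0

print(len(results), "reviews classified")
```

A specific output format in the prompt is what makes this kind of mechanical validation possible; without it, you would be parsing free-form prose.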
The Order of Components Matters
Claude reads prompts sequentially, so ordering matters: in practice, task-first ordering, with context and input data following and format requirements last, produces the most focused responses.
Practice Exercise
Take this weak prompt and rewrite it using all four components:
Explain neural networks.
Consider: Who is the audience? What depth? What format? What specific aspect? Then try your rewrite in the Playground.