Large language models are remarkably capable — and remarkably sensitive to how you phrase your request. Prompt engineering is the practice of crafting inputs that reliably elicit the output you want. This guide covers the techniques that consistently work across GPT-4, Claude, Gemini, and similar models.
## Why Prompting Matters
The same model produces radically different outputs depending on:
- Specificity — vague prompts get vague answers
- Context — the model only knows what you tell it
- Format instructions — "reply in JSON" changes the output structure
- Persona — "act as a senior code reviewer" shifts the tone and depth
Prompting is not magic — it's structured communication.
## Technique 1: Zero-Shot Prompting
The simplest approach: just ask.
```
Summarize this article in 3 bullet points.

[article text here]
```

Works well for tasks within the model's training distribution. Fails when the task is unusual or requires specific reasoning steps.
When to use: Simple, well-defined tasks — classification, summarization, translation.
## Technique 2: Few-Shot Prompting
Show the model examples of the input/output format you expect.
```
Classify the sentiment as Positive, Negative, or Neutral.

Text: "This framework is incredibly fast."
Sentiment: Positive

Text: "The documentation is missing too many details."
Sentiment: Negative

Text: "Installation took about 5 minutes."
Sentiment: Neutral

Text: "Deploying was a nightmare."
Sentiment:
```

The model infers the pattern from your examples and applies it to the new input.
When to use: Custom classification, structured extraction, output formatting.
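Few-shot prompts like the one above are easy to generate programmatically, which keeps the shots in data instead of hard-coded strings. A minimal sketch; the `Example` type and `buildFewShotPrompt` helper are illustrative names, not a library API:

```typescript
// Build a few-shot classification prompt from labeled examples.
// Illustrative helper, not a library API.
type Example = { text: string; label: string };

function buildFewShotPrompt(
  instruction: string,
  examples: Example[],
  input: string
): string {
  // Render each labeled example in the same Text/Sentiment shape.
  const shots = examples
    .map((e) => `Text: "${e.text}"\nSentiment: ${e.label}`)
    .join("\n\n");
  // End with the new input and a blank label for the model to fill in.
  return `${instruction}\n\n${shots}\n\nText: "${input}"\nSentiment:`;
}

const prompt = buildFewShotPrompt(
  "Classify the sentiment as Positive, Negative, or Neutral.",
  [
    { text: "This framework is incredibly fast.", label: "Positive" },
    { text: "The documentation is missing too many details.", label: "Negative" },
  ],
  "Deploying was a nightmare."
);
```

Keeping examples in a data structure makes it easy to add or swap shots as you discover failure cases.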
## Technique 3: Chain-of-Thought (CoT)
For complex reasoning, ask the model to think step by step before answering.
```
A user's subscription started on March 1st and lasts 30 days.
Today is April 3rd. Is the subscription active?
Think step by step before answering.
```

Without CoT, models often produce the wrong answer by jumping to a conclusion. CoT forces intermediate reasoning, which dramatically improves accuracy on math, logic, and multi-step problems.
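Because the nudge is just appended text, it can be applied uniformly to every reasoning prompt. A tiny sketch (the `withCoT` helper name is made up):

```typescript
// Append a chain-of-thought nudge to any prompt (illustrative helper).
function withCoT(prompt: string): string {
  return `${prompt}\n\nThink step by step before answering.`;
}

const cotPrompt = withCoT(
  "A 30-day subscription started on March 1st. Today is April 3rd. Is it active?"
);
```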
Variant — Zero-Shot CoT: Just append "Let's think step by step."
```
What is 17 × 24? Let's think step by step.
```

## Technique 4: System Prompts & Personas
Most APIs expose a system role — use it to define the model's behavior globally:
```json
{
  "role": "system",
  "content": "You are a senior TypeScript developer reviewing pull requests. Be concise, point out potential bugs, and suggest idiomatic fixes. Never rewrite correct code."
}
```

A well-written system prompt eliminates the need to repeat instructions in every user message.
Persona examples:
- "You are a strict security auditor..."
- "You are a friendly onboarding assistant for non-technical users..."
- "You are a Socratic tutor — never give answers, only ask guiding questions."
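In code, the system prompt is simply the first message in the conversation array. A sketch assuming the common `{ role, content }` message shape; no specific provider SDK is implied, and `withPersona` is an illustrative name:

```typescript
// Assemble a chat request with a system persona as the first message.
// The { role, content } shape is the common convention; adapt to your SDK.
type Message = {
  role: "system" | "user" | "assistant";
  content: string;
};

function withPersona(persona: string, userPrompt: string): Message[] {
  return [
    { role: "system", content: persona },
    { role: "user", content: userPrompt },
  ];
}

const messages = withPersona(
  "You are a strict security auditor. Flag unsafe patterns; never rewrite correct code.",
  "Review this login handler for vulnerabilities."
);
```

Centralizing the persona in one helper keeps every request consistent and makes the persona easy to version and A/B test.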
## Technique 5: Constrained Output Format
When you need machine-readable output, specify it explicitly:
```
Extract the following fields from the job posting below and return valid JSON only.
No markdown, no prose — only the JSON object.
Fields: title, company, location, salary_range, remote (boolean)

Job posting:
[paste here]
```

For code generation, specify the language and constraints:
```
Write a TypeScript function that validates an email address.
Requirements:
- Pure function, no side effects
- Returns { valid: boolean; reason?: string }
- Handle edge cases: empty string, missing @, no TLD
- No external libraries
```

## Technique 6: Role Reversal & Iterative Refinement
Ask the model to critique its own output:
```
Here is the function you just wrote:
[paste output]

Now review it as a strict code reviewer. What are the edge cases it misses?
What would you change?
```

Then: "Apply those changes."
This iterative loop catches issues that a single-shot prompt misses.
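The loop can also be automated. A sketch of one draft-critique-revise pass, where `callModel` is a hypothetical stand-in for whatever client call your provider exposes, not a real API:

```typescript
// Draft, critique, revise: each step is a separate model call.
// callModel is a hypothetical stand-in for your provider's client.
type CallModel = (prompt: string) => string;

function refineOnce(callModel: CallModel, task: string): string {
  // 1. First attempt at the task.
  const draft = callModel(task);
  // 2. Ask the model to review its own output as a strict reviewer.
  const critique = callModel(
    `Here is the function you just wrote:\n${draft}\n\n` +
      `Now review it as a strict code reviewer. What edge cases does it miss?`
  );
  // 3. Apply the critique to produce the revised version.
  return callModel(
    `Original draft:\n${draft}\n\nReview:\n${critique}\n\nApply those changes.`
  );
}
```

One critique pass often catches most issues; further passes tend to add cost faster than quality.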
## Technique 7: Retrieval-Augmented Generation (RAG) in Prompts
When the model doesn't have the context it needs, paste it in:
```
Use ONLY the following documentation to answer the question.
If the answer is not in the docs, say "I don't know."
---
[paste relevant docs here]
---
Question: How do I enable SSR caching in this framework?
```

This grounds the model in your specific context and reduces hallucination from training data.
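Assembling that grounded prompt from retrieved snippets is plain string work. A minimal sketch; `groundedPrompt` is an illustrative name, and the doc snippet in the usage example is made up:

```typescript
// Join retrieved doc snippets into a grounded prompt with explicit
// delimiters and an "I don't know" escape hatch.
function groundedPrompt(docs: string[], question: string): string {
  return [
    "Use ONLY the following documentation to answer the question.",
    `If the answer is not in the docs, say "I don't know."`,
    "---",
    docs.join("\n\n"),
    "---",
    `Question: ${question}`,
  ].join("\n");
}

const ragPrompt = groundedPrompt(
  ["SSR caching is enabled via a flag in the framework config file."], // made-up snippet
  "How do I enable SSR caching in this framework?"
);
```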
## Anti-Patterns to Avoid
| Anti-pattern | Better alternative |
|---|---|
| "Tell me about React" | "Explain React's reconciliation algorithm to someone who knows Vue" |
| "Fix my code" | "Here is my code and the exact error. What is causing this and how do I fix it?" |
| Over-constraining format | Give a format example, not a 10-line schema description |
| Asking multiple questions at once | One focused question per prompt |
| Not providing context | Include error messages, stack traces, framework version |
## Prompt Template for Code Tasks
A reliable template for development use cases:
```
Context: [project type, language, framework]
Task: [one specific thing you want]
Constraints: [length, libraries to avoid, style rules]
Output format: [code only / explanation + code / JSON]
Current code (if any):
[paste here]
```

## Evaluating Your Prompts
A good prompt produces:
- Correct output on the first try (or within 1-2 iterations)
- Consistent output when run multiple times (low variance)
- Scoped output — the model doesn't add unrequested content
If any of these fail, tighten specificity, add examples, or break the task into smaller sub-prompts.
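The consistency criterion can be checked mechanically by re-running the same prompt and counting distinct answers. A sketch, with `callModel` again a hypothetical stand-in for your provider's client:

```typescript
// Count distinct outputs across repeated runs of one prompt.
// A result of 1 means fully consistent output.
// ModelCall is a hypothetical stand-in for your provider's client.
type ModelCall = (prompt: string) => string;

function distinctOutputs(
  callModel: ModelCall,
  prompt: string,
  runs: number
): number {
  const seen = new Set<string>();
  for (let i = 0; i < runs; i++) {
    // Trim so trailing whitespace doesn't count as a different answer.
    seen.add(callModel(prompt).trim());
  }
  return seen.size;
}
```

Run it with temperature and prompt held fixed; a rising count after a prompt change is a quick signal that the change hurt reliability.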
Prompt engineering is a craft. Keep a library of prompts that work, note what changed when they fail, and iterate. The models improve every few months — revisit your templates regularly.