Here is a prompt that gets a confident, wrong answer:
What is 15% of the annual revenue if monthly revenue is
$47,000 and it grows 3% each month?
The AI responds: "$84,600." It sounds right. It is wrong. The model jumped straight to a calculation without accounting for compound monthly growth, and the error is invisible because it never showed its work.
Now the same question with one structural change:
What is 15% of the annual revenue if monthly revenue starts
at $47,000 and grows 3% each month?
Think step by step:
1. Calculate each month's revenue (Month 1 = $47,000,
Month 2 = $47,000 x 1.03, and so on)
2. Sum all 12 months for the annual total
3. Take 15% of that total
Show your work for each step.
The AI now lists every month: $47,000... $48,410... $49,862... all the way through Month 12. It sums them to roughly $667,025. Then it takes 15%: about $100,054. You can check each step. The answer is verifiable.
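You can verify the compound-growth arithmetic yourself; here is a minimal sketch in Python that reproduces the three steps the prompt asks for:

```python
# Verify the chain-of-thought arithmetic: 12 months of revenue
# starting at $47,000, growing 3% per month, then 15% of the total.
start = 47_000
growth = 1.03

monthly = [start * growth**m for m in range(12)]  # Month 1 through Month 12
annual = sum(monthly)
fifteen_pct = 0.15 * annual

print(f"Month 2 revenue: ${monthly[1]:,.0f}")    # $48,410
print(f"Annual total:    ${annual:,.0f}")        # $667,025
print(f"15% of annual:   ${fifteen_pct:,.0f}")   # $100,054
```

Note how far this lands from the naive answer of $84,600 (which is just 15% of $47,000 x 12, ignoring growth). The gap between the two numbers is exactly the error the step-by-step prompt makes visible.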
In Lessons 1 and 2, you learned CGC structure and role prompting. This lesson adds the technique that makes AI reliable on reasoning tasks: forcing it to show its work.
After this lesson, you will be able to: use chain-of-thought prompting to get accurate, verifiable answers on math, logic, and multi-step analysis -- and use few-shot examples to teach the AI your exact output format.
When you ask for a final answer directly, the model predicts the most likely conclusion in one jump. When you ask it to reason step by step, each step becomes context for the next one. Errors that would compound silently in a single jump get caught because each intermediate result is visible.
Research confirms this: adding "think step by step" to complex prompts measurably improves accuracy on math, logic, and multi-step reasoning across Claude, GPT-4, and Gemini.
The key insight: you are not just asking the AI to explain its answer. You are changing how it computes the answer. The reasoning steps are not decoration -- they are part of the computation.
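In practice, the change is made entirely in the prompt text. A minimal sketch of wrapping any question with step-by-step instructions (the function name and exact wording here are illustrative, not a library API):

```python
def with_chain_of_thought(question: str) -> str:
    """Append step-by-step instructions so the model's intermediate
    results become visible context -- and therefore checkable."""
    return (
        f"{question}\n\n"
        "Think step by step:\n"
        "1. List what you know and what you are computing\n"
        "2. Work through each intermediate result\n"
        "3. State the final answer\n"
        "Show your work for each step."
    )

prompt = with_chain_of_thought(
    "What is 15% of the annual revenue if monthly revenue starts "
    "at $47,000 and grows 3% each month?"
)
```

Send `prompt` to any model; the same question now produces a verifiable walkthrough instead of a single opaque number.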
Use this whenever the task involves numbers, comparisons, or multi-step decisions.
Act as a [role] analyzing [situation].
Before giving your final answer, work through these steps:
1. Identify the key variables and what we know about each
2. State your assumptions explicitly
3. Work through the analysis step by step, showing calculations
4. Sanity-check your answer -- does it pass a common-sense test?
5. Give your final recommendation in 2-3 sentences
[Your specific question here]
Expected output: A structured walkthrough where you can verify each step. If the AI's assumption in Step 2 is wrong, you catch it immediately instead of getting a polished wrong answer. The sanity check in Step 4 catches errors like "the market grew 400% year over year" that the model might otherwise present as fact.
Use this when you need a specific output format, or when the AI keeps misunderstanding what you want. Instead of describing the format, show it.
Classify customer emails by urgency: critical, standard,
or low-priority.
Example 1:
Email: "Our entire team is locked out of the platform and
we have a client presentation in 2 hours."
Urgency: Critical
Reason: Production blocker with immediate business impact.
Example 2:
Email: "The export button sometimes takes 30 seconds to load."
Urgency: Standard
Reason: Functional issue, not blocking core workflows.
Example 3:
Email: "Can you update the color of our dashboard header?"
Urgency: Low-priority
Reason: Cosmetic preference, no functional impact.
Now classify:
Email: "[Paste the actual email here]"
Urgency:
Reason:
Expected output: A classification that matches your exact format -- urgency label plus one-line reason. Because you showed three examples with clear reasoning, the AI understands your criteria for "critical" vs. "standard" vs. "low-priority." Without examples, "critical" means something different to every model run.
Few-shot prompting is especially powerful when the task involves subjective judgment. Your examples define "correct" -- not the AI's default interpretation.
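If you reuse a few-shot prompt across many emails, assembling it programmatically keeps every run in exactly the same format. A minimal sketch, with illustrative function and field names:

```python
def build_few_shot_prompt(instruction, examples, new_input):
    """Assemble a few-shot classification prompt: the instruction,
    the worked examples, then the new item in the same format."""
    parts = [instruction, ""]
    for i, ex in enumerate(examples, start=1):
        parts += [
            f"Example {i}:",
            f'Email: "{ex["email"]}"',
            f'Urgency: {ex["urgency"]}',
            f'Reason: {ex["reason"]}',
            "",
        ]
    parts += ["Now classify:", f'Email: "{new_input}"', "Urgency:", "Reason:"]
    return "\n".join(parts)

examples = [
    {"email": "Our entire team is locked out of the platform.",
     "urgency": "Critical",
     "reason": "Production blocker with immediate business impact."},
    {"email": "Can you update the color of our dashboard header?",
     "urgency": "Low-priority",
     "reason": "Cosmetic preference, no functional impact."},
]

prompt = build_few_shot_prompt(
    "Classify customer emails by urgency: critical, standard, "
    "or low-priority.",
    examples,
    "The export button sometimes takes 30 seconds to load.",
)
```

The prompt ends with the bare `Urgency:` and `Reason:` labels, which nudges the model to complete them in your format rather than writing a free-form paragraph.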
This stacks everything you have learned in the course so far into one template.
Act as a [role] with expertise in [domain].
[CONTEXT]
[2-3 sentences about the situation, audience, and stakes]
[GOAL]
Analyze [the specific question] and give me a recommendation.
[CHAIN OF THOUGHT]
Before answering, work through this:
1. What are the 3 most important factors in this decision?
2. For each factor, what does the evidence suggest?
3. What is the strongest argument AGAINST your recommendation?
4. Given all of the above, what do you recommend and why?
[CONSTRAINTS]
Tone: [tone]. Length: under [number] words.
Format the final recommendation as a single paragraph
preceded by the step-by-step analysis.
Expected output: A structured analysis where you can see the reasoning, followed by a clear recommendation. Step 3 -- arguing against its own recommendation -- is the technique that prevents the AI from simply confirming whatever direction it picked first.
This template works for vendor evaluations, strategic decisions, hiring assessments, and any situation where you need a justified recommendation, not just an opinion.
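Because the template has fixed structure and a handful of slots, it is easy to fill programmatically when you run the same kind of analysis repeatedly. A minimal sketch (parameter names are illustrative, not a standard API):

```python
def combined_prompt(role, domain, context, goal, tone, max_words):
    """Fill the combined role + CGC + chain-of-thought template."""
    return f"""Act as a {role} with expertise in {domain}.

[CONTEXT]
{context}

[GOAL]
Analyze {goal} and give me a recommendation.

[CHAIN OF THOUGHT]
Before answering, work through this:
1. What are the 3 most important factors in this decision?
2. For each factor, what does the evidence suggest?
3. What is the strongest argument AGAINST your recommendation?
4. Given all of the above, what do you recommend and why?

[CONSTRAINTS]
Tone: {tone}. Length: under {max_words} words.
Format the final recommendation as a single paragraph
preceded by the step-by-step analysis."""

p = combined_prompt(
    role="senior financial analyst",
    domain="SaaS pricing",
    context="We are deciding between two billing vendors for Q3. "
            "The audience is our VP of Finance; switching costs are high.",
    goal="which vendor we should commit to",
    tone="direct",
    max_words=300,
)
```

Swap in your own role, context, and goal; the chain-of-thought steps and constraints stay fixed so every analysis follows the same verifiable shape.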
Use chain-of-thought when:
- The task involves math, comparisons, or multi-step decisions
- A wrong intermediate step would be expensive to miss
- You need to verify the reasoning, not just trust the answer
Skip it when:
- The task is a simple rewrite, summary, or format conversion
- The answer is easy to check at a glance
Adding CoT to a simple task wastes tokens without improving quality. Adding it to a complex task can be the difference between a useful analysis and an expensive mistake.
Take a real decision or calculation from your work. Use Template 1 (Structured Reasoning Prompt) with this exact setup:
Act as a senior analyst reviewing a business decision.
Before giving your final answer, work through these steps:
1. Identify the key variables and what we know about each
2. State your assumptions explicitly
3. Work through the analysis step by step
4. Sanity-check your answer against common sense
5. Give your final recommendation in 2-3 sentences
[Paste your real question here -- a budget allocation, a
vendor comparison, a project timeline estimate, etc.]
Check your output: Look at Step 2 (assumptions). If any assumption is wrong, correct it and re-run. This is the power of chain-of-thought: the reasoning is visible, so the errors are fixable. A one-shot answer hides its assumptions from you.
You now have three core techniques: CGC structure, role prompting, and chain-of-thought reasoning. In the next lesson, you will apply all three to the most common writing tasks -- emails, content, proposals -- with copy-paste templates you can use the same day.