You ask ChatGPT to write a thank-you email to a client. Ten seconds later, you have three polished paragraphs that sound exactly like something you would write — except you did not write them. It feels like the machine understood your request. Like it thought about the right tone and word choice.
It did not. What actually happened is far simpler, and understanding it will change how you use every AI tool from this point forward.
After this lesson, you will be able to explain how AI generates text — in one sentence, to anyone — and predict when it will perform well versus when it will fail.
You already use a simple version of AI every day. When you type "See you" on your phone, it suggests "tomorrow" or "later" or "soon." Your phone learned those patterns from millions of text messages.
Now imagine that same autocomplete, but instead of learning from text messages, it learned from the entire internet — every book, every article, every forum post, every Wikipedia page. And instead of predicting one word, it predicts entire paragraphs and essays.
That is a Large Language Model, or LLM. Autocomplete on steroids.
Every AI system you have used — ChatGPT, Claude, Gemini, all of them — does one thing at its core: it predicts the next word. No thinking. No understanding. No consciousness behind the screen. Just prediction, one word at a time, repeated over and over and stitched together into something that looks remarkably intelligent.
Here is the training process in plain English:
Step 1: Read everything. The model ingests billions of pages of text. Books, news, Reddit threads, legal documents, recipes, academic papers.
Step 2: Find patterns. It notices things like: after "The capital of France is," the word "Paris" appears almost every time. After "Dear hiring manager," a certain style of language follows. It builds a detailed map of how language works — not what language means, but how words tend to follow other words.
Step 3: Practice predicting. The model is shown the beginning of a sentence and asked to guess what comes next. When it guesses wrong, its internal settings are nudged so the right word becomes more likely next time. This happens billions of times until it becomes extraordinarily good at predicting plausible text in almost any context.
There is no step where the model "learns to think." It becomes a world-class pattern matcher. And world-class pattern matching produces behavior that looks a lot like intelligence.
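If you are curious what "find patterns, then predict" looks like in miniature, here is a toy sketch in Python. It is emphatically not how real LLMs work — they use neural networks trained on tokens, not simple word counts — but it captures the core idea: count which word tends to follow which, then predict the most common follower. The tiny corpus and word choices are invented for illustration.

```python
from collections import Counter, defaultdict

# A tiny "training corpus" standing in for billions of pages of text.
corpus = (
    "see you tomorrow . see you later . see you soon . "
    "see you tomorrow . the capital of france is paris . "
    "the capital of france is paris ."
).split()

# Step 2 (find patterns): count which word follows which.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Step 3 (predict): return the word seen most often after `word`."""
    if word not in following:
        return None  # no pattern learned, so nothing to predict
    return following[word].most_common(1)[0][0]

print(predict_next("you"))   # "tomorrow" follows "you" most often in this corpus
print(predict_next("is"))    # "paris"
print(predict_next("7849"))  # None: arithmetic never appeared in the training text
```

Notice what the last line shows: the model has no opinion about numbers it never saw. A real LLM, having seen far more text, will always produce *something* plausible-looking instead of admitting "no pattern" — which previews both the math problem and the hallucination problem discussed below.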
Once you hold this mental model — AI is prediction, not thinking — a lot of confusing AI behavior clicks into place.
Why AI is great at writing emails: It has seen millions of emails. It knows the patterns cold. Predicting what comes next in an email is exactly what it trained for.
Why AI is terrible at math: "What is 7,849 times 3,271?" is not a pattern you can predict from reading text. The model is not calculating — it is predicting what a correct-looking answer looks like. Sometimes it gets close. Sometimes it is wildly wrong.
Why AI sometimes makes things up: If you ask about a niche topic, the model may not have a strong pattern to follow. So it predicts something plausible — a court case that sounds real, with a real-sounding citation — because that is what the pattern of "answering a legal question" looks like. This is called a "hallucination," and we will dig into it in the next lesson.
Why the quality of your input matters so much: Vague input gives the model too many plausible directions to predict. Specific input narrows the prediction to something useful. "Write something about marketing" could go anywhere. "Write a 100-word LinkedIn post announcing our new bakery location in Coral Gables, targeting local families" gives the prediction engine a clear lane.
Understanding that AI is a prediction engine — not a thinking engine — gives you two practical rules:
Trust AI with patterns. Drafting, summarizing, reformatting, translating. These are prediction-friendly tasks where the model has seen millions of examples.
Verify AI on facts. Specific numbers, citations, anything that requires looking something up rather than predicting what looks right. The model does not know things the way you know things. It predicts what a correct answer looks like, which is not the same as being correct.
AI is not going to replace your judgment. It is going to give you a very good first draft that still needs a human to verify, edit, and approve.
Pick a task you did at work this week — an email, a report, a spreadsheet, a meeting summary. Ask yourself: "Is this task mostly about following a pattern (like drafting), or mostly about being factually precise (like calculating)?"
If the answer is "pattern," AI will probably help. If the answer is "precision," you will need to check its work carefully. If it is both, AI can handle the pattern part while you handle the precision part.
That single question — pattern or precision? — is the most useful filter for deciding when to use AI. You will build on it throughout this course.
Now that you know AI is a prediction engine, the natural question is: what does it predict well, and where does it fall apart? In the next lesson, we will map exactly where AI excels, where it consistently fails, and why hallucinations are a built-in consequence of how prediction works.