Your AI-powered customer support bot just told a customer they are entitled to a full refund under a policy that does not exist. The customer screenshots it and posts it on Twitter. Your legal team is calling. Your CEO wants to know who approved this. You need to honor the commitment and figure out how this happened -- but more importantly, you need to make sure it never happens again.
This is not a hypothetical. Variants of this scenario have played out in public: Air Canada was ordered by a tribunal to honor a refund policy its chatbot invented, a New York law firm was sanctioned for citing fake cases generated by AI, and dozens of companies have watched AI hallucinations become public embarrassments. This lesson gives you a risk register and a guardrails framework so you can identify and mitigate these risks before deployment, not after.
By the end, you will have a completed AI Risk Register that scores each risk by likelihood and severity for your specific deployment plan, plus a one-page guardrails policy you can share with your team on day one.
LLMs generate text that sounds confident even when it is completely fabricated. They do not know facts -- they predict the most likely next token.
| Aspect | Details |
|--------|---------|
| Business impact | AI cites a nonexistent refund policy, invents a product feature, or fabricates a statistic. You are now liable for the claim. |
| Likelihood | High. Every LLM hallucinates. The rate varies by task -- factual recall is worse than creative writing. |
| Mitigation | Human review before any AI output reaches customers. Use RAG (retrieval-augmented generation) to ground responses in your verified data. Never let AI make financial, medical, or legal claims without human sign-off. |
When you send data to an AI provider, it leaves your control. Where it goes, who sees it, and whether it trains the model depends on the provider's policy.
| Aspect | Details |
|--------|---------|
| Business impact | Employee pastes a confidential client contract into ChatGPT. Support agent's AI tool logs all customer conversations to a third-party server. Trade secrets end up in training data. |
| Likelihood | High without policies. Medium with clear data classification and training. |
| Mitigation | Read the data policy of every AI tool (storage, training, retention). Create a data classification list: approved vs. off-limits for AI. Use enterprise tiers with data isolation for sensitive workflows. Never send PII, trade secrets, or financial records without explicit acceptance of the provider's terms. |
Models reflect biases in training data. Outputs may systematically disadvantage certain groups.
| Aspect | Details |
|--------|---------|
| Business impact | AI screening tool deprioritizes qualified candidates from certain backgrounds. Marketing copy defaults to stereotypical language. Lead scoring penalizes viable customers based on biased patterns. |
| Likelihood | Medium. Higher for hiring, lending, pricing, and any task affecting people's opportunities. |
| Mitigation | Audit AI outputs regularly for protected categories. Test prompts with diverse scenarios. Keep a human in the loop for any decision affecting employment, credit, pricing, or access. |
Depending on your industry, automated decisions may trigger regulatory requirements you did not plan for.
| Aspect | Details |
|--------|---------|
| Business impact | GDPR (EU) requires disclosure of automated decision-making and a right to human review. HIPAA (healthcare) restricts processing of patient data. SOC 2 requires documentation of data handling in automated systems. CCPA (California) gives consumers rights over data used in profiling. Financial regulations require explainability for credit and lending decisions. |
| Likelihood | Medium to high for regulated industries. Low for internal productivity tools. |
| Mitigation | Consult legal counsel before deploying AI in any regulated area. Document how AI systems work, what data they use, and who is accountable. Build the ability to explain any AI-driven decision in plain language. Maintain audit logs. |
Your AI vendor raises prices 300%, gets acquired, or sunsets the product. Your workflows break.
| Aspect | Details |
|--------|---------|
| Business impact | Critical workflows go offline. Switching costs are high because prompts, configurations, and integrations are vendor-specific. |
| Likelihood | Medium. The AI tool market is volatile -- pricing changes, acquisitions, and shutdowns are common. |
| Mitigation | Maintain export capability for all configurations. Keep prompt libraries in your own documentation (not just in the vendor platform). Test fallback workflows quarterly. Get pricing commitments in writing. |
Team stops thinking critically about AI outputs. Quality declines because nobody is checking the work.
| Aspect | Details |
|--------|---------|
| Business impact | AI-generated reports go out with errors nobody catches. The team loses the ability to do the task manually. When the AI fails, nobody knows how to recover. |
| Likelihood | Medium. Increases over time as AI becomes routine. |
| Mitigation | Mandatory review processes that cannot be skipped. Periodic "manual days" where the team does tasks without AI. Track error rates monthly -- if they are rising, review processes have broken down. |
Score each risk for YOUR planned deployment using this 5-point scale:
Likelihood: 1 = Very unlikely, 2 = Unlikely, 3 = Possible, 4 = Likely, 5 = Very likely
Severity: 1 = Minor inconvenience, 2 = Moderate cost, 3 = Significant damage, 4 = Major financial/reputational harm, 5 = Existential threat (lawsuit, regulatory action, business-ending)
Risk Score = Likelihood x Severity. Maximum: 25.
| Risk | Likelihood (1-5) | Severity (1-5) | Risk Score | Mitigation Plan | Owner |
|------|------------------|----------------|------------|-----------------|-------|
| Hallucination | | | | | |
| Data privacy breach | | | | | |
| Bias/discrimination | | | | | |
| Compliance violation | | | | | |
| Vendor dependency | | | | | |
| Over-reliance/skill erosion | | | | | |
Interpretation:
| Risk Score | Action Required |
|------------|-----------------|
| 15-25 | Stop. Do not deploy until mitigation is in place and verified. |
| 8-14 | Proceed with controls. Mitigation must be active before launch. |
| 3-7 | Monitor. Acceptable risk with standard review processes. |
| 1-2 | Accept. Log and revisit quarterly. |
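If you track the register in a spreadsheet or script, the scoring and action bands above are simple to automate. A minimal sketch in Python -- the example risks and scores are illustrative placeholders, not recommendations for your deployment:

```python
# Risk Score = Likelihood x Severity, mapped to the action bands above.
def risk_action(likelihood: int, severity: int) -> tuple[int, str]:
    """Return (score, required action) for 1-5 likelihood and severity ratings."""
    if not (1 <= likelihood <= 5 and 1 <= severity <= 5):
        raise ValueError("likelihood and severity must each be 1-5")
    score = likelihood * severity
    if score >= 15:
        action = "Stop: do not deploy until mitigation is in place and verified"
    elif score >= 8:
        action = "Proceed with controls: mitigation active before launch"
    elif score >= 3:
        action = "Monitor: standard review processes"
    else:
        action = "Accept: log and revisit quarterly"
    return score, action

# Hypothetical scores for a customer-facing chatbot deployment:
register = {
    "Hallucination": (4, 4),
    "Data privacy breach": (3, 4),
    "Vendor dependency": (3, 2),
}
for risk, (likelihood, severity) in register.items():
    score, action = risk_action(likelihood, severity)
    print(f"{risk}: {score} -> {action}")
```

Hallucination at 4 x 4 = 16 lands in the "Stop" band, which matches the intent of the table: the highest-scoring risks block deployment until mitigation is verified.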
Before any AI workflow goes live, answer these five questions. If you cannot answer all five with confidence, pause the deployment.
| Question | Your Answer | If "No" |
|----------|-------------|---------|
| Who is affected? Customers, employees, partners, candidates -- everyone who interacts with or is impacted by this AI. | | Identify all affected parties before proceeding. |
| What happens when it is wrong? Define the worst case: embarrassing, costly, harmful, or illegal? | | The answer determines how much human oversight you need. |
| Can we explain how it works? If a customer or regulator asks, can you give a clear, honest answer? | | You are not ready to deploy. |
| Can we turn it off? If AI starts producing bad results, can you switch to manual immediately? | | Build a kill switch and fallback process first. |
| Are we being transparent? Do affected people know they are interacting with AI? | | Hiding AI involvement erodes trust. Disclose. |
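The checklist is a hard gate: a single "no" pauses the launch. If you automate your launch process, that rule can be encoded directly. A minimal sketch -- the question keys are shorthand invented here for the five checklist questions:

```python
# Shorthand keys for the five pre-deployment questions (illustrative names).
CHECKLIST = [
    "affected_parties_identified",
    "worst_case_defined",
    "explainable_to_regulator",
    "kill_switch_exists",
    "ai_use_disclosed",
]

def ready_to_deploy(answers: dict[str, bool]) -> bool:
    """True only if every checklist question is answered 'yes'.

    A missing answer counts as 'no' -- an unanswered question
    is the same as failing it.
    """
    return all(answers.get(question, False) for question in CHECKLIST)

answers = {question: True for question in CHECKLIST}
answers["kill_switch_exists"] = False  # no fallback process built yet
print(ready_to_deploy(answers))  # False -> pause the deployment
```

Treating a missing answer as "no" is deliberate: it forces the team to answer all five questions explicitly rather than skipping one by accident.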
You do not need a 50-page AI ethics document to start. You need one page with five sections:
1. Approved uses. List the specific tasks AI may be used for. Example: "AI may draft customer emails, generate social media captions, and summarize meeting notes."
2. Prohibited uses. List what AI may not do. Example: "AI may not access production databases, make hiring decisions, generate financial projections for external use, or process customer PII without enterprise-tier data isolation."
3. Review requirements. Who reviews AI output before it reaches customers? Example: "All customer-facing AI output must be reviewed and approved by a team member before sending."
4. Data rules. What data can and cannot be sent to AI tools? Example: "No customer PII, trade secrets, financial records, or confidential client data may be included in AI prompts unless using our enterprise tool with data isolation."
5. Incident process. What happens when AI produces a harmful output? Example: "Report to AI Champion within 1 hour. AI Champion documents the incident, pauses the workflow if needed, and updates guardrails within 48 hours."
Two deliverables:
Deliverable 1: Fill in the Risk Register above for your planned Phase 1 and Phase 2 deployments from Lesson 4. Score every risk. If any score is 15+, write out the specific mitigation before you proceed.
Deliverable 2: Write your one-page guardrails policy using the five-section template above. Keep it specific to YOUR business -- not generic. This document should be shareable with your entire team by end of day.
Your output should have: 6 scored risks with mitigation plans, and a 1-page policy document. Revisit both quarterly as your AI usage grows.
You have the opportunity list, the implementation path, the ROI projection, the rollout timeline, and now the risk mitigation plan. One piece remains: who actually does the work? Lesson 6 covers the roles you need, when to hire versus outsource, market rates for AI talent, and a vendor evaluation scorecard. Bring your roadmap from Lesson 4 -- you will map roles to each phase.