Personal Dev Risk
10 cards — 🟢 3 easy | 🟡 4 medium | 🔴 3 hard
🟢 Easy (3)
1. What is a risk matrix and what are its two axes?
A risk matrix plots risks on a grid with likelihood (probability) on one axis and impact (severity) on the other. Each risk gets placed in a cell, making it easy to compare and prioritize. High-likelihood, high-impact risks demand immediate attention. Low-likelihood, low-impact risks may be accepted. The matrix forces explicit assessment instead of gut-feel prioritization.

2. What is a pre-mortem and how does it differ from a post-mortem?
A pre-mortem is conducted BEFORE a project starts: the team imagines it has already failed and works backward to identify what went wrong. A post-mortem examines failure after it happens. The pre-mortem's advantage is that it bypasses optimism bias — once you accept failure as a premise, people freely identify risks they would otherwise suppress to avoid seeming negative.

3. What is the Swiss cheese model of accident causation?
James Reason's Swiss cheese model says that systems have multiple layers of defense (like slices of Swiss cheese), each with holes (weaknesses). An accident happens only when holes in multiple layers align, allowing a hazard to pass through all defenses. No single failure causes a disaster — it takes a chain of failures. This model shifts focus from blaming individuals to fixing systemic weaknesses in each layer.

🟡 Medium (4)
1. What is a black swan event and why do traditional risk methods handle it poorly?
A black swan (Nassim Taleb's term) is an event that is rare, has extreme impact, and is rationalized as predictable only in hindsight. Traditional risk methods handle it poorly because they rely on historical data and normal distributions, which underestimate tail events. Risk matrices cap impact at "high" when the real question is whether an event could be catastrophic. Defense: build resilience and optionality rather than trying to predict specific black swans.

2. What is normalization of deviance and why is it dangerous?
Normalization of deviance (Diane Vaughan's term from the Challenger disaster) is the gradual process by which unacceptable practices become acceptable because "nothing bad happened last time." Each small violation without consequence resets the baseline of what is considered normal. It is dangerous because the risk accumulates invisibly — the system is drifting toward failure while everyone perceives stability. Detection requires actively comparing current practice to the written standard, not just to recent habit.

3. How do you calculate expected value and why is it useful for risk decisions?
Expected value = the sum of (probability × impact) for each possible outcome. Example: a project has a 70% chance of gaining $100K and a 30% chance of losing $50K. EV = (0.7 × 100K) + (0.3 × -50K) = $70K - $15K = $55K. It is useful because it converts uncertain outcomes into a single comparable number, preventing both undue risk aversion on positive-EV bets and reckless gambling on negative-EV ones. Limitation: it ignores variance and ruin risk.

4. What is the difference between risk appetite and risk tolerance?
Risk appetite is the broad level of risk an organization is willing to accept in pursuit of its objectives — a strategic, high-level stance (e.g., "we are aggressive on market risk"). Risk tolerance is the specific, measurable boundary around that appetite (e.g., "we will not risk more than $2M on any single initiative"). Appetite is direction; tolerance is the guardrail. Without explicit tolerance, appetite is just a vague platitude.

🔴 Hard (3)
1. How does defense in depth work and why is a single control insufficient?
Defense in depth uses multiple independent layers of protection so that if one fails, others still block the hazard. Layers include prevention (stop it from happening), detection (notice it quickly), mitigation (reduce impact), and recovery (restore normal state). A single control is insufficient because every control has failure modes — human error, edge cases, maintenance gaps. The question is not whether a control can fail, but what catches the failure when it does.

2. Why are near misses the most valuable and most wasted signal in risk management?
Near misses reveal the same systemic weaknesses that cause real accidents, but without the damage — they are free lessons. They are wasted because: (1) success bias — "nothing bad happened" discourages investigation, (2) reporting friction — near misses are underreported if blame culture exists, (3) they lack the emotional urgency of actual failures. Organizations that systematically collect and analyze near misses find and fix vulnerabilities before they produce real harm.

3. What is optimism bias in risk management and what structural defenses exist against it?