Personal Dev Thinking

84 cards — 🟢 25 easy | 🟡 34 medium | 🔴 25 hard

🟢 Easy (25)

1. What is confirmation bias and why is it especially dangerous during incident response?

Show answer Confirmation bias is the tendency to search for and interpret evidence that supports your existing hypothesis while filtering out contradictory evidence. During incident response, it causes you to latch onto the first plausible cause and interpret ambiguous signals as confirmation, delaying root cause identification. Countermeasure: before investigating, write down what evidence would disprove your hypothesis and search for that first.

2. What is anchoring bias and how does it distort engineering estimates?

Show answer Anchoring is when the first number you encounter disproportionately influences your estimate. If someone says a migration will take 6 months, all subsequent estimates gravitate toward that number regardless of its basis. Countermeasure: generate your estimate independently before hearing others, and use reference class forecasting -- how long did similar projects actually take?

3. What is the availability heuristic and how does it skew engineering priorities?

Show answer The availability heuristic is overweighting information that is easy to recall -- recent events, dramatic failures, or personal experiences. A spectacular outage that made Hacker News gets more mental weight than a slow memory leak costing 10x more over a year. Countermeasure: check the data rather than your memory, and ask whether you are giving something weight because it is important or because it is vivid.

4. What is a decision journal and why does it separate process from outcome?

Show answer A decision journal records the situation, options, reasoning, assumptions, confidence level, and success/failure criteria before you act. It separates process from outcome because good decisions can have bad outcomes (bad luck) and bad decisions can have good outcomes (good luck). Without a journal, outcome bias rewrites your memory of the reasoning. The journal preserves what you actually knew and thought at the time.

5. What is the difference between reversible and irreversible decisions, and how should you treat each?

Show answer Reversible decisions (Type 2, two-way doors) can be undone at low cost — feature flags, A/B tests, trying a new CI tool. These deserve bias toward action: decide and iterate. Irreversible decisions (Type 1, one-way doors) are costly or impossible to reverse — deleting production data, public disclosure, firing someone. These deserve careful analysis, broad input, and a pre-mortem. The trap is treating every decision like Type 1, causing analysis paralysis.

6. Why should decisions be evaluated based on future costs and benefits only, ignoring past investment?

Show answer Past investment is gone regardless of what you decide next. The sunk cost fallacy tricks you into continuing a failing path because of prior time or money spent. The correct frame: "If I were starting fresh today with no prior investment, would I choose this path?" If no, the prior investment is irrelevant. The emotional pull of sunk costs is strong, so this question must be asked explicitly and honestly.

7. What is the Eisenhower Matrix and why do most people spend too much time in the "urgent but not important" quadrant?

Show answer The Eisenhower Matrix sorts tasks by urgency and importance into four quadrants: Q1 (urgent + important) — do now, Q2 (not urgent + important) — schedule and protect, Q3 (urgent + not important) — delegate or batch, Q4 (not urgent + not important) — eliminate. Most people live in Q3 because urgency creates a false sense of importance — responding to every Slack ping, attending every meeting, fighting every small fire. Q2 (strategic work, learning, relationship building, system improvements) is where long-term career leverage lives, but it never feels urgent enough to prioritize.

8. What is the Dunning-Kruger effect and what does the research actually show?

Show answer The Dunning-Kruger effect is the finding that people with low skill in a domain tend to overestimate their ability, while experts tend to slightly underestimate theirs. The mechanism: competence in a domain is required to evaluate competence in that domain. Beginners lack the knowledge to recognize what they do not know. Important nuance: it does not mean "stupid people think they are smart." It means everyone is poorly calibrated in domains where they lack experience. The cure is deliberate self-testing and external feedback.

9. What does good metacognitive calibration look like in practice?

Show answer Good calibration means your confidence matches your actual performance — you know what you know and know what you do not. Practically: (1) you can predict your test scores within a few points, (2) you say "I am not sure" when you genuinely are not, (3) you do not confuse familiarity with understanding, (4) you can identify which parts of a topic you are strong in and which are weak. Poor calibration looks like: feeling certain about wrong answers, being surprised by failure, or studying topics you already know while ignoring gaps.

10. Why is self-testing the most reliable metacognitive tool?

Show answer Self-testing forces you to actually retrieve information or perform a skill, revealing the real state of your knowledge rather than what feels familiar. Re-reading creates an illusion of mastery — the text looks familiar so you feel like you know it. Self-testing breaks this illusion by exposing gaps. It serves double duty: it strengthens memory (the testing effect) AND provides accurate diagnostic information about what you actually know. No other study method simultaneously builds knowledge and monitors it.

11. Why does a vague task almost never get started, and what makes a valid next action?

Show answer A vague task like "Study Kubernetes" is actually a project or goal, not a task. The brain stalls because there is no clear first move. A valid next action meets four criteria: it is physical and visible ("open file X," "type this command"), requires zero decisions before starting, can be done in under 15 minutes, and you can picture yourself doing it right now. If a task starts with "figure out" or "think about," it is not decomposed enough.

12. What are implementation intentions (if-then plans) and why are they more effective than goals alone?

Show answer Implementation intentions link a specific situation to a specific action: "If it is 7 AM and I sit at my desk, then I will open the Terraform module I was working on." Research shows this format doubles follow-through rates compared to simple goals ("I will study Terraform"). It works because the "if" cue triggers the action automatically, bypassing the decision point where avoidance normally wins. Pre-decide the when, where, and first action.

13. What is the two-minute ignition protocol for overcoming task initiation failure?

Show answer When you cannot start: (1) Define the outcome in one sentence, (2) Identify the first physical action ("open values-dev.yaml"), (3) Set a timer for 15 minutes (making the task finite), (4) Start with no editing or planning, (5) At timer expiry, decide to continue or stop. This bypasses the prefrontal cortex's tendency to evaluate and predict outcomes, which is where avoidance lives. Most of the time you will continue -- the hard part was the first 2 minutes.

14. What is the prisoner's dilemma and what is its core lesson?

Show answer Two players each choose to cooperate or defect. If both cooperate, both do well. If one defects while the other cooperates, the defector wins big and the cooperator loses. If both defect, both do poorly. The core lesson: individually rational behavior (defecting) produces collectively worse outcomes than mutual cooperation. This pattern appears in arms races, price wars, and everyday trust situations.
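
The incentive structure is easy to verify in a few lines. This sketch uses the conventional tournament payoffs (3 for mutual cooperation, 5/0 for a lone defector/cooperator, 1 for mutual defection), which the card itself does not specify:

```python
# Prisoner's dilemma payoffs as (my score, opponent's score).
# T=5 (temptation) > R=3 (reward) > P=1 (punishment) > S=0 (sucker).
PAYOFF = {
    ("C", "C"): (3, 3),  # both cooperate: both do well
    ("C", "D"): (0, 5),  # I cooperate, they defect: I lose big
    ("D", "C"): (5, 0),  # I defect, they cooperate: I win big
    ("D", "D"): (1, 1),  # both defect: both do poorly
}

def best_response(opponent_move):
    """My highest-scoring move given the opponent's move."""
    return max(["C", "D"], key=lambda move: PAYOFF[(move, opponent_move)][0])

# Defection dominates: it is the best response to either opponent move...
assert best_response("C") == "D" and best_response("D") == "D"
# ...yet mutual defection (1, 1) leaves both worse off than mutual cooperation (3, 3).
```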

15. What is the difference between a zero-sum and a positive-sum game?

Show answer In a zero-sum game, one player's gain is exactly another's loss — the total value is fixed (poker, territory disputes). In a positive-sum game, cooperation can increase total value so all players gain (trade, knowledge sharing, most workplace collaborations). Treating positive-sum situations as zero-sum leads to unnecessary conflict and wasted opportunity. Most real-world interactions have positive-sum potential.

16. What is the tragedy of the commons and what causes it?

Show answer The tragedy of the commons occurs when individuals acting in self-interest deplete a shared resource, even though it harms everyone. Each person captures the full benefit of using the resource but shares only a fraction of the cost of depletion. Examples: overfishing, shared-nothing team documentation, cloud cost overruns. It is caused by the mismatch between individual incentives and collective welfare, especially when enforcement or coordination is weak.

17. What is the base rate fallacy and why does it matter in everyday reasoning?

Show answer The base rate fallacy is ignoring how common or rare something is in the overall population when evaluating a specific case. Example: a medical test with 99% accuracy still produces mostly false positives if the disease affects only 1 in 10,000 people, because the base rate of the disease is so low that false positives vastly outnumber true positives.
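
The arithmetic behind this example can be checked directly. The sketch assumes "99% accuracy" means both sensitivity and specificity are 0.99, which the card leaves unspecified:

```python
def positive_predictive_value(sensitivity, specificity, base_rate):
    """P(condition | positive test) via Bayes' rule."""
    true_positives = sensitivity * base_rate
    false_positives = (1 - specificity) * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

# 99% "accurate" test, disease prevalence 1 in 10,000:
ppv = positive_predictive_value(0.99, 0.99, 1 / 10_000)
print(f"{ppv:.1%}")  # about 1% -- a positive result is still almost always false
```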

18. What is the difference between correlation and causation, and what is the simplest test to separate them?

Show answer Correlation means two things move together; causation means one actually produces the other. Things can correlate for dumb reasons: a hidden third variable, coincidence, or reverse direction. The simplest test: ask whether there is a plausible mechanism, whether the timing makes sense, and whether a controlled experiment has been done. Without those, correlation is just a pattern, not an explanation.

19. What is sampling bias and how does it distort conclusions?

Show answer Sampling bias occurs when the sample studied does not represent the population you want to draw conclusions about. Common forms: convenience sampling (studying whoever is easiest to reach), survivorship bias (studying only successes), and voluntary response bias (only motivated people respond). The math can be perfect, but if the sample is skewed, the conclusion is skewed.

20. What is a risk matrix and what are its two axes?

Show answer A risk matrix plots risks on a grid with likelihood (probability) on one axis and impact (severity) on the other. Each risk gets placed in a cell, making it easy to compare and prioritize. High-likelihood, high-impact risks demand immediate attention. Low-likelihood, low-impact risks may be accepted. The matrix forces explicit assessment instead of gut-feel prioritization.

21. What is a pre-mortem and how does it differ from a post-mortem?

Show answer A pre-mortem is conducted BEFORE a project starts: the team imagines it has already failed and works backward to identify what went wrong. A post-mortem examines failure after it happens. The pre-mortem's advantage is that it bypasses optimism bias — once you accept failure as a premise, people freely identify risks they would otherwise suppress to avoid seeming negative.

22. What is the Swiss cheese model of accident causation?

Show answer James Reason's Swiss cheese model says that systems have multiple layers of defense (like slices of Swiss cheese), each with holes (weaknesses). An accident happens only when holes in multiple layers align, allowing a hazard to pass through all defenses. No single failure causes a disaster — it takes a chain of failures. This model shifts focus from blaming individuals to fixing systemic weaknesses in each layer.

23. What is the CRAAP test for evaluating information sources?

Show answer CRAAP stands for Currency (when was it published/updated?), Relevance (does it relate to your question?), Authority (who is the author and what are their credentials?), Accuracy (is it supported by evidence, can it be verified?), and Purpose (why does this exist — to inform, persuade, sell, entertain?). It is a quick checklist for deciding whether a source deserves trust before you act on its claims.

24. What are filter bubbles and how do they distort perception of reality?

Show answer Filter bubbles are algorithmic environments that show you content matching your existing preferences, beliefs, and behavior. They distort perception by making your views seem more popular and universal than they are, hiding opposing evidence, and gradually narrowing the range of information you encounter. The danger is not that you see agreeable content — it is that you stop noticing what you are not seeing.

25. What are three common propaganda techniques that use emotion instead of evidence?

Show answer (1) Appeal to fear — exaggerating threats to bypass rational evaluation. (2) Bandwagon — implying "everyone agrees" to pressure conformity. (3) Loaded language — using emotionally charged words to trigger reactions before analysis ("invasion" vs "migration" vs "arrival" describe the same event differently). All three work by activating emotional responses that short-circuit careful evaluation of the actual claim.

🟡 Medium (34)

1. What is the sunk cost fallacy and how does it show up in long-running engineering projects?

Show answer The sunk cost fallacy is continuing to invest in a failing approach because of prior investment. It sounds like "We spent 8 months on this rewrite, we cannot stop now." The 8 months are gone regardless of your next decision. Countermeasure: evaluate decisions based on future costs and benefits only. Ask: if starting fresh today, would I choose this path?

2. What is steelmanning and how does it differ from strawmanning?

Show answer Steelmanning is constructing the strongest possible version of an argument you disagree with, then critiquing that version. It is the opposite of strawmanning, which attacks a weak caricature. Process: state the opposing view, find the strongest evidence for it, identify what must be true for it to be correct, then critique that strongest version. This produces better analysis and earns credibility.

3. What is the core habit of Bayesian thinking in practice, without formal math?

Show answer Start with a prior belief (how likely something is before new evidence), observe evidence, then update the belief proportionally to how surprising the evidence is. Strong priors need strong evidence to move; weak priors should update quickly. The anti-pattern is anchoring on the prior and refusing to update. When evidence conflicts with your prior, the correct response is to update, not to dismiss the evidence.
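
The habit is simplest in odds form: posterior odds = prior odds x likelihood ratio. A minimal sketch, using an illustrative likelihood ratio of 10 (the card gives no numbers):

```python
def update(prior, likelihood_ratio):
    """One Bayesian update: convert to odds, multiply by the likelihood ratio
    (P(evidence | hypothesis) / P(evidence | not hypothesis)), convert back."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Weak prior, surprising evidence: belief moves a lot.
print(round(update(0.10, 10), 3))  # 0.526
# Strong prior, same evidence: already-high belief barely moves.
print(round(update(0.90, 10), 3))  # 0.989
# Unsurprising evidence (likelihood ratio 1) should not move you at all.
print(update(0.50, 1))             # 0.5
```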

4. What are the five claim types and why does identifying them matter before evaluating an argument?

Show answer The five types are factual (requires measurement), causal (requires timeline plus mechanism plus exclusion of alternatives), predictive (requires model assumptions and sensitivity analysis), normative (requires values and tradeoff analysis), and definitional (requires agreed criteria). Most arguments go sideways because participants argue different claim types without realizing it -- one makes a causal claim while the other contests the normative implication.

5. What belongs in a pre-mortem and how does it surface risks that optimism suppresses?

Show answer A pre-mortem assumes the decision has already failed: "It is six months from now. This project failed completely. Write the failure story." List 3-5 failure modes, identify the most likely one, note early warning signs, and define mitigations you can implement now. It works because asking "what could go wrong?" invites optimism, while "tell me why it failed" gives social permission to voice doubts honestly.

6. What does "externalizes" mean in tradeoff analysis and why is it the most dangerous column?

Show answer Externalizes is the cost you push onto someone else: a different team, future engineers, users, the on-call rotation. It is the most dangerous column because externalized costs are invisible to the decision-maker but very real to the person who absorbs them. Skipping integration tests externalizes debugging cost to on-call. Deferring tech debt externalizes complexity to future engineers. Always ask: who bears the cost of this choice?

7. What is the difference between satisficing and maximizing, and when is satisficing the better strategy?

Show answer Maximizing means seeking the absolute best option by evaluating all alternatives. Satisficing means defining "good enough" criteria and choosing the first option that meets them. Satisficing is better for reversible decisions, time-constrained choices, and decisions with diminishing returns from further research. Maximizers report more decision fatigue and less satisfaction. For most engineering decisions, a "good enough" choice made quickly outperforms a "perfect" choice made too late.

8. What is the regret minimization framework and when should you apply it?

Show answer Ask: "When I am 80 years old, which choice will I regret more — doing this or not doing this?" It works best for Type 1 (irreversible) life decisions where analytical frameworks produce ambiguous results: career changes, starting something new, taking risks. It cuts through spreadsheet analysis by accessing your deeper values. It is less useful for routine operational decisions where data-driven analysis is more appropriate.

9. What is confirmation bias and what are practical techniques to counteract it in technical decisions?

Show answer Confirmation bias is the tendency to seek, interpret, and remember information that confirms your existing beliefs while ignoring contradictory evidence. In engineering: if you believe the database is the bottleneck, you will find evidence supporting that and miss network latency data. Techniques: 1) Assign a "red team" member to argue the opposite position. 2) Write down your hypothesis and what evidence would disprove it before investigating. 3) Seek disconfirming evidence first. 4) Ask "what would have to be true for the alternative to be correct?" 5) Use structured decision matrices instead of gut feeling.

10. What is anchoring bias and how does it affect engineering estimates and incident response?

Show answer Anchoring bias causes over-reliance on the first piece of information encountered. In estimation: if someone says "this should take two weeks," subsequent estimates cluster around that number regardless of actual complexity. In incidents: the first hypothesis anchors investigation, and the team tunnels on it even when evidence points elsewhere. Countermeasures: 1) Have each person estimate independently before sharing. 2) During incidents, explicitly ask "what if our current theory is completely wrong?" every 15 minutes. 3) Use reference class forecasting (how long did similar past projects actually take?) instead of intuition.

11. What is the illusion of explanatory depth and how do you detect it in yourself?

Show answer The illusion of explanatory depth is the belief that you understand something deeply when you actually have only a shallow, surface-level understanding. Test it by trying to explain the mechanism in detail: "How does a zipper work?" Most people say "I know that" until asked to explain step by step, then realize their understanding has gaps. Detect it by: (1) attempting to explain without notes, (2) asking yourself "could I teach this?", (3) writing out the causal chain. If you stall or get vague, the depth was illusory.

12. How does journaling function as a metacognitive tool, and what distinguishes useful journaling from venting?

Show answer Journaling externalizes thinking so you can inspect it — you catch assumptions, notice patterns, and evaluate your own reasoning in a way that pure introspection cannot. Useful journaling: (1) records what you tried, what happened, and what you would change, (2) identifies repeated errors or emotional patterns, (3) forces clarity by requiring sentences instead of vague feelings. Venting is cathartic but not metacognitive — it expresses emotion without analyzing the process that produced it. The distinction: useful journaling asks "what can I learn?" not just "how do I feel?"

13. What are the three phases of metacognition — planning, monitoring, and evaluation — and why do most people skip the middle one?

Show answer Planning: setting goals, choosing strategies, estimating difficulty before starting. Monitoring: checking progress during the task, noticing confusion, adjusting approach in real time. Evaluation: reviewing what worked, what failed, and what to change after the task. Most people skip monitoring because: (1) task engagement consumes all attention, (2) stopping to check feels like it slows you down, (3) people assume they will notice problems automatically (they often do not). Monitoring is the most valuable phase because it catches errors while they are still cheap to fix.

14. What is metacognitive strategy selection and why do people default to familiar methods?

Show answer Strategy selection is choosing the right approach for the task — reading vs doing, focused practice vs broad exploration, memorization vs understanding. People default to familiar methods because: (1) switching strategies has a startup cost, (2) familiar methods feel productive even when they are not, (3) evaluating strategy effectiveness requires metacognitive skill, which is the thing being developed. The fix: before starting, ask "what kind of task is this?" and "is my usual approach the best fit?" If effort is high but progress is low, the strategy — not the effort level — is usually the problem.

15. What is body doubling and why does the presence of another person help with task initiation?

Show answer Body doubling means working in the presence of another person who is also working -- either physically or virtually (video call, co-working space). It helps because social presence creates mild accountability without explicit monitoring, the other person's focused work provides behavioral cues that prime your own focus, and it reduces the isolation that makes avoidance easier. It is especially effective for people who struggle with task initiation when alone.

16. What is environmental scaffolding and how do you design a workspace for task initiation?

Show answer Environmental scaffolding means arranging your physical and digital environment so the right action is the easiest action. Checklist: Is the first action visible when you sit down (file open, note visible)? Are distractions more than 2 clicks away? Is there a physical cue that says "work starts now" (headphones, specific desk)? Is study material already loaded? Is your phone out of reach or in DND? The principle: do not rely on discipline -- make the right thing the default.

17. Why does working memory overflow cause paralysis (not just slowness), and how do you offload?

Show answer Working memory holds roughly 4 items. When you try to hold the task, the plan, the context, the next step, and meta-worry about performance, you overflow. Overflow feels like fog, paralysis, or the urge to do something easier -- not just slower processing. Offload by: writing the plan before executing (even 3 bullet points), using checklists for multi-step procedures, keeping a parking lot for stray thoughts, and single-tasking your screen to reduce visual alternatives.

18. What are transition rituals and why are state transitions between rest and deep work cognitively expensive?

Show answer Your brain operates in states: rest, ramp-up, deep work, cooldown, shutdown. Each transition costs cognitive resources. Without explicit rituals, transitions bleed and you lose time to drift. A ramp-up ritual (2 min: clear desk, read breadcrumb note, set timer, begin) and a cooldown ritual (5 min: write breadcrumb, log confusions, close files) make transitions mechanical rather than willpower-dependent. The breadcrumb note is critical -- it eliminates "where was I?" friction on restart.

19. What is a Nash equilibrium and why can it be collectively bad?

Show answer A Nash equilibrium is a state where no player can improve their outcome by unilaterally changing strategy — everyone is doing the best they can given what everyone else is doing. It can be collectively bad because stable does not mean optimal. In the prisoner's dilemma, mutual defection is a Nash equilibrium even though mutual cooperation would make everyone better off. Systems can get stuck in bad equilibria without external coordination or rule changes.

20. What is the tit-for-tat strategy and why did it win Axelrod's tournament?

Show answer Tit-for-tat starts by cooperating, then copies whatever the other player did last round. It won Robert Axelrod's iterated prisoner's dilemma tournament because it is: (1) nice — never defects first, (2) retaliatory — punishes defection immediately, (3) forgiving — returns to cooperation after punishment, (4) clear — opponents quickly learn how it behaves. It thrives in repeated interactions where reputation matters.
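
A minimal simulation makes the tournament result tangible. The payoffs below follow the standard iterated prisoner's dilemma values (R=3, T=5, S=0, P=1); the 10-round length is an arbitrary choice for illustration:

```python
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's last move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Total scores for two strategies over repeated rounds."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)  # each strategy sees the opponent's history
        move_b = strategy_b(hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): locked-in cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then retaliates
```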

21. What is a Schelling focal point and why does it matter for coordination?

Show answer A Schelling focal point is a solution people converge on without communication because it feels natural, obvious, or culturally salient. Example: if two people must meet in New York without coordinating, many choose Grand Central at noon. Focal points matter because many coordination problems have no objectively correct answer — the best answer is the one others are most likely to pick. Conventions, defaults, and norms function as focal points.

22. What is a commitment device and how does it change strategic behavior?

Show answer A commitment device is a deliberate restriction on your own future choices to make a promise or threat credible. Examples: burning bridges (so retreat is impossible), posting a public deadline (reputation at stake), automatic escalation policies. It works because rational opponents recognize you cannot back down, which changes their calculations. Without commitment devices, threats and promises can be dismissed as cheap talk.

23. What does a p-value actually mean, and what is the most common misconception about it?

Show answer A p-value is the probability of seeing data at least as extreme as what was observed, assuming the null hypothesis is true. It is NOT the probability that the hypothesis is correct. Common misconception: "p = 0.03 means there is a 3% chance the result is due to chance." Wrong. It means if nothing real were happening, you would see data this extreme 3% of the time. A small p-value does not tell you the effect is large or important.

24. What is Simpson's paradox and why is it dangerous?

Show answer Simpson's paradox occurs when a trend that appears in several groups reverses or disappears when the groups are combined. It happens because of unequal group sizes or confounding variables. Example: a treatment can appear better in every subgroup but worse overall if it is disproportionately used in the harder cases. It is dangerous because aggregated data can tell the opposite story from disaggregated data, and both are mathematically correct.
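
The reversal is easy to reproduce. The subgroup counts below are invented for illustration: treatment A is given mostly to hard cases, so pooling hides that it wins inside every subgroup:

```python
cases = {
    # (treatment, severity): (successes, patients)
    ("A", "hard"): (30, 100),   # 30% -- beats B on hard cases
    ("B", "hard"): (2, 10),     # 20%
    ("A", "easy"): (9, 10),     # 90% -- beats B on easy cases
    ("B", "easy"): (80, 100),   # 80%
}

def rate(treatment, severity=None):
    """Success rate for a treatment, optionally restricted to one severity."""
    pairs = [v for (t, s), v in cases.items()
             if t == treatment and (severity is None or s == severity)]
    wins = sum(w for w, _ in pairs)
    total = sum(n for _, n in pairs)
    return wins / total

# A wins inside every subgroup...
assert rate("A", "hard") > rate("B", "hard")
assert rate("A", "easy") > rate("B", "easy")
# ...yet B wins when the groups are pooled, because A got the hard cases.
assert rate("B") > rate("A")
print(round(rate("A"), 3), round(rate("B"), 3))  # 0.355 0.745
```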

25. What is the difference between a confidence interval and a prediction interval?

Show answer A confidence interval estimates where the true population parameter (like a mean) likely falls. A prediction interval estimates where a single future observation might fall. Prediction intervals are always wider because they include both the uncertainty about the population parameter AND the natural variability of individual data points. Confusing the two leads to overconfident predictions about individual cases.

26. Why is statistical significance not the same as practical importance, and what concept bridges the gap?

Show answer Statistical significance only means the result is unlikely under the null hypothesis — it says nothing about the size or importance of the effect. A massive sample can make a trivially small difference statistically significant. Effect size bridges the gap: it measures how large the difference actually is (e.g., Cohen's d, odds ratio). Always ask: significant AND large enough to matter?
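
A quick sketch with hypothetical sample statistics: with a huge sample, a 0.2-point difference on a scale with a spread of 15 would easily reach significance, yet Cohen's d stays negligible:

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Two groups of 100,000 each: means 100.2 vs 100.0, both with sd 15.
print(round(cohens_d(100.2, 100.0, 15, 15, 100_000, 100_000), 3))  # 0.013 -- trivial
```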

27. What is a black swan event and why do traditional risk methods handle it poorly?

Show answer A black swan (Nassim Taleb's term) is an event that is rare, has extreme impact, and is rationalized as predictable only in hindsight. Traditional risk methods handle it poorly because they rely on historical data and normal distributions, which underestimate tail events. Risk matrices cap impact at "high" when the real question is whether an event could be catastrophic. Defense: build resilience and optionality rather than trying to predict specific black swans.

28. What is normalization of deviance and why is it dangerous?

Show answer Normalization of deviance (Diane Vaughan's term from the Challenger disaster) is the gradual process by which unacceptable practices become acceptable because "nothing bad happened last time." Each small violation without consequence resets the baseline of what is considered normal. It is dangerous because the risk accumulates invisibly — the system is drifting toward failure while everyone perceives stability. Detection requires actively comparing current practice to the written standard, not just to recent habit.

29. How do you calculate expected value and why is it useful for risk decisions?

Show answer Expected value = the sum of (probability x impact) for each possible outcome. Example: a project has a 70% chance of gaining $100K and a 30% chance of losing $50K. EV = (0.7 x 100K) + (0.3 x -50K) = $70K - $15K = $55K. It is useful because it converts uncertain outcomes into a single comparable number, preventing both undue risk aversion on positive-EV bets and reckless gambling on negative-EV ones. Limitation: it ignores variance and ruin risk.
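
The card's worked example in code form:

```python
def expected_value(outcomes):
    """outcomes: (probability, payoff) pairs whose probabilities sum to 1."""
    return sum(p * payoff for p, payoff in outcomes)

# 70% chance of gaining $100K, 30% chance of losing $50K:
ev = expected_value([(0.7, 100_000), (0.3, -50_000)])
print(round(ev))  # 55000
```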

30. What is the difference between risk appetite and risk tolerance?

Show answer Risk appetite is the broad level of risk an organization is willing to accept in pursuit of its objectives — a strategic, high-level stance (e.g., "we are aggressive on market risk"). Risk tolerance is the specific, measurable boundary around that appetite (e.g., "we will not risk more than $2M on any single initiative"). Appetite is direction; tolerance is the guardrail. Without explicit tolerance, appetite is just a vague platitude.

31. What is astroturfing and how can you detect it?

Show answer Astroturfing is the practice of disguising an organized campaign as spontaneous grassroots activity. It manufactures the appearance of widespread public support or opposition. Detection clues: identical talking points across many accounts, new accounts with high activity, coordinated timing of posts, suspiciously polished messaging from "ordinary citizens," and financial or organizational connections between seemingly independent voices. The name is a pun on AstroTurf — fake grass, fake grassroots.

32. What should you know about deepfakes for practical media literacy?

Show answer Deepfakes are AI-generated or AI-manipulated media (video, audio, images) that can convincingly depict people saying or doing things they never did. Practical awareness: (1) seeing is no longer believing — video and audio evidence now requires verification, (2) cheapfakes (simple edits, context manipulation) are far more common than sophisticated deepfakes, (3) check provenance before sharing dramatic media, (4) look for institutional confirmation from multiple independent sources, not just the media itself.

33. What is the difference between an ad hominem fallacy and a genetic fallacy?

Show answer Ad hominem attacks the person making the argument instead of the argument itself ("You failed math, so your budget analysis is wrong"). Genetic fallacy dismisses a claim based on its origin rather than its content ("That idea came from a competitor, so ignore it"). Both are fallacies because the truth of a claim is independent of who says it or where it comes from. However, source credibility can be legitimately relevant for unverifiable claims — the fallacy is in using source as the ONLY rebuttal.

34. What is the Gish gallop technique and how do you counter it?

Show answer The Gish gallop (named after creationist Duane Gish) is flooding an opponent with many weak arguments faster than they can be individually rebutted. Each claim takes seconds to make but minutes to refute. The audience perceives unanswered points as conceded. Counters: (1) name the technique explicitly, (2) pick the strongest two or three claims and dismantle them thoroughly, (3) point out that quantity of claims is not quality of evidence, (4) refuse to chase every point and control the frame.

🔴 Hard (25)

1. What is base rate neglect and why does it matter for monitoring and alerting systems?

Show answer Base rate neglect is ignoring the overall probability of a condition when evaluating a test result. An alert that catches 95% of real incidents and fires spuriously only 5% of the time sounds great -- but if the condition's base rate is 0.1%, the overwhelming majority of alerts are still false positives. Without considering the base rate, you either drown in false alerts or dismiss real ones. Always ask: how common is this condition in the population being tested?
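The arithmetic behind this card is one application of Bayes' theorem. A minimal sketch, assuming illustrative numbers (0.1% base rate, 95% sensitivity, 5% false-positive rate):

```python
def p_real_given_alert(base_rate, sensitivity, false_positive_rate):
    """Bayes' theorem: probability an alert reflects a real incident."""
    true_alerts = base_rate * sensitivity
    false_alerts = (1 - base_rate) * false_positive_rate
    return true_alerts / (true_alerts + false_alerts)

# 95% sensitivity sounds great, but with a 0.1% base rate...
posterior = p_real_given_alert(base_rate=0.001, sensitivity=0.95,
                               false_positive_rate=0.05)
print(f"{posterior:.1%} of alerts are real")  # 1.9% -- about 98% are false positives
```

The result is counterintuitive enough that doing the division by hand once is worth it: the rarer the condition, the more the false-positive rate dominates.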

2. Why does the pre-mortem reframe ("it failed -- tell me why") produce better risk analysis than asking "what could go wrong?"?

Show answer "What could go wrong?" triggers defensive minimization -- people downplay risks to appear confident. The pre-mortem reframe ("It is six months from now and this project failed completely -- write the story of why") gives social permission to voice doubts because it assumes failure as a given. This surfaces risks that optimism suppresses, and participants generate more specific and honest failure modes.

3. How does groupthink suppress dissent without explicit censorship, and what are three countermeasures?

Show answer Groupthink operates through social pressure: the loudest or highest-status person's opinion becomes the group's opinion, and dissent is suppressed not by force but by the desire to avoid social friction. Three countermeasures: (1) collect opinions independently before group discussion, (2) assign a devil's advocate role that rotates, and (3) explicitly reward disagreement so people know dissent is valued, not punished.

4. What is outcome bias and how does it distort your ability to learn from decisions?

Show answer Outcome bias is judging a decision's quality by its outcome rather than by the reasoning at the time. A good decision with available information can have a bad outcome (unlucky), and a poor decision can succeed (lucky). If you judge only by outcomes, you learn to be lucky rather than thoughtful. To counter it: use a decision journal that records reasoning before outcomes are known, then review whether the process was sound regardless of what happened.

5. What are second-order effects and why do most ethical surprises in engineering hide there?

Show answer First-order effects are what happens directly; second-order effects are what happens because of what happened. Adding detailed logging (first order: better debugging; second order: data retention liability and potential surveillance). Automating a manual process (first order: efficiency; second order: knowledge holder leaves and edge case docs are never written). Surface them by asking: "And then what?" for each affected group, and "What happens when this scales 10x?"

6. What is the minimum viable content of a decision record, and why does undocumented decision-making become folklore?

Show answer Minimum: date, decision, context, options considered (including "do nothing"), rationale, tradeoffs accepted, risks acknowledged, review date, and decision-makers. Without documentation, decisions become folklore that mutates over time. Six months later, nobody remembers why PostgreSQL was chosen over DynamoDB, and the new architect proposes the same migration that was already rejected. A 10-minute decision record saves hours of re-litigation.
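The field list above can be pinned down as a tiny data structure so nothing gets skipped. A minimal sketch -- the field names mirror the card, and all example values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    """Minimum viable decision record: one field per item in the card."""
    date: str
    decision: str
    context: str
    options_considered: list[str]   # should always include "do nothing"
    rationale: str
    tradeoffs_accepted: list[str]
    risks_acknowledged: list[str]
    review_date: str
    decision_makers: list[str]

record = DecisionRecord(
    date="2024-03-01",
    decision="Use PostgreSQL for the orders service",
    context="Relational integrity needed; team already operates Postgres",
    options_considered=["PostgreSQL", "DynamoDB", "do nothing"],
    rationale="Strong consistency requirements outweigh managed-service convenience",
    tradeoffs_accepted=["manual sharding if we outgrow one node"],
    risks_acknowledged=["operational burden of self-hosting"],
    review_date="2024-09-01",
    decision_makers=["backend team leads"],
)
```

Whether this lives in a dataclass, a wiki template, or a markdown file matters far less than the fields being filled in before the outcome is known.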

7. What is groupthink, what conditions produce it, and how do you prevent it in engineering teams?

Show answer Groupthink occurs when a cohesive group prioritizes consensus over critical evaluation, leading to poor decisions. Conditions: strong leader who states opinion first, homogeneous group, pressure to agree, isolation from outside perspectives. Prevention: 1) Leader speaks last. 2) Assign a devil's advocate role that rotates. 3) Anonymous input before discussion (written votes, surveys). 4) Invite outside perspectives. 5) Normalize dissent — explicitly ask "who disagrees and why?" Historical examples: Challenger disaster, Bay of Pigs. In engineering: deploying without load testing because everyone "felt good about it."

8. Why is the confusion between familiarity and mastery one of the most dangerous metacognitive failures?

Show answer Familiarity means you recognize something when you see it. Mastery means you can produce, apply, or explain it without prompts. The brain confuses these because recognition is easy and feels like knowing. You re-read notes, the concepts look familiar, and you feel prepared — then fail the test because you never practiced retrieval. This is dangerous because: (1) it creates false confidence that stops further study, (2) it is self-reinforcing (the more you re-read, the more familiar it feels), (3) it is invisible until tested under real conditions.

9. How does sunk cost interact with metacognition, and what signal tells you to abandon a failing approach?

Show answer Sunk cost makes you continue a failing approach because of the effort already invested — "I have spent three hours on this method, so switching would waste that time." This is a metacognitive failure because the time is already gone regardless. The signal to switch: high effort with low progress AND no new information being generated. Ask: "Am I learning something from this difficulty or just repeating the same failure?" If the failure is identical each time, the approach is wrong, not the effort level. Good metacognition treats past time as irrelevant to future strategy choices.

10. What does it mean to treat metacognition as "observability for the mind" and what are the practical monitoring checkpoints?

Show answer Just as system observability uses metrics, logs, and traces to understand what software is doing, metacognitive observability means instrumenting your own thinking process. Practical monitoring checkpoints: (1) before a task — "What am I trying to achieve and how will I know it is working?" (2) during a task (set a timer) — "Am I making progress? Should I change approach?" (3) at confusion — "What specifically am I confused about?" (4) after a task — "What assumption was wrong? What worked? What would I change?" These checkpoints convert invisible thinking failures into visible, fixable patterns.

11. What is activation energy in the context of task initiation, and how does a friction audit reduce it?

Show answer Activation energy is the initial effort required to start a task -- analogous to the energy needed to start a chemical reaction. Most tasks require more energy to start than to sustain. A friction audit lists every step between "I decide to start" and "I am doing productive work," then eliminates, automates, or batches as many as possible. Common friction: setup steps, decision friction ("which topic?"), context friction ("where did I leave off?"), and emotional friction ("this will be hard").

12. What is energy matching and why is attempting deep learning when depleted a scheduling error, not a discipline failure?

Show answer Energy matching means assigning tasks to time slots based on your cognitive capacity at that time. Peak energy (first 2-3 hours): new learning, hard problems. Medium energy: practice, review. Low energy: maintenance, organization. Depleted: minimum viable session only. Trying deep learning when exhausted is not a willpower failure -- it is bad resource scheduling, like running a CPU-intensive workload on an overloaded node. Track your energy patterns for one week to find your actual peak.

13. Why does the minimum viable session (5 minutes) matter more than the perfect session for long-term progress?

Show answer The biggest threat to learning is not a bad session but a skipped session that turns into a skipped week. The minimum viable session (5 minutes: one flashcard review, one page, one bullet point) maintains the streak of contact with the material, keeping neural pathways warm and preventing the avoidance spiral from building momentum. The standard is low: if you touched the material, you win. A 5-minute session protects against the all-or-nothing thinking that makes people skip entirely when they cannot do a full session.

14. What is mechanism design and how does it differ from standard game theory?

Show answer Standard game theory analyzes existing games: given the rules and incentives, what will players do? Mechanism design works in reverse: given the desired outcome, what rules and incentives should you create? It is "reverse game theory" — designing the game so that self-interested players naturally produce the desired collective result. Examples: auction design, incentive structures, voting systems, and performance review processes.
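A concrete instance of designing the game: in a sealed-bid second-price (Vickrey) auction, the rules make bidding your true value weakly dominant. The brute-force check below is a sketch over a few hand-picked scenarios, not a proof:

```python
def utility(my_bid, my_value, other_bids):
    """Second-price auction: highest bid wins, winner pays the second-highest bid."""
    if my_bid > max(other_bids):
        return my_value - max(other_bids)  # pay the runner-up's bid, not your own
    return 0.0

# For a fixed true value, no alternative bid ever beats bidding truthfully,
# whatever the competing bids happen to be.
my_value = 70.0
for others in [[10.0, 50.0], [60.0, 80.0], [90.0, 95.0]]:
    truthful = utility(my_value, my_value, others)
    for alt_bid in range(0, 151, 5):
        assert utility(float(alt_bid), my_value, others) <= truthful
print("truthful bidding is weakly dominant in every scenario checked")
```

The design insight is that decoupling what you pay from what you bid removes the incentive to shade your bid -- the rules, not the players' goodwill, produce honest behavior.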

15. What is information asymmetry and what problems does it create in strategic interactions?

Show answer Information asymmetry exists when one party knows something the other does not. It creates two major problems: adverse selection (before a deal — the uninformed party gets stuck with bad options, like buying a used car) and moral hazard (after a deal — the informed party takes risks because the other bears the cost, like insured drivers being less careful). Solutions include signaling, screening, reputation systems, and transparency requirements.

16. Why do repeated games produce different outcomes than one-shot games, and what is the shadow of the future?

Show answer In one-shot games, defection is often rational because there is no future consequence. In repeated games, the "shadow of the future" — the prospect of future interactions — makes cooperation viable because today's defection invites tomorrow's punishment. The longer and more certain the future relationship, the stronger the incentive to cooperate now. This is why reputation, long-term contracts, and small communities sustain cooperation that anonymous one-shot interactions cannot.
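The shadow of the future can be made concrete with an iterated prisoner's dilemma. The payoff numbers (3/1/5/0) are the conventional ones, and tit-for-tat stands in for a reputation-sensitive player:

```python
def play(strategy_a, strategy_b, rounds):
    """Iterated prisoner's dilemma: 'C' cooperate, 'D' defect."""
    payoff = {("C", "C"): (3, 3), ("D", "D"): (1, 1),
              ("D", "C"): (5, 0), ("C", "D"): (0, 5)}
    score_a = score_b = 0
    last_a = last_b = "C"  # each strategy sees the opponent's previous move
    for _ in range(rounds):
        move_a, move_b = strategy_a(last_b), strategy_b(last_a)
        pa, pb = payoff[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        last_a, last_b = move_a, move_b
    return score_a, score_b

tit_for_tat = lambda opp_last: opp_last  # cooperate first, then mirror the opponent
always_defect = lambda opp_last: "D"     # the "rational" one-shot strategy

print(play(tit_for_tat, tit_for_tat, 100))      # (300, 300)
print(play(always_defect, always_defect, 100))  # (100, 100)
```

One round of this game favors defection; a hundred rounds favor the cooperators by a factor of three, which is the shadow of the future in numbers.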

17. What is the core difference between Bayesian and frequentist approaches to probability?

Show answer Frequentist probability is about long-run frequencies of repeatable events — a coin's probability is defined by what happens over many flips. Bayesian probability represents degrees of belief updated by evidence — you start with a prior (what you believed before), encounter data, and compute a posterior (updated belief) using Bayes' theorem. Frequentists ask "how likely is this data given the hypothesis?" Bayesians ask "how likely is the hypothesis given this data?" In practice, Bayesian reasoning is closer to how real decisions work because you almost always have prior information.
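The Bayesian update described here is one line of arithmetic. A sketch with illustrative numbers (the 30%/90%/20% figures are invented for the example):

```python
def posterior(prior, likelihood_if_true, likelihood_if_false):
    """Bayes' theorem: P(H | data) from a prior and two likelihoods."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Hypothesis: "the new deploy caused the latency spike."
# Prior belief: 30%. The spike pattern is very likely (90%) if the deploy
# is to blame, and fairly unlikely (20%) otherwise.
belief = posterior(prior=0.30, likelihood_if_true=0.90, likelihood_if_false=0.20)
print(f"updated belief: {belief:.0%}")  # 0.27 / (0.27 + 0.14) ~= 66%
```

Yesterday's posterior becomes today's prior, which is why Bayesian reasoning maps so naturally onto iterative debugging and incident response.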

18. What is regression to the mean, and why does it create false narratives about interventions?

Show answer Regression to the mean is the statistical tendency for extreme measurements to be followed by less extreme ones, simply because extreme values include a large random component. It creates false narratives because people intervene after extreme results (e.g., punishing after a terrible performance, applying a treatment after peak symptoms) and then credit the intervention when things naturally return toward average. The improvement would have happened anyway.
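The effect falls straight out of a simulation with no intervention anywhere in the loop. The skill-plus-noise split below is the standard toy model, not data from any real study:

```python
import random

random.seed(0)
# Each measurement = stable skill + random noise.
skills = [random.gauss(0, 1) for _ in range(10_000)]
round1 = [s + random.gauss(0, 1) for s in skills]
round2 = [s + random.gauss(0, 1) for s in skills]

# Take the top 5% from round 1 -- the "extreme" performers.
cutoff = sorted(round1, reverse=True)[len(round1) // 20]
top = [i for i, score in enumerate(round1) if score > cutoff]

mean1 = sum(round1[i] for i in top) / len(top)
mean2 = sum(round2[i] for i in top) / len(top)
print(f"round 1 mean of top group: {mean1:.2f}")
print(f"round 2 mean of same group: {mean2:.2f}")  # lower, with no intervention at all
```

Anyone who "treated" this group between rounds would see the drop toward average and credit the treatment -- which is exactly the false narrative the card describes.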

19. What is denominator blindness and how does it distort risk perception?

Show answer Denominator blindness is focusing on the numerator (the dramatic count) while ignoring the denominator (the total population at risk). "500 people died from X" sounds terrifying, but if 200 million were exposed, the risk is 0.00025%. It distorts risk perception in headlines, medical scares, and policy debates. The fix: always ask "out of how many?" and convert counts to rates before comparing risks.
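Converting the card's counts to rates takes one division; the 500-in-200-million figures come straight from the card:

```python
def rate_percent(count, population):
    """Convert a raw count into a percentage of the population at risk."""
    return 100 * count / population

def per_100k(count, population):
    """Rates per 100,000 make small risks comparable across causes."""
    return 100_000 * count / population

deaths, exposed = 500, 200_000_000
print(f"{rate_percent(deaths, exposed):.5f}% risk")    # 0.00025%
print(f"{per_100k(deaths, exposed):.2f} per 100,000")  # 0.25
```

"0.25 per 100,000" invites comparison against other causes on the same scale, which is precisely what a bare count of 500 prevents.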

20. How does defense in depth work and why is a single control insufficient?

Show answer Defense in depth uses multiple independent layers of protection so that if one fails, others still block the hazard. Layers include prevention (stop it from happening), detection (notice it quickly), mitigation (reduce impact), and recovery (restore normal state). A single control is insufficient because every control has failure modes — human error, edge cases, maintenance gaps. The question is not whether a control can fail, but what catches the failure when it does.

21. Why are near misses the most valuable and most wasted signal in risk management?

Show answer Near misses reveal the same systemic weaknesses that cause real accidents, but without the damage — they are free lessons. They are wasted because: (1) success bias — "nothing bad happened" discourages investigation, (2) reporting friction — near misses are underreported if blame culture exists, (3) they lack the emotional urgency of actual failures. Organizations that systematically collect and analyze near misses find and fix vulnerabilities before they produce real harm.

22. What is optimism bias in risk management and what structural defenses exist against it?

Show answer Optimism bias is the systematic tendency to underestimate the probability and severity of negative outcomes while overestimating positive ones. It is not laziness — it is a hardwired cognitive pattern. Structural defenses: (1) pre-mortems force failure imagination, (2) reference class forecasting uses historical data from similar projects instead of inside-view optimism, (3) red teams adversarially probe plans, (4) mandatory buffer ratios (e.g., add 30% to time/cost estimates). No amount of individual awareness reliably beats optimism bias without structural support.

23. What is FOMO engineering and how is it used to manipulate decisions?

Show answer FOMO (Fear Of Missing Out) engineering is deliberately creating urgency, scarcity, or exclusivity to pressure decisions before rational evaluation. Techniques: countdown timers, "only 3 left" notifications, limited-time offers, social proof showing others buying, and "everyone is talking about this." It works by activating loss aversion (losing hurts more than gaining feels good) and social anxiety. Defense: any decision that must be made RIGHT NOW is usually a decision someone does not want you to think about.

24. How does framing make objectively true information function as propaganda?

Show answer Framing controls interpretation by selecting which true facts to emphasize, what context to include or omit, what comparison to use, and what emotional register to set. A company laying off 10% of workers can be framed as "1,000 families devastated" or "efficient restructuring saves 9,000 jobs" — both true, radically different conclusions. Propaganda often works not by lying but by selecting, sequencing, and framing true facts to guide the audience toward a predetermined interpretation. The omitted context is often the real weapon.

25. What is a verification ladder and when should you climb it?

Show answer A verification ladder is a hierarchy of evidence standards matched to claim severity. Low-stakes claims (weather forecast) need minimal verification. Medium-stakes claims (a policy change at work) warrant checking the primary source. High-stakes claims (health decisions, major financial moves, sharing content that could harm someone) require multiple independent sources, checking the original data, and evaluating incentives. Climb the ladder when: the claim triggers strong emotion, asks you to act urgently, or could cause harm if wrong.