Personal Dev Critical Thinking
10 cards — 🟢 3 easy | 🟡 4 medium | 🔴 3 hard
🟢 Easy (3)
1. What is confirmation bias and why is it especially dangerous during incident response?
Confirmation bias is the tendency to search for and interpret evidence that supports your existing hypothesis while filtering out contradictory evidence. During incident response, it causes you to latch onto the first plausible cause and interpret ambiguous signals as confirmation, delaying root cause identification. Countermeasure: before investigating, write down what evidence would disprove your hypothesis and search for that first.

2. What is anchoring bias and how does it distort engineering estimates?
Anchoring is when the first number you encounter disproportionately influences your estimate. If someone says a migration will take 6 months, all subsequent estimates gravitate toward that number regardless of its basis. Countermeasure: generate your estimate independently before hearing others, and use reference class forecasting -- how long did similar projects actually take?

3. What is the availability heuristic and how does it skew engineering priorities?
The availability heuristic is overweighting information that is easy to recall -- recent events, dramatic failures, or personal experiences. A spectacular outage that made Hacker News gets more mental weight than a slow memory leak costing 10x more over a year. Countermeasure: check the data rather than your memory, and ask whether you are giving something weight because it is important or because it is vivid.

🟡 Medium (4)
1. What is the sunk cost fallacy and how does it show up in long-running engineering projects?
The sunk cost fallacy is continuing to invest in a failing approach because of prior investment. It sounds like "We spent 8 months on this rewrite, we cannot stop now." The 8 months are gone regardless of your next decision. Countermeasure: evaluate decisions based on future costs and benefits only. Ask: if starting fresh today, would I choose this path?

2. What is steelmanning and how does it differ from strawmanning?
Steelmanning is constructing the strongest possible version of an argument you disagree with, then critiquing that version. It is the opposite of strawmanning, which attacks a weak caricature. Process: state the opposing view, find the strongest evidence for it, identify what must be true for it to be correct, then critique that strongest version. This produces better analysis and earns credibility.

3. What is the core habit of Bayesian thinking in practice, without formal math?
Start with a prior belief (how likely something is before new evidence), observe evidence, then update the belief in proportion to how surprising the evidence is. Strong priors need strong evidence to move; weak priors should update quickly. The anti-pattern is anchoring on the prior and refusing to update. When evidence conflicts with your prior, the correct response is to update, not to dismiss the evidence.

4. What are the five claim types and why does identifying them matter before evaluating an argument?
Show answer
The five types are factual (requires measurement), causal (requires timeline plus mechanism plus exclusion of alternatives), predictive (requires model assumptions and sensitivity analysis), normative (requires values and tradeoff analysis), and definitional (requires agreed criteria). Most arguments go sideways because participants argue different claim types without realizing it -- one makes a causal claim while the other contests the normative implication.

🔴 Hard (3)
1. What is base rate neglect and why does it matter for monitoring and alerting systems?
Base rate neglect is ignoring the overall probability of a condition when evaluating a test result. A monitoring alert with a 95% true-positive rate sounds great until the base rate of the condition is 0.1%, meaning most alerts are still false positives. Without considering the base rate, you either drown in false alerts or dismiss real ones. Always ask: how common is this condition in the population being tested?

2. Why does the pre-mortem reframe ("it failed -- tell me why") produce better risk analysis than asking "what could go wrong?"
"What could go wrong?" triggers defensive minimization -- people downplay risks to appear confident. The pre-mortem reframe ("It is six months from now and this project failed completely -- write the story of why") gives social permission to voice doubts because it assumes failure as a given. This surfaces risks that optimism suppresses, and participants generate more specific and honest failure modes.

3. How does groupthink suppress dissent without explicit censorship, and what are three countermeasures?
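
The arithmetic behind the base rate neglect and Bayesian updating cards can be checked directly. A minimal sketch in Python; the 5% false-positive rate and the prior values are illustrative assumptions, not figures from the deck:

```python
def posterior(prior, tpr, fpr):
    """P(condition | positive signal) via Bayes' rule.

    prior: base rate of the condition in the tested population
    tpr:   true-positive rate of the test (sensitivity)
    fpr:   false-positive rate of the test (assumed here, not given in the card)
    """
    p_signal = tpr * prior + fpr * (1 - prior)  # total probability of a positive
    return tpr * prior / p_signal

# Base rate card: a "95% accurate" alert on a 0.1% base rate condition.
# Even with only a 5% false-positive rate, an alert is rarely a real incident.
print(f"P(real incident | alert) = {posterior(0.001, 0.95, 0.05):.1%}")  # ~1.9%

# Bayesian habit card: the same evidence moves a weak prior far more than a
# strong prior against -- updating is proportional to how surprising it is.
print(f"weak prior 50%   -> {posterior(0.50, 0.95, 0.05):.1%}")  # 95.0%
print(f"strong prior 1%  -> {posterior(0.01, 0.95, 0.05):.1%}")  # ~16.1%
```

With these assumed numbers, roughly 98 of every 100 alerts are false positives, which is the card's point: the test's accuracy alone tells you almost nothing until you know the base rate.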