Quiz: Feature Flags¶
6 questions
L1 (4 questions)¶
1. What is the difference between a release flag, an experiment flag, and an operational flag?
Show answer
Release flags control feature rollout (enable feature for 10% of users, then ramp up). Experiment flags support A/B testing (variant A vs B, measure conversion). Operational flags are kill switches for runtime behavior (disable a feature under load, toggle between backends). Release flags are temporary (remove after full rollout). Experiment flags are temporary (remove after the experiment concludes). Operational flags may be long-lived. Failing to categorize leads to tech debt — stale flags accumulate.
2. How do you implement a feature flag system that does not add latency to every request?
Show answer
Use local evaluation with periodic sync: the SDK fetches flag configurations from the server at startup and polls for updates every 30-60 seconds (or uses server-sent events / streaming for near-real-time). Flags are evaluated locally in-process against cached rules — no network call per evaluation. Fall back to hardcoded defaults if the flag service is unreachable. This is how LaunchDarkly, Unleash, and Flagsmith work. Never make a synchronous API call per flag check in the request path.
3. What is flag debt and how do you prevent it from accumulating?
Show answer
Flag debt is the accumulation of stale feature flags that are no longer needed — they clutter code with conditional branches, confuse new developers, and create untested code paths. Prevention:
1. Set an expiration date when creating every flag.
2. Add automated lint rules that detect stale flags (e.g., a flag created > 90 days ago and still referenced in code).
3. Track flags in a central registry with ownership.
4. Include flag cleanup in the definition of done for feature work.
5. Run periodic flag audits (quarterly).
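A minimal sketch of the automated staleness check from item 2, assuming a central registry that records each flag's creation date and owner (the registry layout, flag names, and `operational` marker here are illustrative, not from any particular flag tool):

```python
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=90)  # threshold from the lint rule above

# Hypothetical central registry: flag name -> metadata with ownership.
REGISTRY = {
    "new-checkout-flow": {"created": datetime(2024, 1, 10), "owner": "payments"},
    "db-kill-switch": {"created": datetime(2023, 6, 1), "owner": "platform",
                       "operational": True},  # kill switch: long-lived by design
}

def stale_flags(registry, now=None):
    """Return flags past the expiry threshold, skipping operational flags."""
    now = now or datetime.now()
    return [
        name for name, meta in registry.items()
        if not meta.get("operational") and now - meta["created"] > STALE_AFTER
    ]

print(stale_flags(REGISTRY, now=datetime(2024, 6, 1)))  # ['new-checkout-flow']
```

Note that the check deliberately exempts operational flags: per question 1, kill switches may be long-lived, so only release and experiment flags should trip the audit.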
4. What is percentage-based rollout vs user-segment targeting, and when do you use each?
Show answer
Percentage-based rollout enables a feature for a random N% of users — good for gradual ramp-up to catch performance issues. User-segment targeting enables it for specific groups (internal employees, beta testers, users in a region) — good for targeted testing and compliance. Use percentage for general availability rollouts. Use segments for dogfooding (internal first), geographic compliance (GDPR features for EU users), and A/B experiments (specific cohorts). Most flag systems support both and they can be combined.
L2 (2 questions)¶
1. A feature flag rollout to 5% of users caused a 3x increase in database queries. How do you investigate and remediate without a full rollback?
Show answer
1. Confirm correlation: check if the affected database load aligns with the flagged user segment.
2. Reduce exposure: lower the rollout to 1% or target only internal users.
3. Profile: enable query logging or APM tracing for flagged users to identify the new code path's query pattern.
4. Fix: add missing indexes, batch queries, or add caching in the new code path.
5. Validate: re-enable at 5% with monitoring, then ramp. The flag gave you a safe way to isolate and debug without rolling back the entire deployment.
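Step 2 (lowering from 5% to 1%) only narrows exposure cleanly if bucketing is deterministic, so the 1% cohort is a subset of the original 5%. A sketch of that hash-bucketing idea, assuming the common approach of hashing user ID plus flag key (function and flag names are illustrative):

```python
import hashlib

def in_rollout(user_id: str, flag: str, percent: float) -> bool:
    """Deterministically map a user to a bucket in [0, 100).

    Hashing user+flag means the same user always gets the same bucket,
    so lowering the percentage shrinks the cohort to a strict subset
    (and different flags get independent cohorts)."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF * 100
    return bucket < percent

# Every user enabled at 1% was also enabled at 5%.
users = [f"user-{i}" for i in range(10_000)]
at_5 = {u for u in users if in_rollout(u, "new-query-path", 5.0)}
at_1 = {u for u in users if in_rollout(u, "new-query-path", 1.0)}
assert at_1 <= at_5
```

This subset property is what makes the investigation above reproducible: the users still flagged at 1% are the same ones whose query patterns you profiled at 5%.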
2. How do feature flags interact with database migrations, and what can go wrong?