Binary and Floats

Understanding binary representation and floating-point arithmetic is essential for debugging data corruption, precision bugs, and encoding issues that surface in log analysis, config parsing, and systems programming. This topic demystifies how computers actually store and manipulate numbers.

Why this matters

Floating-point surprises show up everywhere: currency calculations that lose pennies, JSON round-trip precision loss, timestamps that drift, and config values that silently truncate. Knowing the underlying representation turns these from mysterious bugs into predictable behavior.
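A minimal sketch of the surprises described above (Python is assumed here purely for illustration; the same behavior appears in any language using IEEE 754 doubles):

```python
# 0.1, 0.2, and 0.3 are all binary approximations of decimal values,
# so accumulated rounding error makes direct equality fail.
print(0.1 + 0.2 == 0.3)        # False
print(f"{0.1 + 0.2:.17f}")     # 0.30000000000000004

# Currency drift: summing ten 0.1-dollar charges misses a dollar.
total = sum([0.1] * 10)
print(total == 1.0)            # False
```

The error is tiny per operation but systematic, which is why currency code typically uses integer cents or a decimal type instead of binary floats.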

Key concepts covered

  • Binary encoding: two's complement, sign extension, bitwise operations
  • IEEE 754 floating-point: representation, special values (NaN, Inf, denormals)
  • Precision traps: why decimal fractions cannot be represented exactly in binary
  • Endianness: big-endian vs little-endian and when it matters (network protocols, file formats)
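The representation and endianness concepts above can be made concrete with Python's standard `struct` module (a sketch, not a prescribed tool), which exposes the raw IEEE 754 bits of a double and its byte order:

```python
import struct

# Reinterpret the 64-bit double 1.5 as an unsigned integer to see its bits:
# 1 sign bit | 11 exponent bits (biased by 1023) | 52 fraction bits.
bits, = struct.unpack(">Q", struct.pack(">d", 1.5))
print(f"{bits:064b}")

sign     = bits >> 63
exponent = (bits >> 52) & 0x7FF
fraction = bits & ((1 << 52) - 1)
print(sign, exponent - 1023, hex(fraction))  # 0 0 0x8000000000000

# Endianness: the same value serialized big-endian (network order)
# vs. little-endian is simply the byte sequence reversed.
print(struct.pack(">d", 1.5).hex())  # 3ff8000000000000
print(struct.pack("<d", 1.5).hex())  # 000000000000f83f
```

1.5 is 1.1 in binary, i.e. 1.5 × 2^0, so the unbiased exponent is 0 and only the top fraction bit is set.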

Contents

Start with the primer for foundational concepts, then explore misconceptions and pitfalls before moving to hands-on techniques.

#  File                        What it covers
1  Primer                      Core concepts — binary encoding, IEEE 754, two's complement, and why 0.1 + 0.2 != 0.3
2  Anti-Primer                 Common misconceptions and assumptions that lead engineers astray
3  Footguns & Pitfalls         Real-world bugs caused by floating-point rounding, integer overflow, and endianness
4  Street Ops                  Practical recipes for inspecting binary data, debugging precision issues, and safe comparisons
5  Trivia & Interesting Facts  Surprising facts about binary and floating-point that deepen intuition
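As a taste of the "safe comparisons" recipes listed under Street Ops, a comparison based on tolerances (sketched with Python's standard `math.isclose`; the tolerance values are illustrative assumptions, not recommendations from this guide) avoids the equality trap:

```python
import math

a = 0.1 + 0.2
b = 0.3

# Direct equality fails: both sides carry rounding error.
print(a == b)                                    # False

# A relative-tolerance comparison treats them as equal.
print(math.isclose(a, b))                        # True (default rel_tol=1e-09)

# Near zero, relative tolerance alone never matches; add an
# absolute tolerance appropriate to the problem's scale.
print(math.isclose(1e-12, 0.0))                  # False
print(math.isclose(1e-12, 0.0, abs_tol=1e-9))    # True
```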