
Precision & Accuracy in Computing

Why 0.1 + 0.2 doesn't equal 0.3, and how to work with decimal numbers safely in programming.


The Problem with Decimals

Try this in any programming language:

console.log(0.1 + 0.2);  // Output: 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);  // Output: false

This isn't a bug—it's how computers represent decimal numbers. Understanding why this happens is crucial for anyone working with financial calculations, scientific data, or any application where precision matters.

Real-World Consequences

Precision errors have caused financial losses, missile failures, and countless software bugs. In February 1991, a Patriot missile battery failed to intercept an incoming Scud, killing 28 soldiers: the system's clock counted time in tenths of a second, 0.1 cannot be represented exactly in binary, and the tiny per-tick error accumulated over roughly 100 hours of continuous operation until the tracking calculation was off by about a third of a second.

Binary Representation

Computers store numbers in binary (base-2), using only 0s and 1s. While whole numbers convert cleanly to binary, many decimal fractions become infinite repeating patterns.

Decimal vs Binary Fractions

In base-10 (decimal), 1/3 = 0.333... (repeating). In base-2 (binary), 1/10 has the same problem:

Decimal: 0.1
Binary:  0.0001100110011001100... (repeating infinitely)

Just as we can't represent 1/3 exactly in decimal, computers can't represent 0.1 exactly in binary. They must round after a certain number of digits.
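
JavaScript can print this binary form directly: Number.prototype.toString(2) shows the bits the engine actually stored.

// Exact: 0.625 = 1/2 + 1/8, a finite sum of powers of 2
console.log((0.625).toString(2));  // "0.101"

// Repeating: the pattern continues until the 64-bit double runs out of bits
console.log((0.1).toString(2));
// "0.0001100110011001100110011..." and so on, cut off where the 64 bits end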

Which Decimals Are Exact in Binary?

Exact in binary: 0.5, 0.25, 0.125, 0.75 (fractions whose denominators are powers of 2: 1/2, 1/4, 1/8, 3/4)

Repeating in binary: 0.1, 0.2, 0.3, 0.6, 0.7, 0.9 (and most other decimals)

This is why 0.5 + 0.25 === 0.75 works perfectly, but 0.1 + 0.2 === 0.3 fails.
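
A quick console check confirms the pattern:

console.log(0.5 + 0.25 === 0.75);  // true: every value here is exact in binary
console.log(0.1 + 0.2 === 0.3);    // false: all three are approximations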

Floating-Point Numbers (IEEE 754)

Most languages use the IEEE 754 standard for storing numbers with fractional parts. A 64-bit "double precision" float has three parts:

  • 1 bit: Sign (positive or negative)
  • 11 bits: Exponent (determines magnitude)
  • 52 bits: Mantissa/Significand (the digits)
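
If you want to see these three fields for yourself, here is a minimal sketch using the standard DataView and typed-array APIs to dump the raw bits of 0.1:

// Write 0.1 as a 64-bit float, then read the raw bytes back out
const buf = new ArrayBuffer(8);
new DataView(buf).setFloat64(0, 0.1);  // big-endian by default
const bits = [...new Uint8Array(buf)]
    .map(byte => byte.toString(2).padStart(8, '0'))
    .join('');

console.log(bits.slice(0, 1));   // "0": sign bit, positive
console.log(bits.slice(1, 12));  // "01111111011": exponent 1019, i.e. 2^(1019 - 1023) = 2^-4
console.log(bits.slice(12));     // "100110011001...": the mantissa, the repeating pattern again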

How It Works

A number is stored as: (-1)^sign × 2^exponent × 1.mantissa

For 0.1, the computer stores an approximation:

0.1000000000000000055511151231257827021181583404541015625

When you add two approximations (0.1 + 0.2), the error compounds:

0.1 (approx) + 0.2 (approx) = 0.30000000000000004
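
You can watch this happen with toFixed, which in modern engines accepts up to 100 digits:

console.log((0.1).toFixed(20));        // "0.10000000000000000555" (slightly above 0.1)
console.log((0.2).toFixed(20));        // "0.20000000000000001110" (slightly above 0.2)
console.log((0.1 + 0.2).toFixed(20));  // "0.30000000000000004441" (the errors add up)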

Precision Limits

  • 64-bit float: ~15-17 decimal digits of precision
  • 32-bit float: ~6-9 decimal digits of precision

This means you can reliably work with numbers like 123456789012345 (15 digits) but lose precision beyond that.
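
In JavaScript the practical cutoff is Number.MAX_SAFE_INTEGER, or 2^53 - 1; beyond it, consecutive integers can no longer be told apart:

console.log(Number.MAX_SAFE_INTEGER);               // 9007199254740991
console.log(9007199254740992 === 9007199254740993); // true: the gap between
                                                    // representable values is now 2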

Solutions & Workarounds

1. Use Integer Arithmetic (Recommended for Money)

Store money in cents (smallest currency unit), not dollars:

// ❌ Bad: Floating-point money
let price = 0.1 + 0.2;  // 0.30000000000000004

// ✅ Good: Integer cents
let priceCents = 10 + 20;  // 30 cents
let priceDollars = priceCents / 100;  // $0.30 (convert back only for display)

2. Use Decimal Libraries

For applications requiring exact decimal math:

// JavaScript with decimal.js: pass strings, not float literals,
// so values are never rounded to binary first
const Decimal = require('decimal.js');
let result = new Decimal('0.1').plus('0.2');  // Exactly 0.3

// Python built-in
from decimal import Decimal
result = Decimal('0.1') + Decimal('0.2')  # Exactly 0.3

Decimal Libraries

  • JavaScript: decimal.js, bignumber.js
  • Python: decimal module (built-in)
  • Java: BigDecimal
  • C#: decimal type (built-in)
  • Go: shopspring/decimal

3. Never Compare Floats with ===

Use epsilon comparison instead:

// ❌ Bad
if (0.1 + 0.2 === 0.3) { /* never true */ }

// ✅ Good: Check if difference is tiny
const EPSILON = 1e-10;
if (Math.abs((0.1 + 0.2) - 0.3) < EPSILON) {
    // Considered equal
}
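
One caveat: a fixed epsilon like 1e-10 only makes sense for values near 1. For operands of wildly different magnitudes, a common pattern is to scale the tolerance to the inputs; nearlyEqual below is a hypothetical helper sketching that idea:

// Relative comparison: the tolerance grows with the size of the inputs
function nearlyEqual(a, b, relTol = 1e-9) {
    const diff = Math.abs(a - b);
    return diff <= relTol * Math.max(Math.abs(a), Math.abs(b));
}

nearlyEqual(0.1 + 0.2, 0.3);      // true
nearlyEqual(1e15 + 0.125, 1e15);  // true: 0.125 is negligible at this scale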

4. Round When Displaying

Round to a sensible number of decimal places for display:

let result = 0.1 + 0.2;
console.log(result.toFixed(2));  // "0.30"
console.log(Math.round(result * 100) / 100);  // 0.3

5. Use Fixed-Point Arithmetic

For currencies, multiply by a power of 10 to eliminate decimals:

// Store $12.34 as 1234 cents
const dollars = 12.34;
const cents = Math.round(dollars * 100);  // 1234

// Calculate with integers
const tax = Math.round(cents * 0.08);  // 99 cents
const total = cents + tax;  // 1333 cents = $13.33
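
Division is the one step that still needs care, because cents don't always split evenly. One common pattern, sketched here as a hypothetical splitCents helper, hands out the leftover cents one at a time so the shares always sum back to the original:

// Split an amount into n integer shares that sum exactly to the total
function splitCents(totalCents, parts) {
    const base = Math.floor(totalCents / parts);
    const remainder = totalCents % parts;
    // The first `remainder` shares each get one extra cent
    return Array.from({ length: parts }, (_, i) => base + (i < remainder ? 1 : 0));
}

console.log(splitCents(1000, 3));  // [334, 333, 333]: exactly 1000, no lost cent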

Best Practices

Financial Applications

  • Store money as integers in the smallest unit (cents, pennies)
  • Use decimal libraries for complex calculations
  • Round only when displaying to the user, never in intermediate calculations
  • Validate inputs to prevent precision loss

Scientific Computing

  • Understand your precision requirements upfront
  • Use double precision (64-bit) by default
  • Consider error accumulation in iterative algorithms (see the summation sketch after this list)
  • Document significant figures in results
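
For long-running sums, compensated (Kahan) summation is the classic way to keep that accumulation in check; a minimal sketch:

// Kahan summation: track the low-order bits that naive addition discards
function kahanSum(values) {
    let sum = 0;
    let compensation = 0;  // running record of previously lost bits
    for (const v of values) {
        const y = v - compensation;    // re-inject the lost bits
        const t = sum + y;             // big + small: low bits of y are dropped here...
        compensation = (t - sum) - y;  // ...and measured exactly here
        sum = t;
    }
    return sum;
}

const tenths = Array(10).fill(0.1);
console.log(tenths.reduce((a, b) => a + b));  // 0.9999999999999999 (naive sum drifts)
console.log(kahanSum(tenths));                // 1 (compensated sum stays on target)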

General Programming

  • Never use floats as loop counters: for (let i = 0.1; i < 1; i += 0.1) runs an unpredictable number of times (see the sketch after this list)
  • Never use floats as dictionary keys or in hash sets
  • Be aware of language defaults: JavaScript uses 64-bit floats for all numbers
  • Test edge cases: Very large numbers, very small numbers, zero, negative numbers
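
Here is the loop-counter problem in action, along with the standard fix of keeping the counter an integer:

// ❌ Accumulating error in the counter: this runs 10 times, not 9,
// because the sum creeps to 0.9999999999999999, which is still < 1
let iterations = 0;
for (let i = 0.1; i < 1; i += 0.1) iterations++;
console.log(iterations);  // 10

// ✅ Integer counter; derive the float in a single rounding step
for (let k = 1; k <= 9; k++) {
    const i = k / 10;  // 0.1, 0.2, ..., 0.9 with no accumulated error
}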

Database Storage

In databases, use DECIMAL or NUMERIC types for money, not FLOAT or DOUBLE. For example, DECIMAL(10, 2) stores up to 8 digits before the decimal point and exactly 2 after.

When Floating-Point IS Appropriate

Floating-point is fine when:

  • Exact values don't matter (graphics, physics simulations)
  • You're measuring imprecise real-world data (temperatures, distances)
  • Performance is critical and small errors are acceptable
  • You're working with scientific notation (very large or small numbers)

Key Takeaways

  • Computers can't represent most decimal fractions exactly in binary
  • Never compare floats with ===—use epsilon comparison
  • For money: use integers (cents) or decimal libraries
  • For display: round to appropriate decimal places
  • Understand your precision requirements before choosing data types

Testing Your Understanding

Try these in your programming language's console:

0.1 + 0.1 + 0.1 === 0.3  // What do you expect?
0.3 - 0.2 === 0.1  // True or false?
0.1 * 3 === 0.3  // Check this one
Math.ceil(0.1 + 0.2)  // What's the result?

Understanding these quirks makes you a better programmer and prevents costly bugs in production systems.