Every piece of digital information — from the text you are reading right now to the colors on your screen — is ultimately represented as numbers in different bases. Your computer thinks in binary (base 2), programmers use hexadecimal (base 16) as shorthand, and humans count in decimal (base 10). Understanding how to convert between these number systems is not just an academic exercise — it is a practical skill you will use in programming, networking, web development, and data analysis.
This tutorial provides a thorough grounding in the four most important number systems used in computing, teaches you the conversion methods through worked examples, shows how these conversions apply in real programming scenarios, and introduces a free tool that handles all the math for you.
A number base (also called a radix) defines how many unique digits a system uses and how the position of each digit determines its value. In our everyday decimal system, the number 305 means:
305₁₀ = (3 × 10²) + (0 × 10¹) + (5 × 10⁰) = 300 + 0 + 5
Each position represents a power of the base. The rightmost digit is the "ones" place (base⁰), the next is the "base" place (base¹), and so on. This positional system works identically regardless of the base — only the digits and the multiplier change.
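This positional rule generalizes to any base, and it can be captured in a few lines of Python. Below is a minimal sketch; positional_value is a hypothetical helper written for this article, not a standard function:

```python
# Evaluate a digit sequence positionally: value = sum of d * base**i,
# with i counted from 0 at the rightmost digit.
def positional_value(digits, base):
    value = 0
    for d in digits:              # scan left to right
        value = value * base + d  # shift previous digits up one place
    return value

print(positional_value([3, 0, 5], 10))   # 305
print(positional_value([1, 1, 0, 1], 2)) # 13
```

Accumulating left to right with `value * base + d` is mathematically the same as summing each digit times a power of the base from the right; it just avoids computing the powers explicitly.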
Binary is the native language of computers. Every transistor, every memory cell, every bit of storage is either on or off — represented as 1 or 0. Binary uses only two digits: 0 and 1.
Binary numbers grow long quickly because each digit carries so little information. The number 13 in binary is 1101, and a single byte (8 bits) can represent values from 0 to 255.
1101₂ = (1×2³) + (1×2²) + (0×2¹) + (1×2⁰) = 8 + 4 + 0 + 1 = 13₁₀
Binary is essential for understanding how computers work at the lowest level. Bitwise operations (AND, OR, XOR, shifts) operate directly on binary digits. Network subnet masks are defined in binary. File permissions on Unix systems use binary flags.
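As a quick illustration of those bitwise operations in Python (the specific operand values here are arbitrary examples, not from the article):

```python
a, b = 0b1100, 0b1010
print(format(a & b, '04b'))   # '1000'  AND keeps bits set in both
print(format(a | b, '04b'))   # '1110'  OR keeps bits set in either
print(format(a ^ b, '04b'))   # '0110'  XOR keeps bits that differ
print(format(a << 1, '05b'))  # '11000' left shift doubles the value
```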
| Decimal | Binary | Use Case |
|---|---|---|
| 0 | 00000000 | Null / off / false |
| 1 | 00000001 | On / true / first flag |
| 255 | 11111111 | Maximum byte value |
| 256 | 100000000 | 2⁸ (one more than the max byte value) |
| 1024 | 10000000000 | 2¹⁰ (1 KiB in computing) |
Octal uses digits 0 through 7. It groups binary digits into sets of three: since 2³ = 8, every octal digit maps exactly to three binary digits. This made octal popular in the early days of computing when systems used word sizes that were multiples of 3 bits (like the PDP-8 with 12-bit words).
Octal 753 = Binary 111 101 011 = (111)(101)(011)
Each octal digit directly maps to three binary digits, making conversion trivial.
Today, octal's primary surviving use is in Unix/Linux file permissions. When you run chmod 755 script.sh, the 755 is an octal number where each digit represents permissions for owner, group, and others: 7 (binary 111) grants read, write, and execute to the owner, and each 5 (binary 101) grants read and execute to the group and to everyone else.
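Decoding such an octal mode can be sketched in a few lines of Python; perm_string is a hypothetical helper written for this article, not part of any standard library:

```python
# Decode a chmod-style octal mode into an rwx string (a sketch).
def perm_string(mode):
    out = []
    for shift in (6, 3, 0):  # owner, group, others: 3 bits each
        bits = (mode >> shift) & 0b111
        out.append(''.join(c if bits & f else '-'
                           for c, f in zip('rwx', (4, 2, 1))))
    return ''.join(out)

print(perm_string(0o755))  # 'rwxr-xr-x'
```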
Decimal is the number system humans use naturally, probably because we have ten fingers. It uses digits 0 through 9 and is the default for almost all human-facing numeric input and output.
In computing, decimal is the "lingua franca" — the format that bridges human understanding with machine processing. When you type a number into a web form, it is in decimal. When a program displays a result, it converts from its internal binary representation to decimal for you to read.
Hexadecimal (hex) uses digits 0-9 and letters A-F (representing values 10-15). It is the most widely used non-decimal base in modern computing because it maps perfectly to binary: since 2⁴ = 16, one hex digit represents exactly four binary digits (a nibble), and two hex digits represent one byte.
Hex FF = Binary 1111 1111 = Decimal 255
Hex 8B5CF6 = Binary 1000 1011 0101 1100 1111 0110
Hexadecimal appears everywhere in computing: CSS color codes like #8B5CF6, memory addresses, MAC addresses, Unicode code points, and hash digests are all conventionally written in hex.
To convert decimal to binary, divide the decimal number by 2 repeatedly, recording the remainder each time. Read the remainders from bottom to top.
Convert 42 to binary:
42 ÷ 2 = 21 remainder 0
21 ÷ 2 = 10 remainder 1
10 ÷ 2 = 5 remainder 0
5 ÷ 2 = 2 remainder 1
2 ÷ 2 = 1 remainder 0
1 ÷ 2 = 0 remainder 1
Read bottom to top: 101010
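The repeated-division steps above translate directly into code. Here is an illustrative sketch; dec_to_bin is a helper written for this article, not a built-in:

```python
# Repeated division by 2, mirroring the worked example for 42.
def dec_to_bin(n):
    if n == 0:
        return '0'
    bits = []
    while n > 0:
        bits.append(str(n % 2))  # record the remainder
        n //= 2
    return ''.join(reversed(bits))  # read bottom to top

print(dec_to_bin(42))  # '101010'
```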
To convert binary back to decimal, multiply each digit by 2 raised to its position (starting from 0 on the right) and sum all the values. This is shown in the binary section above with the example 1101₂ = 13₁₀.
To convert decimal to hexadecimal, use the same division method as for binary, but divide by 16 instead. Remainders 0-9 are used directly; remainders 10-15 become A-F.
Convert 300 to hex:
300 ÷ 16 = 18 remainder 12 (C)
18 ÷ 16 = 1 remainder 2
1 ÷ 16 = 0 remainder 1
Result: 12C
Multiply each digit by 16 raised to its position and sum. For hex 12C: (1×16²) + (2×16¹) + (12×16⁰) = 256 + 32 + 12 = 300.
Converting between binary and hexadecimal is the easiest conversion of all: group the binary digits into sets of 4 (padding with leading zeros if needed), then convert each group directly to one hex digit.
Binary 1010 1100 1110 = Hex A C E = ACE
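The grouping trick can be sketched as follows (bin_to_hex is a hypothetical helper name, not a library function):

```python
# Group binary digits into nibbles and map each to one hex digit.
def bin_to_hex(bits):
    bits = bits.replace(' ', '')
    pad = (-len(bits)) % 4       # pad with leading zeros to a multiple of 4
    bits = '0' * pad + bits
    nibbles = [bits[i:i + 4] for i in range(0, len(bits), 4)]
    return ''.join('0123456789ABCDEF'[int(n, 2)] for n in nibbles)

print(bin_to_hex('1010 1100 1110'))  # 'ACE'
```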
Octal can serve as a stepping stone for conversions: Binary → Octal (group by 3) → Decimal, or Decimal → Octal → Binary. However, since hex aligns better with byte boundaries, hex is generally preferred as an intermediate base in modern computing.
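For illustration, here is the stepping-stone route in Python, reusing the 753 example from the octal section:

```python
# Binary -> octal (group by 3) -> decimal.
bits = '111101011'
groups = [bits[i:i + 3] for i in range(0, len(bits), 3)]
octal = ''.join(str(int(g, 2)) for g in groups)
print(octal)          # '753'
print(int(octal, 8))  # 491
```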
```javascript
// Decimal to binary, hex, octal
const num = 42;
console.log(num.toString(2));  // "101010"
console.log(num.toString(16)); // "2a"
console.log(num.toString(8));  // "52"

// Parse from any base
parseInt('101010', 2); // 42
parseInt('2a', 16);    // 42
parseInt('52', 8);     // 42

// CSS color manipulation
const hex = '#8b5cf6';
const r = parseInt(hex.slice(1, 3), 16); // 139
const g = parseInt(hex.slice(3, 5), 16); // 92
const b = parseInt(hex.slice(5, 7), 16); // 246
```
```python
# Decimal to other bases
num = 42
bin(num)  # '0b101010'
hex(num)  # '0x2a'
oct(num)  # '0o52'

# Parse from any base
int('101010', 2)  # 42
int('2a', 16)     # 42
int('52', 8)      # 42

# Format strings
f"{num:b}"     # '101010'
f"{num:x}"     # '2a'
f"{num:o}"     # '52'
f"{num:#06x}"  # '0x002a' (zero-padded with prefix)

# Custom base conversion (up to base 36)
def to_base(n, base):
    digits = '0123456789abcdefghijklmnopqrstuvwxyz'
    if n == 0:
        return '0'
    result = []
    while n > 0:
        result.append(digits[n % base])
        n //= base
    return ''.join(reversed(result))
```
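As a complementary sketch, the inverse operation (parsing a string in an arbitrary base) can be written the same way; from_base is a hypothetical helper mirroring to_base, not a standard function:

```python
# Parse a string in any base up to 36, the inverse of to_base.
def from_base(s, base):
    digits = '0123456789abcdefghijklmnopqrstuvwxyz'
    value = 0
    for ch in s.lower():
        value = value * base + digits.index(ch)
    return value

print(from_base('2a', 16))     # 42
print(from_base('101010', 2))  # 42
```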
The free Number Base Converter supports binary, octal, decimal, hexadecimal, and custom bases up to 36, with instant conversion and step-by-step explanations.
→ Open Number Base Converter

| Decimal | Binary | Octal | Hex |
|---|---|---|---|
| 0 | 0 | 0 | 0 |
| 10 | 1010 | 12 | A |
| 16 | 10000 | 20 | 10 |
| 32 | 100000 | 40 | 20 |
| 64 | 1000000 | 100 | 40 |
| 128 | 10000000 | 200 | 80 |
| 255 | 11111111 | 377 | FF |
| 256 | 100000000 | 400 | 100 |