Number Base Converter: Convert Between Number Systems

📅 April 13, 2026 ⏱️ 10 min read 📂 Math Tools

Every piece of digital information — from the text you are reading right now to the colors on your screen — is ultimately represented as numbers in different bases. Your computer thinks in binary (base 2), programmers use hexadecimal (base 16) as shorthand, and humans count in decimal (base 10). Understanding how to convert between these number systems is not just an academic exercise — it is a practical skill that you will use in programming, networking, web development, and data analysis.

This tutorial provides a thorough grounding in the four most important number systems used in computing, teaches you the conversion methods through worked examples, shows how these conversions apply in real programming scenarios, and introduces a free tool that handles all the math for you.

Understanding Number Bases

A number base (also called a radix) defines how many unique digits a system uses and how the position of each digit determines its value. In our everyday decimal system, the number 305 means:

305₁₀ = (3 × 10²) + (0 × 10¹) + (5 × 10⁰) = 300 + 0 + 5

Each position represents a power of the base. The rightmost digit is the "ones" place (base⁰), the next is the "base" place (base¹), and so on. This positional system works identically regardless of the base — only the digits and the multiplier change.
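This positional rule can be sketched in a few lines of Python (`positional_value` is an illustrative helper name, not a standard function):

```python
# Positional expansion: multiply each digit by base**position,
# counting positions from 0 at the rightmost digit.
def positional_value(digits, base):
    return sum(d * base ** i for i, d in enumerate(reversed(digits)))

print(positional_value([3, 0, 5], 10))    # 305
print(positional_value([1, 1, 0, 1], 2))  # 13
```

The same function works for any base because only the multiplier changes, exactly as described above.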

Binary (Base 2)

Binary is the native language of computers. Every transistor, every memory cell, every bit of storage is either on or off — represented as 1 or 0. Binary uses only two digits: 0 and 1.

Binary numbers grow long quickly because each digit carries so little information. The number 13 in binary is 1101, and a single byte (8 bits) can represent values from 0 to 255.

1101₂ = (1×2³) + (1×2²) + (0×2¹) + (1×2⁰) = 8 + 4 + 0 + 1 = 13₁₀

Binary is essential for understanding how computers work at the lowest level. Bitwise operations (AND, OR, XOR, shifts) operate directly on binary digits. Network subnet masks are defined in binary. File permissions on Unix systems use binary flags.
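A quick Python sketch of the bitwise operations mentioned above, using binary literals so the digit-level effect is visible:

```python
# Bitwise operations act directly on the binary digits of integers.
a, b = 0b1101, 0b0110   # 13 and 6
print(bin(a & b))   # 0b100    AND: bits set in both
print(bin(a | b))   # 0b1111   OR: bits set in either
print(bin(a ^ b))   # 0b1011   XOR: bits set in exactly one
print(bin(a << 1))  # 0b11010  left shift: multiply by 2
```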

Common Binary Patterns

Decimal | Binary      | Use Case
0       | 00000000    | Null / off / false
1       | 00000001    | On / true / first flag
255     | 11111111    | Maximum byte value
256     | 100000000   | 2⁸, the first value that no longer fits in one byte
1024    | 10000000000 | 1 KB in computing

Octal (Base 8)

Octal uses digits 0 through 7. It groups binary digits into sets of three: since 2³ = 8, every octal digit maps exactly to three binary digits. This made octal popular in the early days of computing when systems used word sizes that were multiples of 3 bits (like the PDP-8 with 12-bit words).

Octal 753 = Binary 111 101 011 = (111)(101)(011)

Each octal digit directly maps to three binary digits, making conversion trivial.

Today, octal's primary surviving use is in Unix/Linux file permissions. When you run chmod 755 script.sh, the 755 is an octal number where each digit encodes the permissions for owner, group, and others: 7 (binary 111) grants read, write, and execute (rwx), while 5 (binary 101) grants read and execute (r-x).

Decimal (Base 10)

Decimal is the number system humans use naturally, probably because we have ten fingers. It uses digits 0 through 9 and is the default for almost all human-facing numeric input and output.

In computing, decimal is the "lingua franca" — the format that bridges human understanding with machine processing. When you type a number into a web form, it is in decimal. When a program displays a result, it converts from its internal binary representation to decimal for you to read.

Hexadecimal (Base 16)

Hexadecimal (hex) uses digits 0-9 and letters A-F (representing values 10-15). It is the most widely used non-decimal base in modern computing because it maps perfectly to binary: since 2⁴ = 16, one hex digit represents exactly four binary digits (a nibble), and two hex digits represent one byte.

Hex FF = Binary 1111 1111 = Decimal 255

Hex 8B5CF6 = Binary 1000 1011 0101 1100 1111 0110
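Python's format specifiers make this nibble mapping easy to check for yourself:

```python
# One hex digit <-> four binary digits; two hex digits <-> one byte.
print(f"{0xFF:08b}")       # 11111111
print(f"{0x8B5CF6:024b}")  # 100010110101110011110110
print(f"{0b11111111:X}")   # FF
```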

Hexadecimal appears everywhere in computing: memory addresses, CSS color codes, MAC addresses, and raw byte values in debuggers and hex dumps.

Conversion Methods

Decimal to Binary

Divide the decimal number by 2 repeatedly, recording the remainder each time. Read the remainders from bottom to top.

Convert 42 to binary:

42 ÷ 2 = 21 remainder 0

21 ÷ 2 = 10 remainder 1

10 ÷ 2 = 5 remainder 0

5 ÷ 2 = 2 remainder 1

2 ÷ 2 = 1 remainder 0

1 ÷ 2 = 0 remainder 1

Read bottom to top: 101010
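The division steps above translate directly into a short Python loop:

```python
# Repeated division by 2: collect remainders, then read them
# bottom-to-top (i.e. reverse the order they were produced in).
n = 42
remainders = []
while n > 0:
    n, r = divmod(n, 2)
    remainders.append(str(r))
print("".join(reversed(remainders)))  # 101010
```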

Binary to Decimal

Multiply each digit by 2 raised to its position (starting from 0 on the right) and sum all values. This is shown in the binary section above with the example 1101₂ = 13₁₀.

Decimal to Hexadecimal

Same division method as binary, but divide by 16 instead. Remainders 0-9 are used directly; remainders 10-15 become A-F.

Convert 300 to hex:

300 ÷ 16 = 18 remainder 12 (C)

18 ÷ 16 = 1 remainder 2

1 ÷ 16 = 0 remainder 1

Result: 12C

Hexadecimal to Decimal

Multiply each digit by 16 raised to its position and sum. For hex 12C: (1×16²) + (2×16¹) + (12×16⁰) = 256 + 32 + 12 = 300.
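Both directions of this worked example can be verified in one line each of Python:

```python
# Decimal -> hex with a format spec, hex -> decimal with int().
print(format(300, "X"))  # 12C
print(int("12C", 16))    # 300
```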

Binary to Hexadecimal (and Back)

This is the easiest conversion — group binary digits into sets of 4 (pad with leading zeros if needed), then convert each group directly to a hex digit.

Binary 1010 1100 1110 = Hex A C E = ACE
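A minimal Python sketch of the grouping method, including the leading-zero padding step:

```python
# Pad to a multiple of 4 bits, split into nibbles, then map each
# nibble to a single hex digit.
binary = "101011001110"
width = -(-len(binary) // 4) * 4          # round length up to a multiple of 4
padded = binary.zfill(width)
nibbles = [padded[i:i + 4] for i in range(0, width, 4)]
print("".join(f"{int(n, 2):X}" for n in nibbles))  # ACE
```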

Using Octal as an Intermediate

Octal can serve as a stepping stone for conversions: Binary → Octal (group by 3) → Decimal, or Decimal → Octal → Binary. However, since hex aligns better with byte boundaries, hex is generally preferred as an intermediate base in modern computing.

Programming Applications

JavaScript

// Decimal to binary, hex, octal
const num = 42;
console.log(num.toString(2));   // "101010"
console.log(num.toString(16));  // "2a"
console.log(num.toString(8));   // "52"

// Parse from any base
parseInt('101010', 2);   // 42
parseInt('2a', 16);      // 42
parseInt('52', 8);       // 42

// CSS color manipulation
const hex = '#8b5cf6';
const r = parseInt(hex.slice(1, 3), 16); // 139
const g = parseInt(hex.slice(3, 5), 16); // 92
const b = parseInt(hex.slice(5, 7), 16); // 246

Python

# Decimal to other bases
num = 42
bin(num)    # '0b101010'
hex(num)    # '0x2a'
oct(num)    # '0o52'

# Parse from any base
int('101010', 2)   # 42
int('2a', 16)      # 42
int('52', 8)       # 42

# Format strings
f"{num:b}"    # '101010'
f"{num:x}"    # '2a'
f"{num:o}"    # '52'
f"{num:#06x}" # '0x002a' (zero-padded with prefix)

# Custom base conversion (up to base 36)
def to_base(n, base):
    digits = '0123456789abcdefghijklmnopqrstuvwxyz'
    if n == 0: return '0'
    result = []
    while n > 0:
        result.append(digits[n % base])
        n //= base
    return ''.join(reversed(result))

Try the Free Converter Tool

Convert Numbers Between Any Base

Supports binary, octal, decimal, hexadecimal, and custom bases up to 36. Instant conversion with step-by-step explanations.

→ Open Number Base Converter

Quick Reference Table

Decimal | Binary    | Octal | Hex
0       | 0         | 0     | 0
10      | 1010      | 12    | A
16      | 10000     | 20    | 10
32      | 100000    | 40    | 20
64      | 1000000   | 100   | 40
128     | 10000000  | 200   | 80
255     | 11111111  | 377   | FF
256     | 100000000 | 400   | 100

FAQ

What is a number base?
A number base (or radix) is the number of unique digits used to represent numbers in a positional numeral system. Decimal uses base 10 (digits 0-9), binary uses base 2 (digits 0-1), octal uses base 8 (digits 0-7), and hexadecimal uses base 16 (digits 0-9 and A-F).

How do you convert binary to decimal?
Multiply each binary digit by 2 raised to its position (starting from 0 on the right) and sum all values. For example, binary 1011 = (1×2³) + (0×2²) + (1×2¹) + (1×2⁰) = 8 + 0 + 2 + 1 = 11 in decimal.

Why do programmers use hexadecimal?
Hexadecimal (base 16) represents 4 binary digits with a single hex digit, making it extremely compact for representing binary data. It is used for memory addresses, color codes, MAC addresses, and debugging because one byte (8 bits) maps to exactly two hex digits.

What is the difference between octal and hexadecimal?
Octal uses base 8 (digits 0-7) and groups binary digits in sets of 3. Hexadecimal uses base 16 (digits 0-9, A-F) and groups binary digits in sets of 4. Hexadecimal is far more common in modern computing because it aligns with byte boundaries (8 bits = 2 hex digits).

Can a number base be negative or fractional?
In theory, yes, but these are primarily mathematical curiosities. Negative bases (like base -2) and fractional bases (like base 1.618, the golden ratio) have interesting mathematical properties but are not used in practical computing. Real-world computing uses bases 2, 8, 10, and 16.