Binary Decimal Hex Octal Converter
Key Takeaways
- Positional number systems (binary, decimal, hex, octal) all use the same principle: each digit's value depends on its position, multiplied by the base raised to that position's power.
- Hexadecimal is the preferred compact notation for binary data in programming because each hex digit maps to exactly 4 bits.
- Two's complement is the universal standard for representing signed integers in computers, allowing addition and subtraction to share the same hardware circuitry.
Understanding Positional Number Systems
All the number systems we work with in computing -- binary, octal, decimal, and hexadecimal -- are positional number systems. This means the value of each digit depends not just on the digit itself but on its position within the number. The base (or radix) of the system determines how many unique digits are available and how the positional values scale.
In decimal (base-10), each position represents a power of 10: ones, tens, hundreds, thousands. The number 4,732 equals (4 × 10³) + (7 × 10²) + (3 × 10¹) + (2 × 10⁰). The exact same principle applies to every other base. In binary (base-2), 1101 equals (1 × 2³) + (1 × 2²) + (0 × 2¹) + (1 × 2⁰) = 8 + 4 + 0 + 1 = 13.
Hexadecimal (base-16) extends beyond the familiar ten digits by using letters A through F to represent values 10 through 15. The hex number 2F equals (2 × 16¹) + (15 × 16⁰) = 32 + 15 = 47 in decimal. This system's power comes from its alignment with binary: since 16 = 2⁴, each hex digit corresponds to exactly four binary digits, making hex-to-binary and binary-to-hex conversion trivially simple.
Octal (base-8) uses digits 0 through 7. Since 8 = 2³, each octal digit maps to exactly three binary digits. The octal number 377 equals (3 × 64) + (7 × 8) + (7 × 1) = 192 + 56 + 7 = 255 in decimal, which is 11111111 in binary -- the maximum value of a single byte. This tool is part of our comprehensive binary converters collection.
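The three expansions above can be checked in JavaScript (the language this converter runs in) with parseInt, whose second argument is the radix of the input string:

```javascript
// parseInt's second argument is the radix (base) of the input string.
parseInt("1101", 2);  // binary 1101 -> 13
parseInt("2F", 16);   // hex    2F   -> 47
parseInt("377", 8);   // octal  377  -> 255
```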
Conversion Algorithms Explained
Converting from any base to decimal uses the expansion method: multiply each digit by its positional power and sum. Converting from decimal to another base uses the repeated division method: divide the number by the target base, record the remainder, and repeat until the quotient is zero. The remainders, read from bottom to top, form the result.
For example, to convert decimal 42 to binary: 42 / 2 = 21 R0, 21 / 2 = 10 R1, 10 / 2 = 5 R0, 5 / 2 = 2 R1, 2 / 2 = 1 R0, 1 / 2 = 0 R1. Reading remainders bottom-to-top: 101010. To convert between hex and binary, simply expand each hex digit to 4 bits: 2A = 0010 1010.
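The repeated-division steps above can be sketched as a small JavaScript function. This is an illustrative implementation, not the converter's actual code; `toBase` is a name chosen here for clarity:

```javascript
// Convert a non-negative integer to a string in the given base (2-36)
// using the repeated-division method: divide by the base, record the
// remainder, repeat until the quotient is zero.
function toBase(n, base) {
  if (n === 0) return "0";
  const digits = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";
  let out = "";
  while (n > 0) {
    out = digits[n % base] + out; // remainders read bottom-to-top
    n = Math.floor(n / base);
  }
  return out;
}

toBase(42, 2);  // -> "101010"
toBase(47, 16); // -> "2F"
```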
Shortcut: Hex to Binary
The most efficient conversion path between hex and binary uses the 4-bit mapping directly. Each hex digit has a fixed 4-bit binary equivalent: 0=0000, 1=0001, 2=0010, 3=0011, 4=0100, 5=0101, 6=0110, 7=0111, 8=1000, 9=1001, A=1010, B=1011, C=1100, D=1101, E=1110, F=1111. Memorizing this table is one of the most useful skills for any programmer working at a low level.
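The table lookup can be sketched in a few lines of JavaScript: rather than hard-coding the sixteen entries, each digit's 4-bit pattern is derived by converting it to binary and left-padding with zeros (the `hexToBinary` name is an assumption for this example):

```javascript
// Expand each hex digit to its fixed 4-bit binary pattern.
function hexToBinary(hex) {
  return [...hex]
    .map(d => parseInt(d, 16).toString(2).padStart(4, "0"))
    .join(" ");
}

hexToBinary("2A"); // -> "0010 1010"
hexToBinary("FF"); // -> "1111 1111"
```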
Two's Complement and Signed Integers
Computers represent negative numbers using a system called two's complement. In this system, the most significant bit (MSB) serves as a sign indicator: 0 means positive, 1 means negative. However, the remaining bits do not simply represent the magnitude -- instead, negative values are encoded by inverting all bits and adding 1.
For example, in 8-bit two's complement, +5 is 00000101. To find -5, invert all bits to get 11111010, then add 1 to get 11111011. The range of values for an 8-bit signed integer is -128 to +127, compared to 0 to 255 for unsigned.
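The invert-and-add-one rule maps directly onto JavaScript's bitwise operators. The helper names below are chosen for this sketch; the mask `& 0xFF` keeps the result within 8 bits:

```javascript
// 8-bit two's complement negation: invert all bits, add 1, mask to 8 bits.
function negate8(x) {
  return (~x + 1) & 0xFF;
}

// Decode an 8-bit pattern as a signed value: if the MSB (value 128) is
// set, the signed value is the unsigned interpretation minus 256.
function toSigned8(bits) {
  return bits >= 128 ? bits - 256 : bits;
}

negate8(5).toString(2); // -> "11111011" (the -5 pattern from the text)
toSigned8(0b11111011);  // -> -5
```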
The elegance of two's complement is that addition works identically for signed and unsigned numbers. The processor does not need separate circuitry for signed arithmetic -- the same adder handles both cases correctly. This is why two's complement became the universal standard for signed integer representation in virtually all modern computer architectures.
Hexadecimal in Programming
Hexadecimal notation appears throughout software development. Memory addresses are displayed in hex because they compactly represent the underlying binary addresses. A 32-bit address like 0x7FFF5FBF is far more readable than its binary equivalent 01111111 11111111 01011111 10111111. Debuggers, disassemblers, and hex editors all present data in hexadecimal for this reason.
CSS and HTML color codes use hexadecimal: #FF5733 means red=255, green=87, blue=51 -- each pair of hex digits represents one byte (0-255) of color intensity. Byte arrays and binary protocols are commonly logged in hex format. Machine code and assembly language instructions are written in hex. Even UUID identifiers like 550e8400-e29b-41d4-a716-446655440000 are hexadecimal strings.
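Splitting a color code into its channels is just byte-at-a-time hex parsing. A minimal sketch (the `parseHexColor` helper is hypothetical, not a browser API):

```javascript
// Split a #RRGGBB color into its three byte-sized channels:
// each pair of hex digits is one byte (0-255).
function parseHexColor(color) {
  const hex = color.replace("#", "");
  return {
    r: parseInt(hex.slice(0, 2), 16),
    g: parseInt(hex.slice(2, 4), 16),
    b: parseInt(hex.slice(4, 6), 16),
  };
}

parseHexColor("#FF5733"); // -> { r: 255, g: 87, b: 51 }
```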
Octal and Unix File Permissions
In Unix and Linux systems, file permissions are represented as three groups of three bits: owner, group, and others. Each group has three permission flags: read (4), write (2), and execute (1). Since each group's three bits map to exactly one octal digit, permissions are naturally expressed in octal notation.
The permission 755 translates to: owner = 7 (rwx = 111), group = 5 (r-x = 101), others = 5 (r-x = 101). The command chmod 644 file.txt sets owner to read-write (6 = 110), group and others to read-only (4 = 100). This octal representation is concise and directly reflects the underlying binary permission bits.
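The digit-to-flags mapping can be sketched with bitwise tests against the read (4), write (2), and execute (1) flags. The `rwx` and `mode` names are illustrative, not part of any standard API:

```javascript
// Render one octal permission digit (0-7) as an rwx string:
// read = 4, write = 2, execute = 1.
function rwx(digit) {
  return (digit & 4 ? "r" : "-") +
         (digit & 2 ? "w" : "-") +
         (digit & 1 ? "x" : "-");
}

// Expand a three-digit octal mode like 755 into symbolic form.
function mode(octal) {
  return [...String(octal)].map(d => rwx(Number(d))).join("");
}

mode(755); // -> "rwxr-xr-x"
mode(644); // -> "rw-r--r--"
```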
BigInt and Precision in JavaScript
Standard JavaScript numbers use IEEE 754 double-precision floating point, which safely represents integers up to 2⁵³ - 1 (9,007,199,254,740,991). For 64-bit integer arithmetic, this is insufficient. JavaScript's BigInt type, introduced in ES2020, provides arbitrary-precision integer arithmetic, allowing exact representation of 64-bit values and beyond.
Our converter uses BigInt automatically when you select 64-bit mode, ensuring that large values like 18,446,744,073,709,551,615 (the maximum unsigned 64-bit integer) are computed precisely. In regular JavaScript, this value would be rounded due to floating-point limitations. BigInt solves this by storing the number with exact integer precision.
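The precision difference is easy to demonstrate in a JavaScript console. BigInt literals carry an `n` suffix, and converting a large BigInt back to Number shows the rounding the text describes:

```javascript
// The maximum unsigned 64-bit value, held exactly as a BigInt.
const maxU64 = 2n ** 64n - 1n;

maxU64.toString();   // -> "18446744073709551615" (exact)
maxU64.toString(16); // -> "ffffffffffffffff"

// Coercing to a regular Number rounds to the nearest representable
// double, which here is 2^64 itself -- the low bits are lost.
Number(maxU64) === 2 ** 64; // -> true
```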
Frequently Asked Questions
How do I convert binary to decimal?
Multiply each binary digit by its positional power of 2, then sum the results. For example, 1011 = (1×8) + (0×4) + (1×2) + (1×1) = 11 in decimal. Our converter shows this step-by-step calculation in real time.
What is hexadecimal used for?
Hexadecimal provides a compact way to represent binary data. Each hex digit maps to exactly 4 binary bits, making it ideal for memory addresses, CSS color codes, byte arrays, and debugging output. Programmers use hex because it is more readable than long binary strings.
What is two's complement?
Two's complement is the standard method computers use to represent signed (positive and negative) integers. The most significant bit indicates the sign. To negate a number, invert all bits and add 1. This system allows the same hardware adder to handle both signed and unsigned arithmetic.
Why is octal used in Unix permissions?
Unix file permissions use three bits per permission group (read, write, execute), and each octal digit represents exactly three bits. This natural alignment makes octal the most concise way to express permission combinations: 7 = rwx, 6 = rw-, 5 = r-x, 4 = r--, and so on.
Can this tool handle large numbers?
Yes. When you select 64-bit mode, the tool automatically uses JavaScript's BigInt type for arbitrary-precision integer arithmetic. This ensures exact results even for the largest 64-bit values.
What is the difference between signed and unsigned?
Unsigned integers represent only non-negative values (0 to 2ⁿ - 1). Signed integers use two's complement to represent both positive and negative values (-2ⁿ⁻¹ to 2ⁿ⁻¹ - 1). For 8 bits: unsigned range is 0-255, signed range is -128 to 127.
How do I convert hexadecimal to binary?
Replace each hex digit with its 4-bit binary equivalent. The mapping is: 0=0000, 1=0001, 2=0010, ..., 9=1001, A=1010, B=1011, C=1100, D=1101, E=1110, F=1111. For example, 3F = 00111111.