Text to Binary Converter
Convert any text string into its binary representation using ASCII, UTF-8, or UTF-16 encoding. Supports bidirectional conversion, multiple output formats, character breakdown tables, and emoji handling.
The binary number system is the mathematical foundation upon which all modern digital technology is built. Unlike the decimal system that humans use in everyday life, which employs ten digits (0 through 9), binary uses only two digits: 0 and 1. Each digit in a binary number is called a bit, short for "binary digit," and represents one of two possible states. This simplicity is precisely what makes binary so powerful in computing: it maps directly to the on/off states of electronic transistors inside processors.
In binary, the value of each position doubles as you move from right to left. The rightmost position represents 2⁰ (which is 1), the next position represents 2¹ (which is 2), then 2² (which is 4), 2³ (which is 8), and so on. To convert a binary number to decimal, you multiply each digit by its positional value and sum the results. For example, the binary number 1101 equals (1 × 8) + (1 × 4) + (0 × 2) + (1 × 1) = 13 in decimal.
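The positional method translates directly into a few lines of code. The sketch below shows a minimal (illustrative, not production-grade) `binaryToDecimal` helper in TypeScript, alongside the built-in `parseInt` with radix 2, which performs the same conversion.

```typescript
// Minimal sketch: convert a binary string to decimal by accumulating
// positional values from left to right (each step doubles the running total).
function binaryToDecimal(binary: string): number {
  let value = 0;
  for (const digit of binary) {
    value = value * 2 + (digit === "1" ? 1 : 0);
  }
  return value;
}

console.log(binaryToDecimal("1101")); // 13
console.log(parseInt("1101", 2));     // 13 -- the built-in equivalent
```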
Understanding binary is not merely an academic exercise. Every software developer encounters binary concepts when working with bitwise operations, data encoding, network protocols, file formats, and memory management. The ability to think in binary and convert between number systems is a foundational skill that separates surface-level programmers from engineers who truly understand how their code interacts with hardware.
The choice of two digits is rooted in physics and engineering. Electronic circuits are most reliable when they distinguish between two clearly separated voltage levels rather than ten. A transistor is either conducting current or it is not. A capacitor is either charged or discharged. A magnetic domain is oriented north or south. By using binary, engineers maximize the noise margin between states, making digital systems extraordinarily reliable even when transistors switch billions of times per second.
Claude Shannon proved mathematically in 1948 that any information can be represented using binary digits, regardless of its original form. His information theory established the bit as the universal unit of information, showing that text, images, sound, and video can all be faithfully encoded as sequences of ones and zeros. This theoretical insight, combined with the practical advantages of two-state electronics, cemented binary as the language of the digital age.
The concept of binary representation predates electronic computers by millennia. The ancient Indian scholar Pingala, writing around the 2nd century BC, used a binary-like system to classify poetic meters in his work Chandahshastra. His system assigned short and long syllables values analogous to 0 and 1, creating a combinatorial framework that anticipates modern binary counting.
Gottfried Wilhelm Leibniz (1646-1716) is credited with developing the modern binary number system. In 1679, he outlined a complete binary arithmetic in his manuscript "De Progressione Dyadica," and in 1703, he published "Explication de l'Arithmétique Binaire," which demonstrated how all arithmetic operations could be performed using only 0 and 1. Leibniz was fascinated by the philosophical elegance of binary, connecting it to his concept of creation from nothing (0 representing void, 1 representing God).
George Boole (1815-1864) transformed binary from a number system into a logical calculus. His 1854 work "An Investigation of the Laws of Thought" introduced Boolean algebra, a system of logic where propositions are either true or false and can be combined using AND, OR, and NOT operations. Boolean algebra became the mathematical language for designing digital circuits, where true and false correspond directly to 1 and 0.
Claude Shannon (1916-2001), often called the father of information theory, bridged the gap between abstract mathematics and practical engineering. In his landmark 1937 master's thesis at MIT, Shannon demonstrated that Boolean algebra could model the behavior of electrical switching circuits. This insight meant that any logical or mathematical operation expressible in Boolean algebra could be implemented in hardware using switches (and later transistors). His 1948 paper "A Mathematical Theory of Communication" established information theory and defined the bit as the fundamental unit of information.
The first electronic digital computers of the 1940s were built from vacuum tubes acting as switches: Colossus and the Manchester Baby computed in binary, while ENIAC still counted in decimal before binary became the standard. The invention of the transistor in 1947 at Bell Labs made binary computation smaller, faster, and more reliable. The integrated circuit (1958) placed multiple transistors on a single chip, and Moore's Law drove exponential increases in transistor density. Modern processors contain billions of transistors, each one a binary switch operating billions of times per second, all built on the mathematical foundations laid by Leibniz, Boole, and Shannon.
At the lowest level, a computer processor is a vast network of transistors organized into logic gates. Each logic gate performs a simple binary operation: AND gates output 1 only when both inputs are 1; OR gates output 1 when at least one input is 1; NOT gates invert the input. By combining these gates into increasingly complex circuits, engineers build adders, multiplexers, registers, arithmetic logic units (ALUs), and ultimately complete processors capable of executing billions of instructions per second.
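To make the gate behavior concrete, here is a small sketch that models single-bit gates with TypeScript's bitwise operators and combines them into a half adder, the building block of hardware addition. The helper names (`AND`, `OR`, `NOT`, `XOR`, `halfAdder`) are illustrative, not part of any standard library.

```typescript
// Illustrative sketch: basic logic gates on single bits (0 or 1),
// expressed with bitwise operators.
const AND = (a: number, b: number): number => a & b;
const OR  = (a: number, b: number): number => a | b;
const NOT = (a: number): number => a ^ 1;        // invert a single bit
const XOR = (a: number, b: number): number => a ^ b;

// A half adder combines gates to add two bits, producing a sum and a carry --
// the same construction full adders and ALUs are built from.
function halfAdder(a: number, b: number): { sum: number; carry: number } {
  return { sum: XOR(a, b), carry: AND(a, b) };
}

console.log(halfAdder(1, 1)); // { sum: 0, carry: 1 }  (1 + 1 = 10 in binary)
console.log(OR(0, 1), NOT(1)); // 1 0
```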
When you write code in a high-level language like JavaScript or Python, a compiler or interpreter translates your instructions into machine code -- a sequence of binary numbers that the processor understands directly. Each instruction is a specific pattern of bits that tells the processor what operation to perform (add, subtract, compare, jump) and what data to operate on. The instruction set architecture (ISA) defines the binary encoding of every possible instruction, creating a precise interface between software and hardware.
All data in a computer, without exception, is stored as binary. Text characters are mapped to binary values using encoding standards like ASCII (7-bit codes for 128 characters) or Unicode UTF-8 (variable-length encoding supporting over 149,000 characters). Images are stored as grids of pixels, where each pixel's color is defined by binary values for red, green, and blue channels. Audio is sampled at regular intervals (typically 44,100 times per second for CD quality), and each sample's amplitude is stored as a binary number. Video combines image and audio data with timing information, all in binary.
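As a concrete illustration of text encoding, the sketch below uses the standard TextEncoder API (which produces UTF-8 bytes) to render a string as 8-bit binary groups; the `textToBinary` helper name is hypothetical, chosen for illustration.

```typescript
// Sketch: encode a string to UTF-8 bytes with the standard TextEncoder API,
// then render each byte as an 8-bit binary group.
function textToBinary(text: string): string {
  const bytes = new TextEncoder().encode(text); // UTF-8 encoding
  return Array.from(bytes)
    .map((byte) => byte.toString(2).padStart(8, "0"))
    .join(" ");
}

console.log(textToBinary("Hi")); // "01001000 01101001"
console.log(textToBinary("€"));  // three UTF-8 bytes: "11100010 10000010 10101100"
```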
Computer memory (RAM) is organized as a linear array of bytes, each identified by a binary address. A 32-bit address space can reference 2³² = 4,294,967,296 bytes (4 GB) of memory, while a 64-bit address space can theoretically address 2⁶⁴ bytes (16 exabytes). When a program accesses a variable, the processor uses the variable's binary memory address to read or write the corresponding bytes. Pointers in languages like C are simply binary memory addresses, which is why understanding binary representation is essential for systems programming.
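The address-space figures are easy to verify; this short check recomputes them with BigInt so the 64-bit value stays exact rather than overflowing JavaScript's Number type.

```typescript
// Quick check of the address-space arithmetic above.
const bytes32 = 2n ** 32n; // 4,294,967,296 bytes = 4 GiB
const bytes64 = 2n ** 64n; // 18,446,744,073,709,551,616 bytes = 16 EiB

console.log(bytes32.toString()); // "4294967296"
console.log(bytes64.toString()); // "18446744073709551616"
```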
While binary is the native language of computers, several other number systems play important roles in computing and programming. Each system uses a different base (radix), which determines how many unique digits it employs and how positional values increase.
| System | Base | Digits | Example (decimal 255) | Use Case |
|---|---|---|---|---|
| Binary | 2 | 0-1 | 11111111 | Hardware, logic gates |
| Octal | 8 | 0-7 | 377 | Unix permissions |
| Decimal | 10 | 0-9 | 255 | Human-readable values |
| Hexadecimal | 16 | 0-9, A-F | FF | Memory, colors, debugging |
Hexadecimal (base-16) is particularly important in programming because each hex digit maps to exactly four binary digits (bits). This means a byte (8 bits) can be represented by exactly two hex digits, making hexadecimal a compact and readable way to express binary data. Memory addresses, color codes (#FF5733), MAC addresses, and debugging output all commonly use hexadecimal notation.
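In JavaScript/TypeScript, for example, `Number.prototype.toString(radix)` and `parseInt(value, radix)` move between these notations directly; a brief sketch:

```typescript
// Converting decimal 255 between binary, hexadecimal, and back.
const value = 255;
console.log(value.toString(2));        // "11111111"
console.log(value.toString(16));       // "ff"
console.log(parseInt("FF", 16));       // 255
console.log(parseInt("11111111", 2));  // 255

// Each hex digit corresponds to exactly four bits:
console.log(parseInt("F", 16).toString(2)); // "1111"
```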
Octal (base-8) maps each digit to three binary digits. While less common than hexadecimal in modern programming, octal remains significant in Unix/Linux systems for representing file permissions. The permission 755, for example, means 111 101 101 in binary: read-write-execute for the owner, read-execute for the group, and read-execute for others.
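A small sketch of how those permission digits expand into bits (the `describePermissions` helper is illustrative, not a standard API):

```typescript
// Decode a Unix permission string such as "755" digit by digit.
// Each octal digit expands to three permission bits: read, write, execute.
function describePermissions(octal: string): string[] {
  const names = ["owner", "group", "others"];
  return Array.from(octal).map((digit, i) => {
    const bits = parseInt(digit, 8);
    const r = bits & 0b100 ? "r" : "-";
    const w = bits & 0b010 ? "w" : "-";
    const x = bits & 0b001 ? "x" : "-";
    return `${names[i]}: ${r}${w}${x}`;
  });
}

console.log(describePermissions("755"));
// [ "owner: rwx", "group: r-x", "others: r-x" ]
```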
Understanding the relationships between these number systems -- and being able to convert fluently between them -- is a core competency for software developers, network engineers, security researchers, and anyone who works close to the hardware layer. The tools on this page make these conversions instant and error-free.
Our suite of binary converters covers every common conversion you might need. Each tool runs entirely in your browser with no data sent to any server. Select any converter below to get started.
Convert any text string into its binary representation using ASCII, UTF-8, or UTF-16 encoding. Supports bidirectional conversion, multiple output formats, character breakdown tables, and emoji handling.
Convert between binary, decimal, hexadecimal, and octal in real time. Type in any field and the others update instantly. Supports 8/16/32/64-bit widths, signed/unsigned modes, and BigInt for large numbers.
Encode text or files to Base64, decode Base64 strings back to text, and generate data URIs. Supports drag-and-drop file upload, image preview, URL-safe Base64 variant, and clipboard operations.
Generate cryptographic hashes using SHA-256, SHA-384, SHA-512, or SHA-1. Hash text or files directly in your browser using the Web Crypto API. Includes hash comparison tool for integrity verification.
Translate text to Morse code and back with audio playback using the Web Audio API. Adjustable speed (5-30 WPM), visual light signal, and complete international Morse code reference table.
The bit is the atomic unit of digital information. It can hold exactly one of two values: 0 or 1. While a single bit carries minimal information, bits combine to represent increasingly complex data. Eight bits form a byte, which can represent 2⁸ = 256 distinct values (0 to 255 unsigned, or -128 to 127 signed using two's complement). A byte is sufficient to represent any ASCII character, a single pixel channel value, or a small integer.
Common data sizes in computing include the word, which historically varied by architecture but is typically 32 or 64 bits in modern systems. A 32-bit integer can represent values from 0 to 4,294,967,295 (unsigned) or -2,147,483,648 to 2,147,483,647 (signed). A 64-bit integer extends this range enormously, handling values up to 18,446,744,073,709,551,615 unsigned.
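These ranges all follow from the bit width; as a sanity check, the sketch below derives them with BigInt arithmetic (the helper names are illustrative):

```typescript
// Derive unsigned and signed (two's complement) ranges from a bit width.
const unsignedMax = (bits: number): bigint => 2n ** BigInt(bits) - 1n;
const signedMin   = (bits: number): bigint => -(2n ** BigInt(bits - 1));
const signedMax   = (bits: number): bigint => 2n ** BigInt(bits - 1) - 1n;

console.log(unsignedMax(8));   // 255n
console.log(signedMin(8));     // -128n
console.log(unsignedMax(32));  // 4294967295n
console.log(signedMax(32));    // 2147483647n
console.log(unsignedMax(64));  // 18446744073709551615n
```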
Representing negative numbers in binary requires a convention. The most widely used system is two's complement, where the most significant bit (MSB) indicates the sign: 0 for positive, 1 for negative. To negate a number in two's complement, you invert all bits and add 1. For example, +5 in 8-bit binary is 00000101; inverting gives 11111010, adding 1 gives 11111011, which represents -5. This elegant system allows addition and subtraction to use the same circuitry, simplifying processor design.
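The worked example translates directly into code; here is a minimal sketch of 8-bit two's-complement negation (the `negate8` helper is illustrative):

```typescript
// Negate an 8-bit value by the two's-complement rule:
// invert all bits, add 1, and keep only the low 8 bits.
function negate8(value: number): number {
  return (~value + 1) & 0xff;
}

const plusFive = 0b00000101;          // +5
const minusFive = negate8(plusFive);  // stored bit pattern for -5

console.log(minusFive.toString(2).padStart(8, "0")); // "11111011"
// Interpreting the 8-bit pattern as a signed value:
console.log(minusFive > 127 ? minusFive - 256 : minusFive); // -5
```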
Real numbers (decimals) are represented using the IEEE 754 floating-point standard. A 32-bit float uses 1 bit for the sign, 8 bits for the exponent, and 23 bits for the mantissa (significand). A 64-bit double uses 1 bit for the sign, 11 bits for the exponent, and 52 bits for the mantissa. This binary representation explains why certain decimal fractions like 0.1 cannot be represented exactly in floating-point, leading to the well-known precision issues that programmers must account for in financial and scientific calculations.
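A one-line demonstration of this effect, plus the usual tolerance-based comparison workaround (the `approxEqual` helper is illustrative):

```typescript
// 0.1 has no exact binary representation, so sums drift slightly.
console.log(0.1 + 0.2);          // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);  // false

// Common workaround: compare with a small tolerance instead of exact equality.
const approxEqual = (a: number, b: number, eps = 1e-9): boolean =>
  Math.abs(a - b) < eps;
console.log(approxEqual(0.1 + 0.2, 0.3)); // true
```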
Binary number systems are not merely theoretical constructs. They have direct, practical applications across every domain of technology.