# Analyze Hex Numbers
Analyze hexadecimal numbers: convert to decimal, binary, octal, ASCII, IEEE 754 float, detect colors, compute bit properties, checksums, and endianness.
## About
Hexadecimal notation compresses a binary representation 4:1, one hex digit per nibble. A single misread nibble in a memory dump, register value, or color code propagates silently: wrong pixel colors, corrupted checksums, or a signed integer misread via two's complement. This tool parses any hex string and extracts its full numeric identity: decimal, binary, octal, ASCII text, IEEE 754 floating point, bit-level properties (popcount, MSB, LSB, palindrome), byte-order reversal, and XOR/sum checksums. It also auto-detects CSS color values and decodes RGBA along with HSL and WCAG relative luminance L.
The analyzer handles arbitrary-precision integers via BigInt, so it will not silently truncate values beyond 2^53 − 1 (Number.MAX_SAFE_INTEGER). For 8-digit and 16-digit inputs, IEEE 754 single- and double-precision floats are decoded using typed arrays. Limitation: ASCII decoding assumes each byte pair maps to a printable character; non-printable bytes are shown as dot placeholders. Two's complement is computed for standard widths (8, 16, 32, 64 bits) only.
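The ASCII decoding rule above can be sketched as follows; `hexToAscii` is a hypothetical helper name (not the tool's actual API), and the printable range 0x20–0x7E plus the dot placeholder follow the stated limitation:

```javascript
// Sketch: decode hex byte pairs to ASCII; non-printable bytes
// (outside 0x20-0x7E) become "." placeholders, per the limitation above.
// hexToAscii is a hypothetical name.
function hexToAscii(hex) {
  let out = "";
  for (let i = 0; i + 2 <= hex.length; i += 2) {
    const byte = parseInt(hex.slice(i, i + 2), 16);
    out += byte >= 0x20 && byte <= 0x7e ? String.fromCharCode(byte) : ".";
  }
  return out;
}

hexToAscii("48656C6C6F"); // "Hello"
```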
## Formulas
Hexadecimal to decimal conversion expands each digit by its positional weight in base 16:

$$D = \sum_{i=0}^{n-1} h_i \cdot 16^i$$

where $h_i$ is the integer value of the $i$-th hex digit (from the right, zero-indexed) and $n$ is the total number of digits.
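A minimal sketch of this expansion in BigInt arithmetic, so long inputs do not overflow; `hexToDecimal` is a hypothetical name:

```javascript
// Horner-style evaluation of D = sum of h_i * 16^i using BigInt.
// hexToDecimal is a hypothetical helper name.
function hexToDecimal(hex) {
  let d = 0n;
  for (const digit of hex.toLowerCase()) {
    d = d * 16n + BigInt(parseInt(digit, 16)); // shift one hex place, add h_i
  }
  return d;
}

hexToDecimal("DEAD"); // 57005n
```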
Binary conversion uses a nibble lookup: each hex digit maps to exactly 4 binary bits (e.g. $\mathrm{A}_{16} = 1010_2$, $\mathrm{F}_{16} = 1111_2$), so an $n$-digit hex string expands to exactly $4n$ bits.
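The nibble expansion can be sketched as follows; `hexToBinary` is a hypothetical name:

```javascript
// Each hex digit becomes exactly 4 bits, zero-padded on the left.
function hexToBinary(hex) {
  return [...hex.toLowerCase()]
    .map((d) => parseInt(d, 16).toString(2).padStart(4, "0"))
    .join("");
}

hexToBinary("DEAD"); // "1101111010101101"
```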
Popcount (number of set bits) uses Kernighan's algorithm, which clears the lowest set bit per iteration:

$$n \leftarrow n \mathbin{\&} (n - 1)$$

repeating until $n = 0$; the number of iterations equals the popcount.
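A sketch of Kernighan's algorithm on BigInt (so it works for arbitrarily long inputs); `popcount` is a hypothetical name:

```javascript
// Kernighan's popcount: n & (n - 1) clears the lowest set bit, so the
// loop body runs exactly once per set bit.
function popcount(n) {
  let count = 0;
  while (n !== 0n) {
    n &= n - 1n; // clear lowest set bit
    count++;
  }
  return count;
}

popcount(0xDEADn); // 11
```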
WCAG relative luminance for detected hex colors:

$$L = 0.2126\,R_{\text{lin}} + 0.7152\,G_{\text{lin}} + 0.0722\,B_{\text{lin}}$$

where each linearized channel

$$C_{\text{lin}} = \begin{cases} C_s / 12.92 & \text{if } C_s \le 0.03928 \\ \left(\dfrac{C_s + 0.055}{1.055}\right)^{2.4} & \text{otherwise} \end{cases}$$

with $C_s$ the 8-bit channel value divided by 255.
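The luminance computation can be sketched as follows; `relativeLuminance` is a hypothetical name, and 0.03928 is the linearization threshold from the WCAG 2.0 definition:

```javascript
// WCAG relative luminance from 8-bit sRGB channels.
// relativeLuminance is a hypothetical name.
function relativeLuminance(r, g, b) {
  const lin = (c) => {
    const s = c / 255; // normalize to [0, 1]
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

relativeLuminance(0, 0, 0); // 0 (black)
```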
IEEE 754 single-precision (32-bit) float decoding: sign bit $s$ (bit 31), exponent $e$ (bits 30-23), mantissa $m$ (bits 22-0). For normalized values ($0 < e < 255$):

$$v = (-1)^s \cdot 2^{\,e - 127} \cdot \left(1 + \frac{m}{2^{23}}\right)$$

$e = 0$ encodes zero and subnormals; $e = 255$ encodes $\pm\infty$ and NaN.
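Rather than decoding the fields by hand, the typed-array approach mentioned above lets the engine reinterpret the bit pattern directly; `hexToFloat32` is a hypothetical name:

```javascript
// Reinterpret a 32-bit hex pattern as an IEEE 754 single via DataView.
// Big-endian (final argument false): the hex string's leftmost byte is byte 0.
function hexToFloat32(hex) {
  const view = new DataView(new ArrayBuffer(4));
  view.setUint32(0, parseInt(hex, 16), false);
  return view.getFloat32(0, false);
}

hexToFloat32("3F800000"); // 1
hexToFloat32("7F800000"); // Infinity
```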
Two's complement for an $n$-bit signed integer $x$ where the MSB is 1:

$$v = x - 2^n$$

(values whose MSB is 0 are read directly as non-negative).
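This reinterpretation can be sketched in BigInt as follows; `toSigned` is a hypothetical name, and the widths follow the tool's supported 8/16/32/64 bits:

```javascript
// Two's-complement reinterpretation: if the MSB is set, subtract 2^n.
// toSigned is a hypothetical helper name.
function toSigned(x, bits) {
  const half = 1n << BigInt(bits - 1); // 2^(n-1): weight of the MSB
  return x >= half ? x - (half << 1n) : x; // half << 1n is 2^n
}

toSigned(0x80n, 8); // -128n
```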
## Reference Data
| Hex Digit | Decimal | Binary | Octal | BCD | ASCII (value + 0x30) |
|---|---|---|---|---|---|
| 0 | 0 | 0000 | 0 | 0000 | 0 (0x30) |
| 1 | 1 | 0001 | 1 | 0001 | 1 (0x31) |
| 2 | 2 | 0010 | 2 | 0010 | 2 (0x32) |
| 3 | 3 | 0011 | 3 | 0011 | 3 (0x33) |
| 4 | 4 | 0100 | 4 | 0100 | 4 (0x34) |
| 5 | 5 | 0101 | 5 | 0101 | 5 (0x35) |
| 6 | 6 | 0110 | 6 | 0110 | 6 (0x36) |
| 7 | 7 | 0111 | 7 | 0111 | 7 (0x37) |
| 8 | 8 | 1000 | 10 | 1000 | 8 (0x38) |
| 9 | 9 | 1001 | 11 | 1001 | 9 (0x39) |
| A | 10 | 1010 | 12 | - | : (0x3A) |
| B | 11 | 1011 | 13 | - | ; (0x3B) |
| C | 12 | 1100 | 14 | - | < (0x3C) |
| D | 13 | 1101 | 15 | - | = (0x3D) |
| E | 14 | 1110 | 16 | - | > (0x3E) |
| F | 15 | 1111 | 17 | - | ? (0x3F) |
| **Common Hex Patterns** | | | | | |
| FF | 255 | 11111111 | 377 | - | Max unsigned byte |
| 7F | 127 | 01111111 | 177 | - | Max signed byte |
| 80 | 128 | 10000000 | 200 | - | Min signed byte (−128) |
| FFFF | 65535 | 16 ones | 177777 | - | Max uint16 |
| DEAD | 57005 | 1101111010101101 | 157255 | - | Debug marker |
| BEEF | 48879 | 1011111011101111 | 137357 | - | Debug marker |
| CAFE | 51966 | 1100101011111110 | 145376 | - | Java class magic (0xCAFEBABE) |
| FFFFFFFF | 4294967295 | 32 ones | - | - | Max uint32 |
| 7FFFFFFF | 2147483647 | 0 + 31 ones | - | - | Max int32 |
| 3F800000 | 1065353216 | - | - | - | IEEE 754 float: 1.0 |
| 40490FDB | 1078530011 | - | - | - | IEEE 754 float: π |
| 7F800000 | 2139095040 | - | - | - | IEEE 754 float: +∞ |