Decimal to Two's Complement Converter
Convert decimal numbers to two's complement binary and back. Supports 4, 8, 16, 32, and 64-bit widths with step-by-step breakdown.
About
Two's complement is the standard encoding for signed integers in virtually every modern CPU architecture. A single misinterpreted sign bit propagates through arithmetic pipelines and corrupts downstream results - buffer overflows, incorrect sensor readings, and financial miscalculations all trace back to this error class. This tool converts between decimal integers and their two's complement binary representation across 4, 8, 16, 32, and 64-bit widths. It validates that your value falls within the representable range [−2^(n−1), 2^(n−1) − 1] and generates a step-by-step breakdown of the conversion process.
The converter handles both directions: decimal to two's complement and two's complement back to decimal. It assumes standard binary positional notation with the most significant bit (MSB) serving as the sign bit. Limitation: this tool operates on fixed-width integers only. Floating-point representations (IEEE 754) require a different encoding scheme not covered here. Pro Tip: when debugging embedded firmware, always verify your compiler's assumed word width matches your target - a 16-bit int on an AVR is not the same as a 32-bit int on ARM Cortex.
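The range validation described above can be sketched in a few lines. This is an illustrative Python helper (the name `in_range` is an assumption, not part of the tool):

```python
def in_range(d: int, n: int) -> bool:
    """Check whether d fits in an n-bit two's complement integer."""
    return -(1 << (n - 1)) <= d <= (1 << (n - 1)) - 1

# 127 fits in 8 bits, 128 does not
assert in_range(127, 8) and not in_range(128, 8)
```

Using shifts (`1 << (n - 1)`) instead of `2 ** (n - 1)` keeps the bounds exact for any width, including 64-bit.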
Formulas
Two's complement encoding for an n-bit signed integer maps a decimal value d to a binary string B of exactly n bits. The representable range is constrained to:

−2^(n−1) ≤ d ≤ 2^(n−1) − 1
For non-negative values (d ≥ 0), the binary representation is the standard positional expansion zero-padded to n bits:

d = Σ b_i · 2^i for i = 0 … n−2, with the sign bit b_{n−1} = 0
For negative values (d < 0), the encoding applies three steps:

1. Write |d| in binary, zero-padded to n bits.
2. Invert every bit (ones' complement).
3. Add 1, discarding any carry out of bit n − 1.

Equivalently, the resulting bit pattern is the unsigned value d + 2^n.
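The invert-and-add-one procedure for negative values can be sketched as follows (Python; the function name `encode` is illustrative):

```python
def encode(d: int, n: int) -> str:
    """Two's complement encoding of d as an n-bit binary string."""
    if not -(1 << (n - 1)) <= d <= (1 << (n - 1)) - 1:
        raise ValueError(f"{d} does not fit in {n} bits")
    if d >= 0:
        return format(d, f"0{n}b")                 # zero-padded positional expansion
    magnitude = format(-d, f"0{n}b")               # step 1: |d| in binary
    inverted = "".join("1" if b == "0" else "0" for b in magnitude)  # step 2: flip bits
    return format(int(inverted, 2) + 1, f"0{n}b")  # step 3: add 1
```

For example, `encode(-42, 8)` walks through `00101010` → `11010101` → `11010110`, matching the reference table below.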
The reverse conversion (binary to decimal) inspects the MSB (bit n − 1):

- If b_{n−1} = 0, read B as an ordinary unsigned integer.
- If b_{n−1} = 1, the value is negative: d = unsigned(B) − 2^n.
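The MSB test above amounts to one subtraction. A minimal Python sketch (the name `decode` is an assumption):

```python
def decode(bits: str) -> int:
    """Interpret a two's complement bit string as a signed integer."""
    n = len(bits)
    unsigned = int(bits, 2)
    # An MSB of 1 means the pattern represents unsigned(B) - 2^n
    return unsigned - (1 << n) if bits[0] == "1" else unsigned
```

For example, `decode("11010110")` reads the unsigned value 214 and returns 214 − 256 = −42.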
The weighted sum formula expresses the decimal value directly from bit positions:

d = −b_{n−1} · 2^(n−1) + Σ b_i · 2^i for i = 0 … n−2

where b_i is the bit at position i (0 or 1), n is the total bit width, and b_{n−1} is the sign bit carrying negative weight.
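The weighted sum can be evaluated directly, giving the sign bit its negative weight. An illustrative sketch (the name `weighted_sum` is not part of the tool):

```python
def weighted_sum(bits: str) -> int:
    """Decimal value of a two's complement string via the weighted sum formula."""
    n = len(bits)
    total = -int(bits[0]) * (1 << (n - 1))        # sign bit: weight -2^(n-1)
    for i, b in enumerate(bits[1:], start=1):
        total += int(b) * (1 << (n - 1 - i))      # remaining bits: positive weights
    return total
```

Applied to `"11010110"`: −128 + 64 + 16 + 4 + 2 = −42, agreeing with the subtraction method above.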
Reference Data
| Bit Width | Min Value | Max Value | Unsigned Max | Total Values | Common Use |
|---|---|---|---|---|---|
| 4-bit | −8 | 7 | 15 | 16 | Nibble, BCD digits |
| 8-bit | −128 | 127 | 255 | 256 | int8_t, Java byte |
| 16-bit | −32,768 | 32,767 | 65,535 | 65,536 | int16_t, short |
| 32-bit | −2,147,483,648 | 2,147,483,647 | 4,294,967,295 | 4,294,967,296 | int32_t, C int |
| 64-bit | −9,223,372,036,854,775,808 | 9,223,372,036,854,775,807 | 18,446,744,073,709,551,615 | 18,446,744,073,709,551,616 | int64_t, long long |
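Every column of the table above follows from the bit width alone. A quick Python sketch that recomputes the bounds:

```python
# Recompute the two's complement range table from the bit width n
for n in (4, 8, 16, 32, 64):
    min_val = -(1 << (n - 1))      # -2^(n-1)
    max_val = (1 << (n - 1)) - 1   # 2^(n-1) - 1
    unsigned_max = (1 << n) - 1    # 2^n - 1
    total = 1 << n                 # 2^n distinct bit patterns
    print(f"{n}-bit: [{min_val}, {max_val}], unsigned max {unsigned_max}, {total} values")
```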
Common Two's Complement Values (8-bit)

| Decimal | Binary | Hex | Notes |
|---|---|---|---|
| 0 | 0000 0000 | 0x00 | Zero |
| 1 | 0000 0001 | 0x01 | Smallest positive |
| 127 | 0111 1111 | 0x7F | Largest positive (8-bit) |
| −1 | 1111 1111 | 0xFF | All bits set |
| −2 | 1111 1110 | 0xFE | |
| −128 | 1000 0000 | 0x80 | Most negative (8-bit) |
| 42 | 0010 1010 | 0x2A | ASCII asterisk |
| −42 | 1101 0110 | 0xD6 | |
| 100 | 0110 0100 | 0x64 | |
| −100 | 1001 1100 | 0x9C | |
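In Python, the 8-bit pattern of any in-range value, positive or negative, can be recovered by masking with 0xFF, which makes the table rows easy to spot-check:

```python
# Masking with 0xFF yields the 8-bit two's complement pattern of d
rows = {0: "00000000", 1: "00000001", 127: "01111111",
        -1: "11111111", -128: "10000000", -42: "11010110", -100: "10011100"}
for d, bits in rows.items():
    assert format(d & 0xFF, "08b") == bits
    assert format(d & 0xFF, "02X") == f"{int(bits, 2):02X}"
```

The mask works because Python integers are arbitrary-precision: `d & 0xFF` computes d mod 2^8, which is exactly the two's complement identity d + 2^n for negative d.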