
About

IEEE 754 is the universal standard governing floating-point arithmetic in virtually all modern processors. A single misinterpreted bit in the exponent field shifts the decoded value by orders of magnitude. This converter decodes raw binary strings into their decimal floating-point equivalents for both 32-bit single precision (bias 127, 8-bit exponent, 23-bit mantissa) and 64-bit double precision (bias 1023, 11-bit exponent, 52-bit mantissa). It correctly handles all five IEEE 754 classes: normalized numbers, denormalized (subnormal) numbers, positive and negative zero, infinities, and NaN (Not a Number).

The tool assumes the input is a complete IEEE 754 bit pattern. It does not convert arbitrary binary integers or binary fractions into floats. If you supply 32 bits, single precision is assumed; if you supply 64 bits, double precision is assumed. Partial inputs are zero-padded on the right. The tool introduces no approximation error of its own: the decoded decimal value is exactly the value the bit pattern encodes per the standard.
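The decoding step described above can be sketched in a few lines of JavaScript using a DataView to reinterpret the bits (bitsToFloat32 is an illustrative name, not the tool's internal API):

```javascript
// Reinterpret a 32-character bit string as an IEEE 754 single.
// bitsToFloat32 is an illustrative helper, not the tool's actual code.
function bitsToFloat32(bits) {
  const u32 = parseInt(bits, 2) >>> 0;          // bit string -> unsigned integer
  const view = new DataView(new ArrayBuffer(4));
  view.setUint32(0, u32);                       // store big-endian: sign bit first
  return view.getFloat32(0);                    // read the same bytes back as a float
}

// 0 01111111 10000000000000000000000 encodes +1.5
console.log(bitsToFloat32("00111111110000000000000000000000")); // 1.5
```

The same approach extends to 64-bit inputs with BigInt, setBigUint64, and getFloat64.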


Formulas

The IEEE 754 standard encodes a floating-point number using three fields: a sign bit s, a biased exponent e, and a mantissa (significand) m. The general decoding formula for normalized numbers is:

value = (−1)^s × 2^(e − bias) × (1 + Σ_{i=1}^{N} b_i × 2^(−i))

For denormalized (subnormal) numbers where all exponent bits are zero:

value = (−1)^s × 2^(1 − bias) × (0 + Σ_{i=1}^{N} b_i × 2^(−i))

Where s is the sign bit (0 = positive, 1 = negative), e is the unsigned integer value of the exponent field, bias is 127 for 32-bit or 1023 for 64-bit, b_i is the i-th bit of the mantissa field, and N is the number of mantissa bits (23 for single, 52 for double). The implicit leading 1 (normalized) or 0 (denormalized) is the "hidden bit" convention, which grants one extra bit of precision without storage cost.
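These formulas translate directly into code. Below is a minimal JavaScript transcription (our own illustration, not the converter's internal implementation) that accepts a 32- or 64-bit string:

```javascript
// Direct transcription of the decoding formulas above (an illustrative
// sketch, not the converter's internal code). Accepts a 32- or 64-bit string.
function decodeBits(bits) {
  const expBits = bits.length === 32 ? 8 : 11;
  const bias = bits.length === 32 ? 127 : 1023;
  const s = bits[0] === "1" ? -1 : 1;
  const e = parseInt(bits.slice(1, 1 + expBits), 2);
  const mantissa = bits.slice(1 + expBits);

  // Fraction term: sum of b_i * 2^(-i) for i = 1..N
  let frac = 0;
  for (let i = 0; i < mantissa.length; i++) {
    if (mantissa[i] === "1") frac += Math.pow(2, -(i + 1));
  }

  const maxExp = (1 << expBits) - 1;
  if (e === maxExp) return frac === 0 ? s * Infinity : NaN; // infinities / NaN
  if (e === 0) return s * Math.pow(2, 1 - bias) * frac;     // denormalized
  return s * Math.pow(2, e - bias) * (1 + frac);            // normalized
}

console.log(decodeBits("00111111110000000000000000000000")); // 1.5
```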

Reference Data

Category | Exponent Bits | Mantissa Bits | Condition | Decoded Value
Positive Zero | All 0 | All 0 | s = 0 | +0.0
Negative Zero | All 0 | All 0 | s = 1 | −0.0
Denormalized (Subnormal) | All 0 | ≠ 0 | Implicit leading 0 | (−1)^s × 2^(1 − bias) × 0.mantissa
Normalized | 0 < e < max | Any | Implicit leading 1 | (−1)^s × 2^(e − bias) × 1.mantissa
Positive Infinity | All 1 | All 0 | s = 0 | +∞
Negative Infinity | All 1 | All 0 | s = 1 | −∞
Signaling NaN | All 1 | MSB = 0, rest ≠ 0 | Raises exception | NaN (signaling)
Quiet NaN | All 1 | MSB = 1 | Propagates silently | NaN (quiet)
Single Precision (float32) | 8 bits [30:23] | 23 bits [22:0] | Bias = 127 | Range: ±3.4028235 × 10^38
Double Precision (float64) | 11 bits [62:52] | 52 bits [51:0] | Bias = 1023 | Range: ±1.7976931 × 10^308
Smallest Positive Normal (f32) | 00000001 | All 0 | e = 1 | 1.17549435 × 10^−38
Largest Positive Normal (f32) | 11111110 | All 1 | e = 254 | 3.40282347 × 10^38
Smallest Positive Subnormal (f32) | All 0 | 000...001 | Denormalized | 1.40129846 × 10^−45
Machine Epsilon (f32) | N/A | N/A | 2^−23 | 1.19209290 × 10^−7
Machine Epsilon (f64) | N/A | N/A | 2^−52 | 2.22044605 × 10^−16
Smallest Positive Subnormal (f64) | All 0 | 000...001 | Denormalized | 4.94065646 × 10^−324
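The special-value rows of the table can be checked mechanically. A small classifier sketch in JavaScript (the helper name is our own):

```javascript
// Classify a 32- or 64-bit pattern into the categories tabulated above
// (illustrative helper; the name is our own, not the tool's API).
function classify(bits) {
  const expBits = bits.length === 32 ? 8 : 11;
  const exp = bits.slice(1, 1 + expBits);
  const man = bits.slice(1 + expBits);
  const manZero = !man.includes("1");
  if (!exp.includes("1")) return manZero ? "zero" : "subnormal"; // exponent all 0
  if (!exp.includes("0")) {                                      // exponent all 1
    if (manZero) return "infinity";
    return man[0] === "1" ? "quiet NaN" : "signaling NaN";       // mantissa MSB decides
  }
  return "normalized";
}

console.log(classify("01111111110000000000000000000000")); // "quiet NaN"
```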

Frequently Asked Questions

What happens when all exponent bits are 0 but the mantissa is non-zero?

When every exponent bit is 0 but the mantissa is non-zero, the number is denormalized (subnormal). The implicit leading bit becomes 0 instead of 1, and the exponent is fixed at 1 − bias (i.e., −126 for single precision). This allows representation of values smaller than the minimum normalized float, filling the gap between zero and the smallest normal number.
What is the difference between a quiet NaN and a signaling NaN?

Both have all exponent bits set to 1 and a non-zero mantissa. The distinction lies in the most significant bit of the mantissa field. If that bit is 1, the NaN is quiet (qNaN) and propagates through arithmetic without raising exceptions. If it is 0 (with at least one other mantissa bit set), it is a signaling NaN (sNaN) that triggers an invalid-operation exception when used in computation. Note that this convention (defined in IEEE 754-2008) is followed by x86 and ARM, but historically some MIPS processors used the opposite convention.
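You can observe the quiet bit on a NaN that JavaScript itself produces. A sketch that inspects bit 51 of a double (on x86 and ARM, engine-generated NaNs are quiet, so the bit reads 1):

```javascript
// Inspect the mantissa MSB (bit 51) of a NaN generated by JavaScript.
// On x86 and ARM, generated NaNs are quiet, so this bit reads 1.
const view = new DataView(new ArrayBuffer(8));
view.setFloat64(0, 0 / 0);          // NaN from an invalid operation
const hi = view.getUint32(0);       // top 32 bits (sign, exponent, mantissa MSB)
const quietBit = (hi >>> 19) & 1;   // bit 51 of the double sits at bit 19 of hi
console.log(quietBit);              // 1 on x86/ARM
```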
Why does negative zero exist?

Negative zero (1 followed by all zeros) arises naturally from operations like dividing a negative number by positive infinity, or rounding a tiny negative value toward zero. Arithmetically, +0 = −0 evaluates to true, but they are distinct bit patterns. The sign is preserved to maintain correct branch behavior in functions like atan2 and to ensure 1 ÷ −0 = −∞ while 1 ÷ +0 = +∞.
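These negative-zero behaviors are directly observable in JavaScript:

```javascript
// Negative zero compares equal to +0 but is a distinct bit pattern.
console.log(-0 === 0);            // true
console.log(Object.is(-0, 0));    // false: Object.is distinguishes them
console.log(1 / -0);              // -Infinity
console.log(1 / 0);               // Infinity
console.log(Math.atan2(-0, -1));  // -3.141592653589793, i.e. -PI rather than +PI
```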
Can I enter a bit string that is not exactly 32 or 64 bits long?

Yes. If you input fewer than 32 bits, the converter zero-pads on the right to reach 32 bits and decodes as single precision. If you input between 33 and 64 bits, it zero-pads to 64 bits and decodes as double precision. You can also override the precision mode manually. Inputs longer than 64 bits are truncated to 64.
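The padding rule can be sketched with String.prototype.padEnd (padBits is an illustrative name; the width rule mirrors the description above):

```javascript
// Zero-pad on the right to 32 or 64 bits, truncating anything past 64.
// padBits is an illustrative helper, not the tool's API.
function padBits(bits) {
  const width = bits.length <= 32 ? 32 : 64;    // single vs. double precision
  return bits.slice(0, 64).padEnd(width, "0");  // truncate, then pad right
}

console.log(padBits("0100000011").length); // 32
```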
How many significant digits does the output preserve?

The converter uses JavaScript's native 64-bit double-precision arithmetic for the final calculation, which provides approximately 15 to 17 significant decimal digits. For single-precision inputs, this exceeds the 7 to 8 significant digits of float32, so no precision is lost in decoding. For double-precision inputs, the output is the exact round-trip decimal representation of the bit pattern. The tool displays up to 20 significant digits.
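The claim that single-precision decoding loses nothing can be demonstrated: every float32 value is exactly representable as a float64, so the round trip is lossless:

```javascript
// Every float32 value is exactly representable as a float64,
// so decoding through doubles loses nothing.
const f32 = Math.fround(0.1);                  // 0.1 rounded to single precision
const buf = new DataView(new ArrayBuffer(4));
buf.setFloat32(0, f32);
console.log(buf.getFloat32(0) === f32);        // true: exact round trip
console.log(f32 === 0.1);                      // false: float32 cannot hold 0.1 exactly
```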
What byte order does the converter expect?

This tool interprets the binary string in big-endian (network byte order) format: the leftmost bit is the sign bit, followed by the exponent, then the mantissa. If your source data is in little-endian byte order (common in x86 memory dumps), you must reverse the byte order before input. For example, a 32-bit value stored as bytes B3 B2 B1 B0 in little-endian must be entered as B0 B1 B2 B3 in binary.
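Reversing the byte order before input can be done with a one-liner (swapBytes32 is an illustrative helper name):

```javascript
// Reverse the byte order of a 32-bit pattern
// (little-endian memory dump -> big-endian tool input).
// swapBytes32 is an illustrative helper name.
function swapBytes32(bits) {
  return bits.match(/.{8}/g).reverse().join(""); // split into 4 bytes, reverse
}

// 1.5 stored little-endian, bytes reversed back to big-endian:
console.log(swapBytes32("00000000000000001100000000111111"));
// "00111111110000000000000000000000"
```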