
About

Every character transmitted over a network or stored in memory maps to a numeric code point defined by the ASCII standard (ANSI X3.4-1986) or its Unicode superset. Misinterpreting these mappings causes encoding corruption, broken file parsing, and protocol failures. This tool converts each input character to its integer representation across four bases: decimal (d), hexadecimal (h), octal (o), and binary (b). It handles the full standard ASCII range 0-127 and extended Unicode code points up to 65535 (the Basic Multilingual Plane, BMP). Note: surrogate pairs for code points above U+FFFF are reported as two separate 16-bit values, not as a single scalar.
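As an illustration of a BMP code point beyond standard ASCII, the Euro sign (U+20AC) can be read and rendered in all four bases with plain JavaScript:

```javascript
// The Euro sign is a single 16-bit BMP code point, so charCodeAt
// returns its full scalar value.
const d = "€".charCodeAt(0);
console.log(d);              // 8364
console.log(d.toString(16)); // "20ac"
console.log(d.toString(8));  // "20254"
console.log(d.toString(2));  // "10000010101100"
```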


Formulas

The conversion from a character to its integer code point is a direct lookup operation defined by the encoding standard. For any character c in a string S at position i, the decimal integer value is:

d = charCodeAt(S, i)
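A minimal JavaScript sketch of this lookup (variable names are illustrative):

```javascript
// Decimal code point d for each position i in the string S.
const S = "Hi";
const codes = [];
for (let i = 0; i < S.length; i++) {
  codes.push(S.charCodeAt(i)); // d = charCodeAt(S, i)
}
console.log(codes); // [72, 105]
```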

Base conversions from the decimal value d follow standard positional notation. For hexadecimal (base 16):

h = d.toString(16)

For octal (base 8):

o = d.toString(8)

For binary (base 2):

b = d.toString(2)
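All three base conversions can be demonstrated together for a single character, here the letter A:

```javascript
// Render one decimal code point in the three other bases
// via Number.prototype.toString(radix).
const d = "A".charCodeAt(0); // 65
const h = d.toString(16);    // "41"
const o = d.toString(8);     // "101"
const b = d.toString(2);     // "1000001"
console.log(h, o, b);
```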

Binary output is zero-padded to the nearest byte boundary (8, 16, or 32 bits) to reflect actual memory representation. The padding width w is calculated as:

w = ceil(log2(d + 1) ÷ 8) × 8

Where d = decimal code point, h = hexadecimal string, o = octal string, b = binary string, w = padding width in bits, S = input string, i = character index.
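The padding formula and binary conversion combine into one small helper; the function name toPaddedBinary is a hypothetical sketch, not part of the tool's published API:

```javascript
// Hypothetical helper: binary string zero-padded to the byte
// boundary w = ceil(log2(d + 1) / 8) * 8.
function toPaddedBinary(d) {
  // For d = 0 (NUL) the formula yields 0 bits, so fall back to one byte.
  const w = Math.ceil(Math.log2(d + 1) / 8) * 8 || 8;
  return d.toString(2).padStart(w, "0");
}
console.log(toPaddedBinary(65));   // "01000001" (8 bits)
console.log(toPaddedBinary(8364)); // "0010000010101100" (16 bits)
```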

Reference Data

Character  Name                 Decimal  Hex     Octal  Binary            Category
NUL        Null                 0        0x00    000    00000000          Control
TAB        Horizontal Tab       9        0x09    011    00001001          Control
LF         Line Feed            10       0x0A    012    00001010          Control
CR         Carriage Return      13       0x0D    015    00001101          Control
SP         Space                32       0x20    040    00100000          Whitespace
!          Exclamation Mark     33       0x21    041    00100001          Punctuation
0          Digit Zero           48       0x30    060    00110000          Digit
9          Digit Nine           57       0x39    071    00111001          Digit
A          Latin Capital A      65       0x41    101    01000001          Uppercase
Z          Latin Capital Z      90       0x5A    132    01011010          Uppercase
a          Latin Small A        97       0x61    141    01100001          Lowercase
z          Latin Small Z        122      0x7A    172    01111010          Lowercase
{          Left Curly Bracket   123      0x7B    173    01111011          Punctuation
}          Right Curly Bracket  125      0x7D    175    01111101          Punctuation
~          Tilde                126      0x7E    176    01111110          Punctuation
DEL        Delete               127      0x7F    177    01111111          Control
©          Copyright Sign       169      0xA9    251    10101001          Symbol
£          Pound Sign           163      0xA3    243    10100011          Currency
€          Euro Sign            8364     0x20AC  20254  0010000010101100  Currency
π          Greek Small Pi       960      0x03C0  1700   0000001111000000  Greek
@          Commercial At        64       0x40    100    01000000          Punctuation
#          Number Sign          35       0x23    043    00100011          Punctuation
\          Reverse Solidus      92       0x5C    134    01011100          Punctuation
/          Solidus              47       0x2F    057    00101111          Punctuation
&          Ampersand            38       0x26    046    00100110          Punctuation
=          Equals Sign          61       0x3D    075    00111101          Punctuation

Frequently Asked Questions

What is the difference between ASCII and Unicode?

ASCII defines 128 characters (code points 0-127) using 7 bits per character. Unicode is a superset that extends this to over 1.1 million code points. The first 128 Unicode code points are identical to ASCII. This tool uses JavaScript's charCodeAt(), which returns UTF-16 code units, so all standard ASCII values are preserved exactly while extended characters (like € at code point 8364) are also supported.

Why is the binary output zero-padded?

Binary output is zero-padded to the nearest 8-bit boundary to reflect actual byte representation. Standard ASCII characters (0-127) fit in 8 bits. Extended characters (128-255) also use 8 bits. Characters with code points above 255 (e.g., € = 8364) require 16 bits. This padding matches how systems actually store these values in memory.

How are control characters handled?

Control characters (code points 0-31 and 127) are non-printable. The tool detects them and displays their standard abbreviation (e.g., TAB for code point 9, LF for 10, CR for 13, NUL for 0) instead of rendering an invisible glyph. The integer values are computed identically to printable characters.

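The detection described above might be sketched as follows; the CONTROL_NAMES map and displayChar function are hypothetical names, abbreviated to a handful of entries:

```javascript
// Hypothetical lookup for a few control-character abbreviations.
const CONTROL_NAMES = { 0: "NUL", 9: "TAB", 10: "LF", 13: "CR", 127: "DEL" };

function displayChar(ch) {
  const d = ch.charCodeAt(0);
  const isControl = d <= 31 || d === 127;
  return isControl ? (CONTROL_NAMES[d] || "CTRL") : ch;
}

console.log(displayChar("\t")); // "TAB"
console.log(displayChar("A"));  // "A"
```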
Why do some emoji produce two values?

Characters above U+FFFF (code point 65535), such as most emoji, are encoded in UTF-16 as surrogate pairs: two 16-bit code units. JavaScript's charCodeAt() returns each surrogate independently, so a single emoji will appear as two entries (high surrogate in range 0xD800-0xDBFF, low surrogate in range 0xDC00-0xDFFF). This is a limitation of the UTF-16 encoding model, not a bug.

Can I use these values directly in source code?

Yes. Decimal values can be used directly in most languages (e.g., chr(65) in Python returns 'A'). Hexadecimal values prefixed with 0x work in C, Java, JavaScript, and Python. Octal values prefixed with 0o work in Python 3 and JavaScript strict mode. Binary values prefixed with 0b work in Python, Java 7+, and modern JavaScript.

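In JavaScript, for example, all four literal forms of the same code point round-trip back to the character:

```javascript
// The same code point expressed with each numeric literal prefix.
console.log(String.fromCharCode(65));        // "A" (decimal)
console.log(String.fromCharCode(0x41));      // "A" (hexadecimal)
console.log(String.fromCharCode(0o101));     // "A" (octal)
console.log(String.fromCharCode(0b1000001)); // "A" (binary)
```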
How does the compact output work?

The compact output concatenates all integer values for the entire input string using the selected delimiter. For example, the text "Hi" with a space delimiter in decimal mode produces "72 105". With a comma delimiter it produces "72,105". This is useful for generating arrays or protocol-compliant byte sequences.
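A sketch of that behavior, under the assumption that the tool simply joins per-code-unit values (the function name compactOutput is illustrative):

```javascript
// Hypothetical compact output: join every UTF-16 code unit of the
// input, rendered in the chosen base, with the chosen delimiter.
function compactOutput(s, delimiter = " ", base = 10) {
  const parts = [];
  for (let i = 0; i < s.length; i++) {
    parts.push(s.charCodeAt(i).toString(base));
  }
  return parts.join(delimiter);
}

console.log(compactOutput("Hi"));      // "72 105"
console.log(compactOutput("Hi", ",")); // "72,105"
```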