About

Every character rendered on a screen maps to a numeric code point defined by encoding standards. ASCII (American Standard Code for Information Interchange) assigns integers 0 - 127 to control characters, digits, Latin letters, and punctuation. Misreading a single code point turns 72 101 108 108 111 into garbage instead of "Hello." This tool parses sequences of decimal code values separated by spaces, commas, or newlines, validates each against the range 0 - 1114111 (full Unicode), and maps them to their corresponding characters via String.fromCodePoint. It flags out-of-range or non-numeric tokens individually so you can fix errors without losing the rest of the conversion.
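The parse-validate-map pipeline described above can be sketched as follows. This is a minimal illustration, not the tool's actual implementation; the function name and the shape of the error objects are assumptions.

```javascript
// Sketch: split input on spaces, commas, or newlines, validate each token
// against 0–1114111, and map valid values via String.fromCodePoint.
// Bad tokens are collected instead of aborting the whole conversion.
function decimalCodesToText(input) {
  const tokens = input.split(/[\s,]+/).filter(t => t.length > 0);
  const chars = [];
  const errors = [];
  tokens.forEach((token, index) => {
    const n = Number(token);
    if (!Number.isInteger(n) || n < 0 || n > 0x10FFFF) {
      errors.push({ index, token });      // flagged individually
    } else {
      chars.push(String.fromCodePoint(n)); // code point → character
    }
  });
  return { text: chars.join(''), errors };
}
```

For example, `decimalCodesToText("72 101 108 108 111")` yields `"Hello"` with an empty error list, while a stray `"abc"` token would be reported with its position but would not block the other conversions.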

Common use cases include decoding obfuscated log output, reversing numeric payloads in CTF challenges, and restoring data from systems that export character codes instead of raw text. The tool assumes decimal input by default. Note: control characters in the range 0 - 31 (except 9, 10, 13) produce non-printable output and are displayed as placeholder symbols.


Formulas

The conversion from a decimal ASCII code to its character representation uses the code point mapping function:

char = fromCodePoint(n)

where n is an integer satisfying 0 ≤ n ≤ 1114111 (the maximum Unicode code point, 0x10FFFF). For strict ASCII, the valid domain is 0 ≤ n ≤ 127.

When converting a full string back to ASCII codes, each character is mapped by the inverse function:

n = codePointAt(char, 0)
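The two mappings are mutual inverses for any single code point, which a short round-trip check illustrates (the snowman example is illustrative):

```javascript
// Round trip: char = fromCodePoint(n), then n = codePointAt(char, 0).
const n = 9731;                      // ☃ SNOWMAN (U+2603)
const ch = String.fromCodePoint(n);  // forward mapping
const back = ch.codePointAt(0);      // inverse mapping
// back === 9731, so the conversion is lossless for a single code point.
```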

The delimiter auto-detection algorithm applies the following priority ruleset:

delimiter = comma      if input contains ","
            newline    if input contains "\n"
            space      otherwise (default)

In the formulas above, char is any valid Unicode character, and n is its corresponding decimal code point value.
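The priority ruleset can be sketched as a small helper that returns a split pattern (the regex padding around commas and the CRLF handling are assumptions about reasonable behavior, not a description of the tool's exact code):

```javascript
// Priority: comma beats newline beats whitespace (the default).
function detectDelimiter(input) {
  if (input.includes(','))  return /\s*,\s*/; // comma, optionally padded
  if (input.includes('\n')) return /\r?\n/;   // newline (LF or CRLF)
  return /\s+/;                               // whitespace default
}

// Usage: input.split(detectDelimiter(input)) yields the raw tokens.
```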

Reference Data

Dec      Hex    Char   Description
0        00     NUL    Null
7        07     BEL    Bell / Alert
9        09     TAB    Horizontal Tab
10       0A     LF     Line Feed (newline)
13       0D     CR     Carriage Return
27       1B     ESC    Escape
32       20     SP     Space
33       21     !      Exclamation Mark
34       22     "      Double Quote
35       23     #      Hash / Number Sign
39       27     '      Apostrophe
40       28     (      Left Parenthesis
42       2A     *      Asterisk
43       2B     +      Plus Sign
44       2C     ,      Comma
45       2D     -      Hyphen / Minus
46       2E     .      Period / Full Stop
47       2F     /      Forward Slash
48-57    30-39  0-9    Digits
58       3A     :      Colon
59       3B     ;      Semicolon
60       3C     <      Less Than
61       3D     =      Equals Sign
62       3E     >      Greater Than
63       3F     ?      Question Mark
64       40     @      At Sign
65-90    41-5A  A-Z    Uppercase Latin Letters
91       5B     [      Left Square Bracket
92       5C     \      Backslash
93       5D     ]      Right Square Bracket
95       5F     _      Underscore
97-122   61-7A  a-z    Lowercase Latin Letters
123      7B     {      Left Curly Brace
124      7C     |      Vertical Bar / Pipe
125      7D     }      Right Curly Brace
126      7E     ~      Tilde
127      7F     DEL    Delete

Frequently Asked Questions

Does this tool only support standard ASCII?
Standard ASCII covers code points 0 - 127. This tool also supports Extended ASCII (128 - 255) and full Unicode up to 1114111 (0x10FFFF). You can toggle strict ASCII mode to reject values above 127.

How does the tool detect which delimiter my input uses?
The parser checks the input for commas first. If found, it splits on commas. If no commas exist, it checks for newline characters. If neither is present, it defaults to splitting on whitespace (spaces and tabs). This covers the three most common formats: "72 101 108", "72,101,108", and values on separate lines.

Why do some codes show placeholder symbols instead of characters?
Code points 0 - 31 and 127 are control characters (NUL, BEL, ESC, DEL, etc.). They have no visible glyph. The tool displays a placeholder symbol (␀-series) for these so you can identify them. Only TAB (9), LF (10), and CR (13) produce recognizable whitespace.

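A common way to render those placeholders is to shift C0 control codes into the Unicode Control Pictures block (U+2400-U+2421); this sketch assumes that approach and assumes TAB, LF, and CR pass through unchanged, as the answer above describes:

```javascript
// Map invisible control characters to visible placeholder glyphs:
// NUL (0) → ␀ (U+2400), ESC (27) → ␛ (U+241B), DEL (127) → ␡ (U+2421).
function toPrintable(n) {
  if (n === 9 || n === 10 || n === 13) return String.fromCodePoint(n); // keep TAB/LF/CR
  if (n >= 0 && n <= 31) return String.fromCodePoint(0x2400 + n);     // ␀ … ␟
  if (n === 127) return '\u2421';                                     // ␡ SYMBOL FOR DELETE
  return String.fromCodePoint(n);                                     // printable as-is
}
```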
Can I convert emoji and other non-ASCII characters back to code points?
Yes. The reverse mode (String → ASCII) handles any Unicode character including emoji, CJK ideographs, and combining marks. Note that emoji like 😀 have code points above 65535 and occupy surrogate pairs in UTF-16. The tool uses codePointAt which correctly resolves these to single code point values.

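The surrogate-pair point is easy to see in plain JavaScript: 😀 (U+1F600) occupies two UTF-16 code units, and only codePointAt resolves the pair to one value.

```javascript
const s = '😀';
console.log(s.length);        // 2 — UTF-16 code units, not characters
console.log(s.charCodeAt(0)); // 55357 — the high surrogate alone
console.log(s.codePointAt(0)); // 128512 — the full code point (0x1F600)

// Iterating with spread (or for…of) walks code points, not code units:
const codes = [...'Hi😀'].map(c => c.codePointAt(0)); // [72, 105, 128512]
```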
What happens if my input contains invalid values?
Each token is validated and parsed individually. If a token is not a valid integer (e.g., "abc", "12.5.3", an empty string), it is flagged as an error in the output with its position index. Valid tokens in the same input are still converted normally, so one bad value does not break the entire batch.

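One way to implement that per-token validation is sketched below. A strict digit check is assumed before parseInt, because parseInt alone is lenient about trailing garbage (parseInt("12.5.3") silently returns 12); the result shape is illustrative.

```javascript
// Validate each token; collect an error entry or a converted character.
function parseTokens(tokens) {
  return tokens.map((token, index) => {
    if (!/^\d+$/.test(token)) {
      return { index, token, error: 'not a valid integer' };
    }
    const n = parseInt(token, 10);
    if (n > 0x10FFFF) {
      return { index, token, error: 'out of Unicode range' };
    }
    return { index, char: String.fromCodePoint(n) };
  });
}
```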
Can I enter hexadecimal or octal values instead of decimal?
By default, input is interpreted as decimal (base 10). You can select Hexadecimal (base 16) or Octal (base 8) input mode from the settings. Hex values like "4A 6F" and octal values like "112 157" are converted to their decimal equivalents before character mapping.
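Base selection reduces to passing a radix to parseInt before the code point mapping. A minimal sketch (function name assumed):

```javascript
// Interpret each token in the selected base, then map to characters.
function codesToText(tokens, radix = 10) {
  return tokens
    .map(t => parseInt(t, radix))        // e.g. "4A" base 16 → 74
    .map(n => String.fromCodePoint(n))
    .join('');
}

codesToText(['4A', '6F'], 16);  // hex → "Jo"
codesToText(['112', '157'], 8); // octal → "Jo"
```

Both example inputs decode to the same text because 0x4A = 0o112 = 74 ("J") and 0x6F = 0o157 = 111 ("o").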