
About

Unminified JavaScript inflates page load time. Every unnecessary byte - comments, whitespace, blank lines - costs latency. A 200KB bundle with verbose formatting can often shrink by 30 - 60% through safe comment and whitespace stripping alone. This tool performs real, string-aware minification directly in your browser. It tokenizes your source to distinguish code from string literals and regex patterns, ensuring content inside quotes or template literals remains untouched. No server round-trip. No data leaves your machine.

Note: this tool performs safe compression (comment removal, whitespace collapsing, blank line elimination). It does not rename variables or perform dead-code elimination - those require a full AST parser like Terser or Closure Compiler. For production builds, pair this with a bundler. For quick ad-hoc minification of scripts, config files, or snippets, this handles the job without installing tooling. Results degrade gracefully on malformed input; syntax errors in source will appear in output unchanged.


Formulas

The compression ratio quantifies how much smaller the output is relative to the input:

R = (Soriginal − Scompressed) / Soriginal × 100%

Where R = compression ratio (percentage saved), Soriginal = byte size of the input source, and Scompressed = byte size of the minified output. Byte size is computed via new Blob([str]).size to account for multi-byte UTF-8 characters accurately.
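The formula above can be sketched directly in code. This runs in the browser or in Node ≥ 18 (where Blob is a global); the function names are illustrative, not the tool's actual API:

```javascript
// Byte size via the Blob API: counts UTF-8 encoded bytes, so multi-byte
// characters (emoji, CJK, accented letters) are measured accurately.
function byteSize(str) {
  return new Blob([str]).size;
}

// Compression ratio R: percentage of bytes saved relative to the input.
function compressionRatio(original, compressed) {
  const sOrig = byteSize(original);
  const sComp = byteSize(compressed);
  return ((sOrig - sComp) / sOrig) * 100;
}
```

For example, `byteSize("é")` returns 2, not 1, because "é" occupies two bytes in UTF-8.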

The tokenizer operates as a finite state machine cycling through states:

state { CODE, STRING_SINGLE, STRING_DOUBLE, TEMPLATE, REGEX, LINE_COMMENT, BLOCK_COMMENT }

At each character ci, transitions are evaluated in priority order. Escape sequences (\) skip the next character. When in CODE state and encountering //, the machine enters LINE_COMMENT and discards until newline. For /*, it enters BLOCK_COMMENT and discards until */. Quote characters toggle the corresponding string states. Regex detection uses lookbehind heuristics: a / is treated as regex start if preceded by an operator, keyword, or opening bracket.
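The state machine described above can be sketched as a minimal comment stripper. This is an illustration of the transitions, not the tool's actual implementation: it handles single- and double-quoted strings, template literals, and both comment styles, but omits regex detection for brevity:

```javascript
// Minimal FSM comment stripper. States mirror the list above; REGEX and
// the lookbehind heuristic are omitted to keep the sketch short.
function stripComments(src) {
  const CODE = 0, STR_S = 1, STR_D = 2, TMPL = 3, LINE_C = 4, BLOCK_C = 5;
  let state = CODE, out = "";
  for (let i = 0; i < src.length; i++) {
    const c = src[i], next = src[i + 1];
    if (state === CODE) {
      if (c === "/" && next === "/") { state = LINE_C; i++; continue; }
      if (c === "/" && next === "*") { state = BLOCK_C; i++; continue; }
      if (c === "'") state = STR_S;
      else if (c === '"') state = STR_D;
      else if (c === "`") state = TMPL;
      out += c;
    } else if (state === LINE_C) {
      if (c === "\n") { state = CODE; out += c; } // keep the newline
    } else if (state === BLOCK_C) {
      if (c === "*" && next === "/") { state = CODE; i++; } // discard body
    } else { // inside a string or template literal: pass through verbatim
      out += c;
      if (c === "\\") { out += next ?? ""; i++; continue; } // escape skips next char
      if ((state === STR_S && c === "'") ||
          (state === STR_D && c === '"') ||
          (state === TMPL && c === "`")) state = CODE;
    }
  }
  return out;
}
```

Note how `stripComments("let s = '// not a comment';")` returns its input unchanged: the `//` occurs in STRING_SINGLE state, so no transition to LINE_COMMENT fires.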

Reference Data

Technique | Description | Typical Savings | Risk Level
Single-line comment removal | Strips // comments outside strings | 5 - 15% | None
Multi-line comment removal | Strips /* ... */ blocks outside strings | 5 - 20% | None
Leading/trailing whitespace | Removes indentation and trailing spaces per line | 10 - 25% | None
Blank line removal | Eliminates empty lines between statements | 3 - 10% | None
Multiple space collapsing | Reduces consecutive spaces to a single space | 2 - 5% | None
Newline collapsing | Joins statements onto fewer lines where safe | 5 - 15% | Low
Semicolon normalization | Ensures semicolons before newline removal | 0 - 2% | Low
Variable mangling | Renames local vars to short names (a, b, c) | 10 - 30% | High (requires AST)
Dead code elimination | Removes unreachable branches | 0 - 20% | High (requires AST)
Property mangling | Shortens object property names | 5 - 15% | Very High
Gzip compression (server) | Transfer encoding, not a code transform | 60 - 80% | None (orthogonal)
Brotli compression (server) | Better ratio than Gzip for text assets | 65 - 85% | None (orthogonal)
Tree shaking | Removes unused ES module exports | 10 - 50% | Medium (bundler)
Scope hoisting | Flattens module wrappers | 2 - 5% | Low (bundler)
Console.log removal | Strips debug logging statements | 1 - 5% | Low

Frequently Asked Questions

Will minification break strings or regex literals?
No. The tokenizer tracks parser state across single-quoted strings, double-quoted strings, template literals (backticks), and regex literals. It only strips comments and whitespace that occur in CODE state. Content inside any quoted context passes through unchanged. However, extremely unusual regex patterns with unescaped characters may rarely cause issues - always test minified output before deploying.

Why doesn't this tool rename variables?
Variable renaming (mangling) requires constructing a full Abstract Syntax Tree (AST) to understand scope chains, closures, and reference relationships. A regex/tokenizer approach cannot safely determine which identifiers are local vs. global. Incorrect mangling breaks code silently. This tool prioritizes safety: it performs only transformations guaranteed to preserve semantics - comment removal, whitespace stripping, and blank line elimination.
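A small illustrative example (not from the tool itself) of why scope analysis matters:

```javascript
// `total` is a module-level binding; `count` is a local parameter. To a
// tokenizer, both are bare identifiers - it cannot tell which one is
// safe to shorten. Renaming `total` only here (and not at its
// declaration) would silently break the shared state.
let total = 0;

function report(count) {
  total += count;   // must keep its name unless the declaration is renamed too
  return count * 2; // `count` is local, so an AST-based minifier could rename it
}
```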

How are byte sizes calculated?
Byte sizes are computed using the Blob API: new Blob([text]).size returns the UTF-8 encoded byte count. This is the actual transfer size before any server-side compression (Gzip/Brotli). A file containing only ASCII characters will show 1 byte per character. Files with Unicode (emoji, CJK) will show higher byte counts than character counts.

Does it work on TypeScript or JSX?
Partially. The comment and whitespace removal logic works on any text that uses JavaScript-style comments (// and /* */). TypeScript type annotations and JSX tags will not be stripped or transformed - they will pass through as-is. The output will still be valid TypeScript/JSX with comments and whitespace removed. For full TS/JSX compilation, use tsc or Babel respectively.

Does it generate source maps?
No. The output is a flat minified file. If you need source map support for production debugging, use a build tool like Terser with the --source-map flag. This tool is designed for quick, ad-hoc minification where source maps are unnecessary - config scripts, embedded snippets, or prototyping.

Can it remove console.log statements?
Yes. When enabled, it removes all console method calls: console.log, console.warn, console.error, console.info, console.debug, and console.trace. The regex matches "console." followed by any method name and its argument parentheses. Nested parentheses within arguments are handled up to 3 levels deep. If you need to keep error logging, disable this option.
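A hedged sketch of what such a pattern might look like - this is an assumed regex, not the tool's actual one. It matches `console.<method>(...)` where the arguments may contain parentheses nested two levels deep, plus an optional trailing semicolon:

```javascript
// Assumed pattern, not the tool's exact regex. The alternation allows
// non-paren characters or balanced paren groups, nested twice. Parens
// inside string arguments will still confuse it, which is why this kind
// of stripping remains a heuristic rather than a guarantee.
const CONSOLE_CALL =
  /console\.\w+\((?:[^()]|\((?:[^()]|\([^()]*\))*\))*\)\s*;?/g;

function stripConsole(src) {
  return src.replace(CONSOLE_CALL, "");
}
```

For instance, `stripConsole("console.warn(f(1), g(h(2)));done();")` strips the whole call, including its nested argument parentheses, leaving only `done();`.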