
About

Raw server logs are notoriously difficult to read. A single misread ERROR line buried among 50,000 INFO entries can mean hours of wasted debugging or, worse, a missed production incident. This tool parses plain-text log files, classifies each line by severity level (FATAL, ERROR, WARN, INFO, DEBUG, TRACE), and renders them as a color-coded, searchable, filterable HTML document. It handles common formats including Log4j, syslog, and ISO-8601 timestamped output. Multi-line stack traces are grouped with their parent entry.

The parser operates entirely in your browser; no data leaves your machine. Format detection is heuristic, based on pattern matching, so the tool will not correctly parse binary log formats or logs with no recognizable delimiter structure. For logs exceeding 50,000 lines, processing is chunked to prevent the browser tab from freezing. The converted HTML output can be downloaded as a standalone .html file or copied to the clipboard for embedding in incident reports.


Formulas

The parser applies a cascading detection algorithm. Each line is tested against format patterns in priority order. The first match determines the parser used for the entire file.

classify(line) =
    FATAL    if line matches /\bFATAL\b/i
    ERROR    if line matches /\bERROR\b/i
    WARN     if line matches /\bWARN(ING)?\b/i
    INFO     if line matches /\bINFO\b/i
    DEBUG    if line matches /\bDEBUG\b/i
    TRACE    if line matches /\bTRACE\b/i
    UNKNOWN  otherwise
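The cascade above can be sketched directly in JavaScript (a minimal illustration of the formula, not the tool's exact implementation):

```javascript
// Severity classification: test patterns in priority order and
// return the first level that matches, falling back to UNKNOWN.
const SEVERITY_PATTERNS = [
  ["FATAL", /\bFATAL\b/i],
  ["ERROR", /\bERROR\b/i],
  ["WARN",  /\bWARN(ING)?\b/i],
  ["INFO",  /\bINFO\b/i],
  ["DEBUG", /\bDEBUG\b/i],
  ["TRACE", /\bTRACE\b/i],
];

function classify(line) {
  for (const [level, pattern] of SEVERITY_PATTERNS) {
    if (pattern.test(line)) return level;
  }
  return "UNKNOWN";
}

console.log(classify("2024-01-15 10:30:45,123 ERROR [main] Failed")); // ERROR
console.log(classify("no severity keyword here"));                     // UNKNOWN
```

Because the patterns are tested in priority order, a line containing both FATAL and ERROR is classified as FATAL.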

Where line is the raw text content of a single log entry. Multi-line detection groups continuation lines (stack traces) with the preceding classified entry: a line is treated as a continuation if it begins with whitespace, "at ", "Caused by", or "..." followed by a number.
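The continuation-line heuristic can be expressed as a single predicate (a sketch of the rule described above):

```javascript
// Heuristic continuation-line check: a line is grouped with the
// previous entry if it looks like part of a stack trace.
function isContinuation(line) {
  return (
    /^\s+/.test(line) ||         // indented, e.g. "\tat com.app.Foo"
    /^at /.test(line) ||         // Java stack frame
    /^Caused by/.test(line) ||   // chained exception
    /^\.\.\. \d+/.test(line)     // "... 23 more"
  );
}

console.log(isContinuation("\tat com.app.Main.run(Main.java:42)")); // true
```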

Timestamp extraction uses the pattern T = match(line, regex_format), where regex_format is the pattern selected during the initial format-detection pass over the first 20 lines of the file.
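The detection pass and timestamp extraction can be sketched as follows. The two patterns shown are illustrative examples, not the tool's full pattern list:

```javascript
// Format detection sketch: try each timestamp pattern, in priority
// order, against the first 20 lines; the first pattern that matches
// any sampled line is used for the entire file.
const TIMESTAMP_PATTERNS = [
  { name: "log4j",   regex: /^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}/ },
  { name: "iso8601", regex: /^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}/ },
];

function detectFormat(lines) {
  const sample = lines.slice(0, 20);
  for (const format of TIMESTAMP_PATTERNS) {
    if (sample.some((line) => format.regex.test(line))) return format;
  }
  return null; // fall back to keyword-based severity detection
}

// T = match(line, regex_format)
function extractTimestamp(line, format) {
  const m = format && line.match(format.regex);
  return m ? m[0] : null;
}

console.log(detectFormat(["2024-01-15T10:30:45Z starting up"]).name); // iso8601
```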

Reference Data

Log Format | Pattern Example | Timestamp Format | Detection Method
Log4j / Log4j2 | 2024-01-15 10:30:45,123 ERROR [main] com.app.Service - Failed | yyyy-MM-dd HH:mm:ss,SSS | RegExp: date + level + bracket logger
Syslog (RFC 3164) | Jan 15 10:30:45 server01 sshd[1234]: Failed password | MMM dd HH:mm:ss | Month-name prefix + hostname
Syslog (RFC 5424) | <134>1 2024-01-15T10:30:45.123Z host app - - msg | ISO-8601 | Priority prefix <N>
ISO-8601 Generic | 2024-01-15T10:30:45.123Z [ERROR] message here | ISO-8601 with T separator | ISO timestamp + bracket level
Apache Access Log | 127.0.0.1 - - [15/Jan/2024:10:30:45 +0000] "GET /" 200 | CLF format | IP prefix + square-bracket date
Apache Error Log | [Mon Jan 15 10:30:45.123 2024] [error] [pid 1234] msg | Day-of-week prefix | Double square-bracket pattern
Nginx Error | 2024/01/15 10:30:45 [error] 1234#0: *5 msg | yyyy/MM/dd HH:mm:ss | Slash-separated date + bracket level
Python Logging | ERROR:root:Something failed | None (level-first) | LEVEL:logger:message pattern
Spring Boot | 2024-01-15 10:30:45.123 INFO 1234 --- [main] c.a.App : Started | yyyy-MM-dd HH:mm:ss.SSS | Triple dash separator ---
Docker / Kubernetes | {"log":"msg\n","stream":"stderr","time":"2024-01-15T10:30:45Z"} | ISO-8601 in JSON | JSON object with "log" key
Windows Event | Information 1/15/2024 10:30:45 AM Application Event 1000 | US date format | Level-first + US date
Custom / Unknown | Any line containing ERROR, WARN, etc. | Heuristic scan | Keyword search fallback
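To make the detection methods in the table concrete, here are plausible regexes for two of the rows. These are assumed shapes written for illustration, not the tool's exact patterns:

```javascript
// Syslog (RFC 3164): month-name prefix, hostname, process[pid]: message
const SYSLOG_3164 =
  /^([A-Z][a-z]{2}) ( ?\d{1,2}) (\d{2}:\d{2}:\d{2}) (\S+) ([^:\[]+)(?:\[(\d+)\])?: (.*)$/;

// Nginx error log: slash-separated date + bracketed level
const NGINX_ERROR = /^(\d{4}\/\d{2}\/\d{2} \d{2}:\d{2}:\d{2}) \[(\w+)\] (.*)$/;

const m = SYSLOG_3164.exec("Jan 15 10:30:45 server01 sshd[1234]: Failed password");
// m[1..3] = date parts, m[4] = host, m[5] = process, m[6] = pid, m[7] = message
console.log(m[4], m[7]);
```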

Frequently Asked Questions

Which log formats are supported?
The parser recognizes Log4j/Log4j2, Syslog (RFC 3164 and RFC 5424), Apache access and error logs, Nginx error logs, Python logging output, Spring Boot default format, Docker/Kubernetes JSON logs, and Windows Event format. For unrecognized formats, it falls back to keyword-based severity detection by scanning each line for FATAL, ERROR, WARN, INFO, DEBUG, or TRACE tokens.
How are multi-line stack traces handled?
Lines that begin with whitespace, the token "at " (Java stack frames), "Caused by:", or "... N more" are grouped with the preceding log entry. They appear as a collapsible block beneath the parent entry in the HTML output. This grouping is heuristic and may misattribute lines in logs where unrelated content is indented.
How large a file can the tool handle?
The tool processes files up to approximately 50 MB in the browser. Files exceeding 50,000 lines are processed in chunks of 5,000 lines using asynchronous batching to prevent UI-thread blocking. A progress bar indicates processing status. Browser memory limits (typically 1-2 GB per tab) are the hard constraint. Lines exceeding 10,000 characters may cause rendering slowdowns.
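The chunked batching described above can be sketched as follows. The yield mechanism (a zero-delay setTimeout between chunks) is an assumption; any scheduling primitive that returns control to the event loop would do:

```javascript
// Process lines in batches of 5,000, yielding to the event loop
// between batches so rendering and input events can run.
const CHUNK_SIZE = 5000;

async function processInChunks(lines, handleLine, onProgress) {
  for (let start = 0; start < lines.length; start += CHUNK_SIZE) {
    const chunk = lines.slice(start, start + CHUNK_SIZE);
    for (const line of chunk) handleLine(line);
    if (onProgress) {
      onProgress(Math.min(start + CHUNK_SIZE, lines.length) / lines.length);
    }
    // Yield before the next batch; this is what keeps the tab responsive.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
}
```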
Is my log data uploaded to a server?
No. All parsing and rendering occurs locally in your browser using the FileReader API and DOM manipulation. No network requests are made, and the log content never leaves your machine. This makes the tool suitable for processing logs containing sensitive information such as IP addresses, authentication tokens, or PII.
Can I filter by severity and search at the same time?
Yes. The rendered output includes toggle buttons for each severity level (FATAL, ERROR, WARN, INFO, DEBUG, TRACE, UNKNOWN), and you can enable or disable any combination. The search bar performs full-text filtering across all visible entries. Both filters work in combination: search applies only within the currently visible severity levels.
What does the downloaded HTML file include?
The download produces a self-contained .html file with embedded CSS. It includes the color-coded log table, line numbers, and severity labels. It does not include the interactive filter or search features; it is a static snapshot. The file can be opened in any browser, attached to incident reports, or printed. Print styles optimize for A4 paper.