Paste directly or upload a .tsv / .txt file below.

About

Miscounting columns in tab-separated data causes silent failures in database imports, broken ETL pipelines, and corrupted analytics. A single missing tab character shifts every downstream field, producing results that look plausible but are wrong. This tool parses raw TSV input and reports the exact column count per row, flagging inconsistencies where row i has a different field count than row j. It handles edge cases including trailing tabs, empty fields, and BOM-prefixed files. The column count for any row equals ntabs + 1, where ntabs is the number of tab characters on that line.

Note: this tool treats every tab as a delimiter. It does not support quoted fields containing literal tab characters, which are rare in TSV but common in CSV. Even for files of several thousand rows, parsing is effectively instantaneous, since the algorithm is O(n) in the total character count. Pro tip: if your data originated from a spreadsheet export, check for trailing tabs appended after empty trailing columns.
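The per-row rule described above (columns = tabs + 1) can be sketched in a few lines of JavaScript. This is a minimal illustration, not the tool's actual source; the function name `countColumns` is assumed:

```javascript
// Count columns in one TSV row: columns = tabs + 1.
// Sketch only: the real tool also strips a leading BOM and
// normalizes line endings before splitting the input into rows.
function countColumns(row) {
  let tabs = 0;
  for (const ch of row) {
    if (ch === "\t") tabs++;
  }
  return tabs + 1;
}
```

Note that an empty string still reports one column under this rule, which is why blank lines need separate handling.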


Formulas

The column count for any single row in a TSV file is derived from the number of tab delimiter characters present on that line:

C = ntabs + 1

Where C is the column count for the row and ntabs is the number of U+0009 (horizontal tab) characters found in that row. For the full file consistency check, the tool computes Ci for every row i and reports whether all values are equal:

Consistent ⇔ C1 = C2 = … = Cm

Where m is the total number of non-empty rows. If any Ci ≠ C1, the tool flags those rows as inconsistent and reports both the expected count (from row 1) and the actual count.
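The consistency check above can be sketched as follows. The helper name `checkConsistency` and the shape of the result object are assumptions for illustration, not the tool's actual API:

```javascript
// Flag rows whose column count differs from row 1 (typically the header).
// Returns { expected, inconsistent: [{ row, count }, ...] }.
// Row numbers in the result are 1-based, matching the tool's report.
function checkConsistency(rows) {
  const counts = rows.map((r) => r.split("\t").length); // C_i = tabs + 1
  const expected = counts[0];
  const inconsistent = [];
  counts.forEach((c, i) => {
    if (c !== expected) inconsistent.push({ row: i + 1, count: c });
  });
  return { expected, inconsistent };
}
```

Comparing against row 1 rather than the majority count is a deliberate choice: in a database import, the header defines the schema, so deviations from it are what break the load.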

Reference Data

Scenario | Tab Count Per Row | Column Count | Common Cause | Risk Level
Standard 3-column TSV | 2 | 3 | Normal export | None
Single column (no tabs) | 0 | 1 | Wrong delimiter or plain text | High
Trailing tab on every row | 3 | 4 | Spreadsheet export artifact | Medium
Inconsistent row lengths | Varies | Varies | Missing fields, manual editing | Critical
Empty row (blank line) | 0 | 1 or skipped | Trailing newline in file | Low
Header has more columns than data | n | n + 1 | Schema change, column added to header only | Critical
BOM-prefixed UTF-8 file | 2 | 3 | Windows Notepad save | Low (if stripped)
Tab inside quoted field | Overcounted | Overcounted | CSV-style quoting in TSV | Medium
Mixed delimiters (tab + comma) | Varies | Varies | Copy-paste from mixed sources | High
10-column dataset | 9 | 10 | Standard wide table | None
100-column dataset | 99 | 100 | Genomic/scientific data | None
Row with only tabs | n | n + 1 | All fields empty | Medium
Windows line endings (\r\n) | 2 | 3 | Cross-platform file transfer | Low (if handled)
Last row missing newline | 2 | 3 | Truncated file or stream | Low

Frequently Asked Questions

Why does a row with 3 visible values report 4 columns?

A trailing tab at the end of a row creates an additional empty field. A row containing 3 visible values followed by a tab character will report 4 columns, because the formula is C = ntabs + 1. Many spreadsheet applications append trailing tabs when exporting. Check the per-row breakdown to identify this pattern.
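The trailing-tab behavior is easy to verify directly with `String.prototype.split`, which keeps the empty field after the final delimiter:

```javascript
// A trailing tab produces a final empty field, raising the count by one.
const row = "alpha\tbeta\tgamma\t"; // 3 visible values + trailing tab
const fields = row.split("\t");
console.log(fields.length); // 4
console.log(fields);        // ["alpha", "beta", "gamma", ""]
```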
How does the tool detect inconsistent rows?

The tool analyzes every non-empty row and compares its column count to the first row (typically the header). Any row where Ci ≠ C1 is flagged in the results with its row number and actual count. This is critical for database imports where the schema expects a fixed number of fields.
Does the tool handle Windows line endings?

Yes. The parser normalizes both \r\n (Windows/CRLF) and \n (Unix/LF) line endings before splitting into rows. Bare \r (old Mac) is also handled. This prevents false row counts from mixed-platform files.
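One common way to implement this normalization is a single regex pass before splitting. A sketch, assuming the helper name `splitRows`:

```javascript
// Normalize CRLF and bare CR to LF before splitting into rows,
// so a stray \r never leaks into the last field of a row.
function splitRows(text) {
  return text.replace(/\r\n?/g, "\n").split("\n");
}
```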
How are empty rows handled?

Completely empty lines (zero characters after trimming) are excluded from column counting by default. A row containing only whitespace (spaces, not tabs) is treated as a single-column row with C = 1. The summary reports how many empty rows were skipped.
What if my data is actually comma-separated (CSV)?

If every row reports C = 1 (zero tabs found), the tool displays a warning suggesting the data may be CSV (comma-separated) rather than TSV. It does not auto-detect or switch delimiters, because ambiguous detection causes more errors than it solves.
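That warning amounts to a simple heuristic. A sketch of the idea (the function name and exact condition are assumptions, not the tool's source):

```javascript
// Heuristic only: if no row contains a tab but commas appear,
// the input is probably CSV. The tool only warns on this signal;
// it never switches delimiters automatically.
function looksLikeCsv(rows) {
  const noTabs = rows.every((r) => !r.includes("\t"));
  const hasCommas = rows.some((r) => r.includes(","));
  return noTabs && hasCommas;
}
```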
How large a file can the tool handle?

The tool processes input in the browser's main thread. For typical use cases up to 50MB of text (hundreds of thousands of rows), parsing completes in under 1 second on modern hardware. The practical limit is browser memory. For files larger than 100MB, consider command-line tools like awk.