About

Column order in CSV files is rarely arbitrary. ETL pipelines, database imports, and reporting systems expect fields at precise positions. A single misaligned column causes silent data corruption: dates parsed as amounts, IDs mapped to names, categorical flags treated as numeric. This tool parses your CSV file according to RFC 4180 rules, auto-detects the delimiter (comma, semicolon, tab, or pipe), and lets you reorder columns via drag-and-drop or keyboard controls before rebuilding the output. It correctly handles quoted fields containing embedded delimiters and newlines. The parser runs entirely in your browser; no data leaves your machine.
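The tool itself runs as JavaScript in the browser, but the parse → reorder → rebuild cycle it describes can be sketched with Python's RFC 4180-aware csv module. The `reorder_csv` function and sample data below are illustrative, not the tool's actual code:

```python
import csv
import io

def reorder_csv(text: str, order: list[int]) -> str:
    """Parse CSV text, reorder each row's columns by index, rebuild the output."""
    rows = csv.reader(io.StringIO(text))  # handles quoted fields and embedded commas
    out = io.StringIO()
    writer = csv.writer(out, lineterminator="\n")
    for row in rows:
        writer.writerow([row[i] for i in order])
    return out.getvalue()

sample = 'id,name,city\n1,"Doe, Jane","New York"\n'
print(reorder_csv(sample, [1, 0, 2]))
# name,id,city
# "Doe, Jane",1,New York
```

Note that the quoted field "Doe, Jane" survives the round trip as a single column: the embedded comma is not treated as a separator, and the writer re-quotes it on output.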

Limitation: this tool treats the first row as a header by default. Files without headers can be processed by toggling the header option, in which case columns are labeled Col 1, Col 2, etc. Extremely wide files (> 500 columns) may exhibit slower drag interactions due to DOM rendering. For files exceeding 1 MB, parsing is offloaded to a Web Worker to keep the UI responsive.


Formulas

Delimiter auto-detection scores each candidate delimiter d by computing the consistency of field counts across the first n sample rows:

score(d) = c / (1 + σ(counts))

Where σ(counts) is the standard deviation of field counts per row using delimiter d, and c is the mean field count. A perfect delimiter yields σ = 0 (identical column count in every row), maximizing the score. The delimiter with the highest score is selected.
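A minimal sketch of this scoring rule follows. The function name is hypothetical, and the naive `str.split` ignores quoting, a simplification the real parser does not make:

```python
import statistics

def detect_delimiter(sample_rows: list[str], candidates: str = ",;\t|") -> str:
    """Pick the candidate d maximizing score(d) = c / (1 + sigma(counts))."""
    best, best_score = ",", float("-inf")
    for d in candidates:
        counts = [len(line.split(d)) for line in sample_rows]
        sigma = statistics.pstdev(counts)  # 0 when every row has the same field count
        c = statistics.mean(counts)        # mean field count
        score = c / (1 + sigma)
        if score > best_score:
            best, best_score = d, score
    return best

lines = ["a;b;c", "1;2;3", "4;5,5;6"]
print(detect_delimiter(lines))  # ;
```

Here the semicolon scores 3 (σ = 0, c = 3), while the stray comma in row three drags the comma's score below 1, so the semicolon wins.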

Column remapping applies an index permutation vector P = [p₀, p₁, …, pₖ₋₁] to each row R:

R′[i] = R[P[i]]   for i ∈ [0, k)

Where k is the total column count and P is the user-defined column order. This runs in O(n × k) time where n is the row count.
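In code, the permutation is a single pass over the rows (`remap_rows` is a hypothetical name for illustration):

```python
def remap_rows(rows: list[list[str]], P: list[int]) -> list[list[str]]:
    """Apply permutation P to every row: R'[i] = R[P[i]].
    Runs in O(n * k) for n rows of k columns."""
    return [[row[p] for p in P] for row in rows]

rows = [["date", "amount", "id"], ["2024-01-05", "19.99", "A1"]]
print(remap_rows(rows, [2, 0, 1]))
# [['id', 'date', 'amount'], ['A1', '2024-01-05', '19.99']]
```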

Reference Data

| Delimiter / Feature | Symbol | Common Use | Auto-Detected / Handling | RFC 4180 |
| --- | --- | --- | --- | --- |
| Comma | , | Standard CSV exports, spreadsheets | Yes | Yes |
| Semicolon | ; | European locales (Excel EU) | Yes | No (extension) |
| Tab | \t | TSV files, database dumps | Yes | No (extension) |
| Pipe | \| | Legacy mainframe exports | Yes | No (extension) |
| Double Quote | " | Field enclosure character | N/A | Yes |
| CRLF | \r\n | Row terminator (Windows) | Auto-normalized | Yes |
| LF | \n | Row terminator (Unix/Mac) | Auto-normalized | Extension |
| Escaped Quote | "" | Literal quote inside quoted field | Handled | Yes |
| BOM (UTF-8) | \uFEFF | Excel UTF-8 exports | Stripped | N/A |
| Empty Field | ,, | Missing/null values | Preserved | Yes |
| Newline in Field | "a\nb" | Multi-line cell content | Preserved | Yes |
| Trailing Delimiter | a,b, | Some legacy systems | Preserved | Edge case |
| Mixed Quoting | a,"b",c | Partial quoting | Handled | Yes |
| UTF-8 Content | Unicode | International characters | Preserved | Extension |
| Max Columns | 500 | UI rendering limit | Soft limit | N/A |

Frequently Asked Questions

How are delimiters and quotes inside fields handled?
The parser follows RFC 4180: any field enclosed in double quotes is treated as a literal string. If the delimiter (e.g., comma) appears inside quotes, it is not treated as a column separator. Embedded double quotes are expected as escaped pairs (""). This means a field like "New York, NY" remains a single column value.
What happens when a row has more or fewer fields than the header?
Rows with fewer fields than the header are padded with empty strings to maintain alignment. Rows with more fields than the header retain the extra fields. The preview table highlights inconsistent rows with a visual indicator. The download preserves all data as-is after reordering the first k columns defined by the header.
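The padding rule for short rows can be sketched as follows (`normalize_row` is a hypothetical name; the tool's own code is not published here):

```python
def normalize_row(row: list[str], k: int) -> list[str]:
    """Pad a short row with empty strings up to k fields; keep any extra fields."""
    return row + [""] * (k - len(row)) if len(row) < k else row

header = ["a", "b", "c"]
print(normalize_row(["1"], len(header)))       # ['1', '', '']
print(normalize_row(["1", "2", "3", "4"], 3))  # ['1', '2', '3', '4']
```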
Can a file use more than one delimiter?
No. The tool assumes a single consistent delimiter per file. It samples the first 20 rows and scores candidates (comma, semicolon, tab, pipe) by standard deviation of field counts. If your file genuinely uses mixed delimiters, override the detection manually using the delimiter selector before processing.
Can I reorder columns in a file without a header row?
Yes. Toggle the "First row is header" option off. Columns will be labeled Col 1, Col 2, etc. The first row will then be treated as a regular data row, not a header, and will be included in the output as data.
What is the maximum file size?
The tool accepts files up to 50 MB. Files larger than 1 MB are parsed in a Web Worker to keep the UI responsive. Browser memory is the practical constraint: a 50 MB CSV with 500 columns may require several hundred megabytes of heap. For files beyond 50 MB, consider command-line tools like csvkit or awk.
Does reordering modify my data or change its encoding?
No. Cell values are never modified. The tool only changes column positions. The output is encoded as UTF-8 text. If your original file used a BOM (byte order mark), it is stripped on input and not re-added on output. All Unicode characters, including CJK and emoji, are preserved.
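The BOM behavior described above can be reproduced with Python's built-in codecs; this is a sketch of the same input/output policy, not the tool's code:

```python
# An Excel-style UTF-8 file starting with a BOM, plus non-ASCII content.
raw = "\ufeffname,city\nRenée,東京\n".encode("utf-8")

# 'utf-8-sig' strips a leading BOM if present; plain 'utf-8' would keep it.
text = raw.decode("utf-8-sig")
assert not text.startswith("\ufeff")

# Re-encode as plain UTF-8: the BOM is not re-added, all Unicode survives.
out = text.encode("utf-8")
print(out.decode("utf-8").split("\n")[0])  # name,city
```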