
About

The condition number κ(A) quantifies how sensitive the solution of a linear system Ax = b is to perturbations in the input data. A matrix with κ ≈ 1 is well-conditioned; a matrix with κ > 10⁶ is ill-conditioned. Solving ill-conditioned systems without awareness of κ leads to solutions dominated by rounding error. Finite element analysis, GPS trilateration, and regression all fail silently when the underlying matrix is near-singular. This calculator computes κ via explicit inversion using Gauss-Jordan elimination with partial pivoting.

The tool supports three norm types: the 1-norm (max absolute column sum), the ∞-norm (max absolute row sum), and the Frobenius norm. Note: the computed condition number assumes exact arithmetic. For matrices larger than roughly 10×10, dedicated libraries with iterative refinement (LAPACK, SciPy) are more reliable. For n ≤ 10, direct inversion is adequate.
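The procedure described above (Gauss-Jordan inversion with partial pivoting, followed by a norm product) can be sketched in a few lines of NumPy. This is an illustrative reimplementation, not the tool's actual source; the function names and the norm keys accepted by `condition_number` are invented for this sketch.

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert a square matrix via Gauss-Jordan elimination with partial pivoting."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    aug = np.hstack([A, np.eye(n)])            # augmented matrix [A | I]
    for col in range(n):
        # Partial pivoting: move the largest remaining |entry| into the pivot slot.
        pivot = col + np.argmax(np.abs(aug[col:, col]))
        if abs(aug[pivot, col]) < 1e-300:
            raise ValueError("matrix is singular to working precision")
        aug[[col, pivot]] = aug[[pivot, col]]  # row swap
        aug[col] /= aug[col, col]              # scale pivot row so the pivot is 1
        for row in range(n):                   # zero out the column everywhere else
            if row != col:
                aug[row] -= aug[row, col] * aug[col]
    return aug[:, n:]                          # right half is now A^-1

def condition_number(A, norm="1"):
    """kappa(A) = ||A|| * ||A^-1|| in the 1-, infinity-, or Frobenius norm."""
    ord_map = {"1": 1, "inf": np.inf, "fro": "fro"}
    p = ord_map[norm]
    A = np.asarray(A, dtype=float)
    return np.linalg.norm(A, p) * np.linalg.norm(gauss_jordan_inverse(A), p)
```

For the identity matrix this returns exactly 1 in every norm except the Frobenius norm, where ‖I‖_F = √n makes κ_F(I) = n.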


Formulas

The condition number of a nonsingular matrix A with respect to a given matrix norm is defined as:

κ(A) = ‖A‖ · ‖A⁻¹‖

The three supported norms are computed as follows:

‖A‖₁ = max_{1≤j≤n} Σ_{i=1}^{m} |a_ij|
‖A‖∞ = max_{1≤i≤m} Σ_{j=1}^{n} |a_ij|
‖A‖_F = √( Σ_{i=1}^{m} Σ_{j=1}^{n} |a_ij|² )

Where A is the input matrix of dimensions m × n (must be square for the condition number). A⁻¹ is the matrix inverse computed via Gauss-Jordan elimination with partial pivoting. The number of digits of accuracy lost in solving Ax = b is approximately log₁₀(κ(A)). If κ(A) = ∞, the matrix is singular.
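The three norm formulas can be checked against NumPy's built-in implementations; the 2×3 matrix below is an arbitrary example chosen so the column and row sums differ.

```python
import numpy as np

A = np.array([[1., -2.,  3.],
              [4.,  5., -6.]])   # arbitrary 2x3 example

one_norm = np.max(np.sum(np.abs(A), axis=0))  # max absolute column sum -> 9
inf_norm = np.max(np.sum(np.abs(A), axis=1))  # max absolute row sum    -> 15
fro_norm = np.sqrt(np.sum(np.abs(A) ** 2))    # sqrt of sum of squared entries

# Each hand-rolled formula matches the corresponding library norm.
print(one_norm == np.linalg.norm(A, 1),
      inf_norm == np.linalg.norm(A, np.inf),
      np.isclose(fro_norm, np.linalg.norm(A, 'fro')))
```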

Reference Data

| Matrix Type | Typical κ(A) | Condition | Digits Lost | Notes |
|---|---|---|---|---|
| Identity I | 1 | Perfect | 0 | Baseline reference |
| Orthogonal Q | 1 | Perfect | 0 | QᵀQ = I |
| Diagonal (balanced) | 1 – 10 | Well | 0 – 1 | Ratio of max/min diagonal entry |
| Tridiagonal (SPD) | 10 – 10³ | Moderate | 1 – 3 | Common in FEM discretizations |
| Vandermonde (n=5) | 10³ – 10⁵ | Poor | 3 – 5 | Polynomial interpolation matrices |
| Hilbert (n=5) | ≈ 4.8 × 10⁵ | Very poor | 5 – 6 | Hᵢⱼ = 1/(i+j−1) |
| Hilbert (n=10) | ≈ 1.6 × 10¹³ | Catastrophic | 13 | Classic ill-conditioned benchmark |
| Random (uniform) | 10 – 10³ | Moderate | 1 – 3 | Varies with realization |
| Near-singular | > 10¹⁵ | Singular | All | det(A) ≈ 0 |
| Pascal (n=6) | ≈ 8.5 × 10⁷ | Poor | 7 – 8 | Binomial coefficient matrix |
| Frank (n=6) | ≈ 1.7 × 10⁴ | Moderate–poor | 4 | Upper Hessenberg test matrix |
| Cauchy (n=5) | 10⁴ – 10⁶ | Poor | 4 – 6 | Cᵢⱼ = 1/(xᵢ+yⱼ) |
| Toeplitz (symmetric) | 10 – 10⁴ | Varies | 1 – 4 | Signal processing applications |

Frequently Asked Questions

What does the condition number tell me about my solution?

A condition number κ(A) of 10^k means you lose approximately k digits of precision when solving Ax = b. With IEEE 754 double precision (~15.9 significant digits), a matrix with κ ≈ 10^10 leaves only ~6 reliable digits in the solution. If κ exceeds ~10^15, the computed solution is essentially meaningless noise.

Why do the different norms give different condition numbers?

Different norms measure matrix "size" differently. The 1-norm emphasizes column structure, the ∞-norm emphasizes row structure, and the Frobenius norm treats all entries equally. For a given matrix, κ₁(A), κ∞(A), and κ_F(A) can differ by up to a factor of n (the matrix dimension). However, they always agree on the order of magnitude for well-conditioned vs. ill-conditioned classification. The 2-norm condition number (ratio of largest to smallest singular value) is the most geometrically meaningful but requires SVD, which this tool does not implement.
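The agreement on order of magnitude is easy to confirm numerically; the symmetric 3×3 matrix below is an arbitrary example (for a symmetric matrix, κ₁ and κ∞ coincide exactly).

```python
import numpy as np

A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 4.]])     # arbitrary symmetric example

# Condition number in the 1-, infinity-, and Frobenius norms.
kappas = {p: np.linalg.cond(A, p) for p in (1, np.inf, 'fro')}
# kappa_1 == kappa_inf here (symmetric matrix), and all three values
# land in the same order of magnitude.
```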

Why does the tool use partial pivoting?

During Gauss-Jordan elimination, dividing by a near-zero pivot amplifies rounding errors catastrophically. Partial pivoting swaps rows so the largest available entry (in absolute value) becomes the pivot. This bounds the growth factor of the entries during elimination, keeping intermediate values within a numerically stable range. Without pivoting, even well-conditioned matrices can produce inaccurate inverses.
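The failure mode is visible on a standard textbook example: a perfectly well-conditioned 2×2 system whose natural first pivot happens to be tiny (the value of ε below is illustrative).

```python
import numpy as np

# Well-conditioned 2x2 system whose natural first pivot is tiny.
eps = 1e-17
A = np.array([[eps, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0])       # exact solution is very close to (1, 1)

# Naive elimination without pivoting: divide by the tiny pivot eps.
m = A[1, 0] / A[0, 0]          # multiplier ~ 1e17
x2 = (b[1] - m * b[0]) / (A[1, 1] - m * A[0, 1])
x1 = (b[0] - A[0, 1] * x2) / A[0, 0]
# x1 is destroyed: computing 1 - x2 cancels away every significant digit.

# LU with partial pivoting (what np.linalg.solve uses) recovers (1, 1).
x_pivot = np.linalg.solve(A, b)
```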

Does the condition number apply to rectangular matrices?

The standard condition number κ(A) = ‖A‖·‖A⁻¹‖ requires A to be square and invertible. For rectangular matrices, the generalized condition number uses the pseudoinverse A⁺ instead: κ(A) = ‖A‖·‖A⁺‖. Computing A⁺ requires the singular value decomposition (SVD), which is beyond the scope of this direct-inversion tool. For rectangular systems, use a dedicated numerical library.
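In a library that does provide the SVD, the generalized condition number takes one line; the 3×2 matrix below is an arbitrary full-column-rank example.

```python
import numpy as np

A = np.array([[1., 0.],
              [0., 2.],
              [1., 1.]])        # 3x2, full column rank (arbitrary example)

# Generalized 2-norm condition number: sigma_max / sigma_min from the SVD.
s = np.linalg.svd(A, compute_uv=False)
kappa2 = s[0] / s[-1]

# Equivalently ||A||_2 * ||A+||_2 with the Moore-Penrose pseudoinverse.
kappa_pinv = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.pinv(A), 2)
```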

Why is the Hilbert matrix the classic ill-conditioned example?

The Hilbert matrix H with entries H_ij = 1/(i+j−1) has eigenvalues that cluster exponentially near zero as n grows. A 5×5 Hilbert matrix has κ ≈ 4.8×10⁵; a 10×10 has κ ≈ 1.6×10¹³. This means that for n ≥ 12, the condition number exceeds double-precision capacity entirely, making the computed inverse numerically worthless despite the matrix being mathematically invertible.
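Both figures are easy to reproduce (the 2-norm condition number via NumPy's SVD-based `cond`):

```python
import numpy as np

def hilbert(n):
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1)   # H_ij = 1/(i + j - 1) with 1-based indices

k5 = np.linalg.cond(hilbert(5))    # ~4.8e5
k10 = np.linalg.cond(hilbert(10))  # ~1.6e13
```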

What can I do if my matrix is ill-conditioned?

Options include: (1) Regularization: add a small multiple of the identity matrix (Tikhonov regularization), i.e. solve (AᵀA + λI)x = Aᵀb instead of Ax = b. (2) Preconditioning: multiply by a matrix P such that P⁻¹A is better conditioned. (3) Reformulate the problem using QR decomposition or SVD, which are inherently more stable than direct inversion. (4) Increase precision using arbitrary-precision arithmetic libraries.
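As an illustration of option (1): forming the normal equations AᵀA squares the condition number, but the ridge term λI caps it at roughly ‖AᵀA‖/λ. The sketch below uses an 8×8 Hilbert matrix and an arbitrary illustrative λ; picking a good λ in practice is a separate problem.

```python
import numpy as np

n = 8
H = np.array([[1.0 / (i + j + 1) for j in range(n)]
              for i in range(n)])    # Hilbert test matrix, kappa ~ 1.5e10

lam = 1e-10                          # illustrative ridge parameter
M_plain = H.T @ H                    # normal equations square the condition number
M_reg = M_plain + lam * np.eye(n)    # Tikhonov ridge caps it near ||H^T H|| / lam

kappa_H = np.linalg.cond(H)
kappa_reg = np.linalg.cond(M_reg)
# kappa_reg is vastly smaller than kappa_H squared (~1e20)
```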