
## Vectors and Vector Spaces

The fundamental objects in numerical linear algebra are vectors: elements of a vector space.

The canonical example is $\mathbb{R}^n$, the space of column vectors with $n$ real components. But vector spaces are far more general: polynomials, matrices, and functions all form vector spaces under the usual addition and scalar multiplication.

## Vector Norms

To analyze errors and convergence, we need to measure the “size” of vectors.

A vector space equipped with a norm is called a normed vector space. If it’s also complete (Cauchy sequences converge), it’s a Banach space—the natural setting for analysis.

### The $p$-Norms on $\mathbb{R}^n$

$$\|\mathbf{x}\|_p = \left(\sum_{i=1}^n |x_i|^p\right)^{1/p}$$
| Name | Formula | Interpretation |
|---|---|---|
| 1-norm | $\lVert \mathbf{x} \rVert_1 = \sum_i \lvert x_i \rvert$ | Manhattan distance |
| 2-norm | $\lVert \mathbf{x} \rVert_2 = \sqrt{\sum_i x_i^2}$ | Euclidean length |
| $\infty$-norm | $\lVert \mathbf{x} \rVert_\infty = \max_i \lvert x_i \rvert$ | Maximum component |
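These three norms are direct sums, square roots of sums, and maxima over components, so they are easy to compute by hand and to cross-check against NumPy's built-in `np.linalg.norm` (a minimal sketch; the example vector is arbitrary):

```python
import numpy as np

x = np.array([3.0, -4.0, 12.0])

# 1-norm: sum of absolute values (Manhattan distance)
norm1 = np.sum(np.abs(x))        # 3 + 4 + 12 = 19
# 2-norm: Euclidean length
norm2 = np.sqrt(np.sum(x**2))    # sqrt(9 + 16 + 144) = 13
# infinity-norm: largest absolute component
norm_inf = np.max(np.abs(x))     # 12

# np.linalg.norm computes the same quantities
assert np.isclose(norm1, np.linalg.norm(x, 1))
assert np.isclose(norm2, np.linalg.norm(x, 2))
assert np.isclose(norm_inf, np.linalg.norm(x, np.inf))
```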

### Norm Equivalence

On a finite-dimensional space, all norms are equivalent: for any two norms $\lVert\cdot\rVert_a$ and $\lVert\cdot\rVert_b$ there exist constants $c, C > 0$ such that $c\,\lVert\mathbf{x}\rVert_a \le \lVert\mathbf{x}\rVert_b \le C\,\lVert\mathbf{x}\rVert_a$ for all $\mathbf{x}$, so they all define the same notion of convergence. This is a finite-dimensional phenomenon. In infinite dimensions (function spaces), different norms can give genuinely different notions of convergence—a key subtlety in PDE theory.
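On $\mathbb{R}^n$, the standard equivalence bounds $\lVert\mathbf{x}\rVert_\infty \le \lVert\mathbf{x}\rVert_2 \le \sqrt{n}\,\lVert\mathbf{x}\rVert_\infty$ and $\lVert\mathbf{x}\rVert_2 \le \lVert\mathbf{x}\rVert_1 \le \sqrt{n}\,\lVert\mathbf{x}\rVert_2$ can be spot-checked numerically (a sketch on random vectors, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
for _ in range(100):
    x = rng.standard_normal(n)
    n1 = np.linalg.norm(x, 1)
    n2 = np.linalg.norm(x, 2)
    ninf = np.linalg.norm(x, np.inf)
    # standard equivalence constants between the p-norms on R^n
    assert ninf <= n2 <= np.sqrt(n) * ninf
    assert n2 <= n1 <= np.sqrt(n) * n2
```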

### Function Space Norms

The same idea extends to functions:

| Space | Norm | Formula |
|---|---|---|
| $C[a,b]$ | Supremum norm | $\lVert f \rVert_\infty = \max_{x \in [a,b]} \lvert f(x) \rvert$ |
| $L^2[a,b]$ | $L^2$ norm | $\lVert f \rVert_2 = \sqrt{\int_a^b \lvert f(x) \rvert^2 \, dx}$ |
| $L^p[a,b]$ | $L^p$ norm | $\lVert f \rVert_p = \left(\int_a^b \lvert f(x) \rvert^p \, dx\right)^{1/p}$ |

These are the continuous analogs of the discrete $p$-norms—sums become integrals.
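In practice, function norms are often approximated by sampling on a grid and replacing the integral with a quadrature rule. A minimal sketch for $f(x) = x$ on $[0,1]$, where the exact $L^2$ norm is $1/\sqrt{3}$ and the sup norm is $1$ (the simple Riemann sum here is an assumption of convenience, not the only choice):

```python
import numpy as np

a, b, N = 0.0, 1.0, 100_000
x = np.linspace(a, b, N)
f = x  # f(x) = x on [0, 1]

# sup norm: maximum of |f| over the grid
sup_norm = np.max(np.abs(f))

# L2 norm: integral of |f|^2 approximated by a Riemann sum
h = (b - a) / (N - 1)
l2_norm = np.sqrt(np.sum(np.abs(f)**2) * h)

assert np.isclose(sup_norm, 1.0)
assert abs(l2_norm - 1/np.sqrt(3)) < 1e-3
```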

## Matrices as Linear Maps

Matrices are linear functions between vector spaces. A matrix $A \in \mathbb{R}^{m \times n}$ defines:

$$T_A: \mathbb{R}^n \to \mathbb{R}^m, \qquad T_A(\mathbf{x}) = A\mathbf{x}$$

Linearity means:

$$T_A(\alpha \mathbf{x} + \beta \mathbf{y}) = \alpha\, T_A(\mathbf{x}) + \beta\, T_A(\mathbf{y}) \quad \text{for all } \alpha, \beta \in \mathbb{R},\ \mathbf{x}, \mathbf{y} \in \mathbb{R}^n.$$

Every linear map $\mathbb{R}^n \to \mathbb{R}^m$ corresponds to a unique $m \times n$ matrix, and vice versa.
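Linearity of the map $\mathbf{x} \mapsto A\mathbf{x}$ can be verified numerically; a quick sketch with a random matrix (the sizes and scalars are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))   # a linear map R^4 -> R^3
x = rng.standard_normal(4)
y = rng.standard_normal(4)
alpha, beta = 2.0, -0.5

# T(alpha*x + beta*y) == alpha*T(x) + beta*T(y)
lhs = A @ (alpha * x + beta * y)
rhs = alpha * (A @ x) + beta * (A @ y)
assert np.allclose(lhs, rhs)
```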

### The Matrix-Vector Product

Given $A \in \mathbb{R}^{m \times n}$ and $\mathbf{x} \in \mathbb{R}^n$:

$$(A\mathbf{x})_i = \sum_{j=1}^{n} a_{ij} x_j, \quad i = 1, \ldots, m$$

Cost: $2mn$ floating-point operations.

Two views:

| Row View | Column View |
|---|---|
| Each $(A\mathbf{x})_i$ is a dot product: $\mathbf{a}_i^T \mathbf{x}$ | $A\mathbf{x}$ is a linear combination: $\sum_j x_j \mathbf{a}^{(j)}$ |

The column view reveals that $A\mathbf{x}$ lives in the column space (range) of $A$.
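The two views can be made concrete in code: computing the product row by row as dot products, and column by column as a linear combination, gives the same result (a small sketch; the matrix is arbitrary):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])      # 3x2 matrix
x = np.array([10.0, 1.0])

# Row view: each output entry is a dot product with a row of A
row_view = np.array([A[i, :] @ x for i in range(A.shape[0])])

# Column view: Ax is a linear combination of the columns of A
col_view = x[0] * A[:, 0] + x[1] * A[:, 1]

assert np.allclose(row_view, A @ x)
assert np.allclose(col_view, A @ x)
```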

### Geometric Interpretation

| Matrix Type | Geometric Effect |
|---|---|
| Diagonal | Scaling along coordinate axes |
| Orthogonal ($Q^T Q = I$) | Rotation and/or reflection |
| Symmetric | Scaling along eigenvector directions |
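Two of these effects are easy to check numerically: a rotation (orthogonal) matrix preserves Euclidean length, while a diagonal matrix scales each coordinate independently (a sketch in 2D; the angle and scale factors are arbitrary):

```python
import numpy as np

theta = np.pi / 3
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation: orthogonal, Q^T Q = I
D = np.diag([2.0, 0.5])                           # scaling along coordinate axes

x = np.array([1.0, 1.0])

# Orthogonal maps preserve the 2-norm
assert np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x))
assert np.allclose(Q.T @ Q, np.eye(2))

# Diagonal maps scale each coordinate
assert np.allclose(D @ x, [2.0, 0.5])
```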

## Matrix Norms

Since matrices are linear maps, we measure their size by how much they “stretch” vectors. The induced (operator) norm is

$$\|A\| = \max_{\mathbf{x} \neq \mathbf{0}} \frac{\|A\mathbf{x}\|}{\|\mathbf{x}\|}$$

This definition works for any linear map between normed spaces—it’s how we measure operators in functional analysis too.

| Name | Formula | Computation |
|---|---|---|
| 1-norm | $\lVert A \rVert_1 = \max_j \sum_i \lvert a_{ij} \rvert$ | Maximum column sum |
| $\infty$-norm | $\lVert A \rVert_\infty = \max_i \sum_j \lvert a_{ij} \rvert$ | Maximum row sum |
| 2-norm | $\lVert A \rVert_2 = \sigma_{\max}(A)$ | Largest singular value |
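Each of these characterizations can be cross-checked against NumPy, which computes the induced matrix norms directly via `np.linalg.norm(A, p)` (a small sketch; the example matrix is arbitrary):

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

# 1-norm: maximum absolute column sum -> max(1+3, 2+4) = 6
norm1 = np.linalg.norm(A, 1)
# inf-norm: maximum absolute row sum -> max(1+2, 3+4) = 7
norm_inf = np.linalg.norm(A, np.inf)
# 2-norm: largest singular value
norm2 = np.linalg.norm(A, 2)

assert np.isclose(norm1, 6.0)
assert np.isclose(norm_inf, 7.0)
assert np.isclose(norm2, np.linalg.svd(A, compute_uv=False)[0])
```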

Key properties:

- Consistency with the vector norm: $\lVert A\mathbf{x} \rVert \le \lVert A \rVert \, \lVert \mathbf{x} \rVert$
- Submultiplicativity: $\lVert AB \rVert \le \lVert A \rVert \, \lVert B \rVert$
- Triangle inequality: $\lVert A + B \rVert \le \lVert A \rVert + \lVert B \rVert$
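The standard inequalities satisfied by induced matrix norms (consistency $\lVert A\mathbf{x}\rVert \le \lVert A\rVert\,\lVert\mathbf{x}\rVert$, submultiplicativity $\lVert AB\rVert \le \lVert A\rVert\,\lVert B\rVert$, and the triangle inequality) can be spot-checked on random matrices; a sketch, with a small tolerance for floating-point rounding:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
x = rng.standard_normal(4)

for p in (1, 2, np.inf):
    nA, nB = np.linalg.norm(A, p), np.linalg.norm(B, p)
    # consistency with the vector norm: ||Ax|| <= ||A|| ||x||
    assert np.linalg.norm(A @ x, p) <= nA * np.linalg.norm(x, p) + 1e-12
    # submultiplicativity: ||AB|| <= ||A|| ||B||
    assert np.linalg.norm(A @ B, p) <= nA * nB + 1e-12
    # triangle inequality: ||A + B|| <= ||A|| + ||B||
    assert np.linalg.norm(A + B, p) <= nA + nB + 1e-12
```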