
Sobolev Spaces as Banach Algebras

Big Idea

The spaces $C^0(\overline\Omega)$ and $C^\infty(\overline\Omega)$ are algebras under pointwise multiplication: the product of two continuous (or smooth) functions is again continuous (or smooth). Sobolev spaces inherit this property precisely when they embed into $C^0$, which happens when $s > d/2$. The algebra structure connects the embedding theory from the previous chapter to the bandedness of multiplication operators in spectral methods.

Why multiplication matters

So far we have used Sobolev spaces as linear function spaces: we add functions, take limits, and apply linear operators. But functions can also be multiplied, and a natural question arises: when is the pointwise product $uv$ again in the same Sobolev space? The answer turns out to connect the embedding theory we just developed to the structure of multiplication operators in spectral methods.

The algebra property from embeddings

An algebra over $\mathbb{R}$ (or $\mathbb{C}$) is a vector space $A$ that also carries a multiplication, a bilinear map $A \times A \to A$ that is compatible with the linear structure. You already know several:

| Space | Multiplication | Submultiplicative norm? |
| --- | --- | --- |
| $\mathbb{R}^{n \times n}$ | matrix product $AB$ | $\|AB\| \leq \|A\| \|B\|$ (operator norm) |
| $C[0,1]$ | pointwise: $(fg)(x) = f(x)g(x)$ | $\|fg\|_\infty \leq \|f\|_\infty \|g\|_\infty$ |
| $L^\infty(\Omega)$ | pointwise a.e. | $\|fg\|_\infty \leq \|f\|_\infty \|g\|_\infty$ |
| $\ell^1(\mathbb{Z})$ | convolution | $\|f * g\|_1 \leq \|f\|_1 \|g\|_1$ (Young) |

A Banach algebra is a Banach space whose norm is submultiplicative: $\|uv\| \leq C\|u\|\|v\|$. This single inequality says that the multiplication is continuous as a bilinear map, or equivalently, that the space is closed under multiplication with quantitative control.

Not every function space is an algebra. The product of two $L^2$ functions is generally only in $L^1$ (by Cauchy–Schwarz, $\|fg\|_{L^1} \leq \|f\|_{L^2}\|g\|_{L^2}$, but $fg$ need not be in $L^2$). So $L^2$ with pointwise multiplication is not an algebra; multiplication takes you outside the space. The question for Sobolev spaces is: does the extra regularity fix this?
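The failure is easy to see numerically. A minimal sketch, with a test function of my own choosing (not from the text): $f(x) = x^{-1/3}$ on $(0,1)$ lies in $L^2$, but $f^2 = x^{-2/3}$ does not, because $\int_0^1 x^{-4/3}\,dx$ diverges. The code evaluates the truncated integrals in closed form as the cutoff shrinks:

```python
# Illustrative choice (not from the text): f(x) = x**(-1/3) on (0,1).
# f is in L^2 since the integral of x**(-2/3) converges, but f^2 is not,
# since the integral of f**4 = x**(-4/3) diverges at the origin.
def power_integral(p, eps):
    """Closed form for the integral of x**p over (eps, 1), valid for p != -1."""
    return (1.0 - eps ** (p + 1)) / (p + 1)

for eps in [1e-2, 1e-4, 1e-8]:
    l2_f = power_integral(-2 / 3, eps)    # ||f||_{L^2}^2 -> 3 as eps -> 0
    l2_f2 = power_integral(-4 / 3, eps)   # ||f^2||_{L^2}^2 -> infinity
    print(f"eps={eps:.0e}:  ||f||_2^2 = {l2_f:.4f},  ||f^2||_2^2 = {l2_f2:.1f}")
```

The first column converges while the second blows up like $\varepsilon^{-1/3}$, which is exactly the statement that $f^2 \notin L^2$.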

The key observation is that $C^0(\overline\Omega)$ is a Banach algebra (the sup norm is submultiplicative), so any Banach space that embeds continuously into $C^0$ inherits the algebra property. If $X \hookrightarrow C^0$ with $\|u\|_\infty \leq C\|u\|_X$, then

$$\|uv\|_X \leq C'\|u\|_X \|v\|_X$$

whenever the product rule and embedding estimate can be combined. The Sobolev embedding theorem (Theorem 2) gives $H^s \hookrightarrow C^0$ exactly when $s > d/2$. This is the threshold.

Theorem 1 (Sobolev multiplication (Banach algebra property))

Let $\Omega \subseteq \mathbb{R}^d$ be open with Lipschitz boundary (or $\Omega = \mathbb{R}^d$). If $s > d/2$, then $H^s(\Omega)$ is a Banach algebra under pointwise multiplication: there exists a constant $C > 0$ such that

$$\|uv\|_{H^s} \leq C \, \|u\|_{H^s} \, \|v\|_{H^s} \quad \text{for all } u, v \in H^s(\Omega).$$

In particular, $H^s(\Omega)$ is closed under multiplication.

Proof 1

We prove the case $s = k$ a positive integer; the fractional case requires interpolation. The strategy is to apply the Leibniz rule (product rule for higher derivatives) and then split each term so that one factor is controlled in $L^\infty$ via the Sobolev embedding.

Step 1: Leibniz rule. For any multi-index $|\alpha| \leq k$, the general Leibniz rule gives

$$D^\alpha(uv) = \sum_{\beta \leq \alpha} \binom{\alpha}{\beta} D^\beta u \cdot D^{\alpha - \beta} v.$$

This is the higher-dimensional product rule: it distributes the $\alpha$ derivatives between $u$ and $v$ in all possible ways. (For $k = 1$ in one dimension, this is simply $(uv)' = u'v + uv'$.)

Step 2: Estimating each term. Each term in the Leibniz sum has the form $D^\beta u \cdot D^{\alpha - \beta} v$ where $|\beta| + |\alpha - \beta| = |\alpha| \leq k$. We estimate in $L^2$ using Hölder's inequality with exponents $2$ and $\infty$:

$$\|D^\beta u \cdot D^{\alpha - \beta} v\|_{L^2} \leq \|D^\beta u\|_{L^2} \, \|D^{\alpha-\beta} v\|_{L^\infty}.$$

Since $|\alpha - \beta| \leq |\alpha| \leq k$, the function $D^{\alpha - \beta} v$ lies in $H^{k - |\alpha - \beta|}$: it has $k - |\alpha - \beta|$ remaining derivatives in $L^2$. For the terms where $k - |\alpha - \beta| > d/2$, the Sobolev embedding applies directly and gives

$$\|D^{\alpha-\beta} v\|_{L^\infty} \leq C \, \|v\|_{H^k}.$$

(For the terms where $k - |\alpha - \beta| \leq d/2$, the index $|\beta|$ is correspondingly small, so we swap the roles of $u$ and $v$ and place $D^\beta u$ in $L^\infty$ instead; borderline terms where neither factor alone embeds into $L^\infty$ are handled with Hölder at intermediate exponents and the Gagliardo–Nirenberg inequalities, which we omit.)

Step 3: Combining. Summing over all $|\alpha| \leq k$ and using the equivalent norm $\|w\|_{H^k} \simeq \sum_{|\alpha| \leq k} \|D^\alpha w\|_{L^2}$ gives

$$\|uv\|_{H^k} \leq C \sum_{|\alpha| \leq k} \sum_{\beta \leq \alpha} \|D^\beta u\|_{L^2} \, \|v\|_{H^k} \leq C' \, \|u\|_{H^k} \, \|v\|_{H^k}.$$

Example 1 ($H^1$ in one dimension)

For $d = 1$ and $s = 1$, the condition $s > d/2$ becomes $1 > 1/2$, which holds. The proof is completely explicit:

$$\|uv\|_{H^1} \leq \|uv\|_{L^2} + \|(uv)'\|_{L^2}.$$

For the first term: $\|uv\|_{L^2} \leq \|u\|_{L^\infty} \|v\|_{L^2}$. For the second, the product rule gives $(uv)' = u'v + uv'$, so

$$\|(uv)'\|_{L^2} \leq \|u'\|_{L^2} \|v\|_{L^\infty} + \|u\|_{L^\infty} \|v'\|_{L^2}.$$

By the Sobolev embedding $H^1(0,1) \hookrightarrow C[0,1]$, we have $\|u\|_{L^\infty} \leq C\|u\|_{H^1}$ and similarly for $v$. Combining:

$$\|uv\|_{H^1} \leq C \|u\|_{H^1} \|v\|_{H^1}.$$

Each term in the estimate pairs derivatives on one factor with $L^\infty$ control on the other. The Sobolev embedding converts the $L^\infty$ norms into $H^1$ norms, closing the estimate.
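The closed estimate can be sanity-checked numerically on the torus. A minimal sketch, where the test functions $e^{\sin x}$, $1/(2+\cos x)$ and the discrete Fourier-side $H^1$ norm are my own choices:

```python
import numpy as np

# Check ||uv||_{H^1} <= C ||u||_{H^1} ||v||_{H^1} on the torus, with the H^1
# norm computed from Fourier coefficients; test functions are illustrative.
def h1_norm(f, N, k):
    fhat = np.fft.fft(f) / N                      # Fourier coefficients of f
    return np.sqrt(np.sum((1 + k**2) * np.abs(fhat) ** 2))

N = 256
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1 / N)                    # integer wavenumbers

u = np.exp(np.sin(x))
v = 1.0 / (2.0 + np.cos(x))

nu, nv, nuv = (h1_norm(f, N, k) for f in (u, v, u * v))
print(f"||uv||_H1 = {nuv:.3f}, ||u||_H1 ||v||_H1 = {nu * nv:.3f}, "
      f"ratio = {nuv / (nu * nv):.3f}")
```

The printed ratio is the observed algebra constant for this pair; the theorem guarantees it stays bounded no matter which $u, v \in H^1$ we try.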

Remark 1 (Why $s > d/2$ is sharp)

The threshold $s > d/2$ cannot be improved. For $s = d/2$, the space $H^{d/2}(\mathbb{R}^d)$ does not embed into $L^\infty$ (functions in $H^{d/2}$ can have logarithmic singularities), and multiplication fails to be continuous. For instance, in $d = 2$, a function behaving like $u(x) = (\log(1/|x|))^{1/3}$ near the origin belongs to $H^1(\Omega)$, but $u^2$ does not: $|\nabla(u^2)| = 2|u||\nabla u|$ just fails to be square integrable.

Multiplication in frequency space: why regularity implies bandedness

The Banach algebra property tells us that multiplication by $u \in H^s$ is a bounded operator on $H^s$. But what does this operator look like when we expand in a spectral basis? The answer reveals a striking structural property: regularity forces approximate bandedness.

To see this most cleanly, work on the torus $\mathbb{T} = [0, 2\pi)$ with Fourier basis $\{e_k(x) = e^{ikx}\}_{k \in \mathbb{Z}}$. Expand $u = \sum_k \hat{u}_k e_k$. The multiplication operator $M_u : v \mapsto uv$ has the matrix representation

$$(M_u)_{jk} = \langle M_u e_k, e_j \rangle = \langle u \, e_k, e_j \rangle = \hat{u}_{j-k}.$$

Observation 1 (Multiplication is convolution in frequency)

The matrix of $M_u$ in the Fourier basis is a Toeplitz matrix: the $(j,k)$-entry depends only on $j - k$, and equals the Fourier coefficient $\hat{u}_{j-k}$. Multiplying by $u$ in physical space is convolution by $\hat{u}$ in frequency space.
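This observation can be verified in a few lines. A sketch (grid size and the function $e^{\cos x}$ are my choices) that builds the coefficient-space matrix of $M_u$ with numpy's DFT convention, where it takes the form $F \operatorname{diag}(\mathbf{u}) F^{-1}$ with $F$ the forward DFT, and checks the circulant (periodic Toeplitz) structure:

```python
import numpy as np

# Build the matrix of M_u in the Fourier basis (numpy convention: F maps
# grid values to unnormalized coefficients) and verify the circulant
# structure (M_u)_{jk} = u_hat[(j - k) mod N]. Function and size are mine.
N = 32
x = 2 * np.pi * np.arange(N) / N
u = np.exp(np.cos(x))                         # a smooth periodic function

F = np.fft.fft(np.eye(N), axis=0)             # DFT matrix (columns = fft of e_k)
Mu = F @ np.diag(u) @ np.linalg.inv(F)        # multiplication in coefficient space
uhat = np.fft.fft(u) / N                      # Fourier coefficients of u

err = max(
    abs(Mu[j, k] - uhat[(j - k) % N]) for j in range(N) for k in range(N)
)
print(f"max deviation from circulant structure: {err:.2e}")
```

On the discrete grid the Toeplitz structure becomes circulant because frequencies wrap modulo $N$; the deviation is machine precision.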

Now the connection to regularity becomes immediate. If $u \in H^s$, then

$$\sum_{k} (1 + k^2)^s |\hat{u}_k|^2 < \infty,$$

so a single term of the sum already gives the pointwise bound $|\hat{u}_k| \leq \|u\|_{H^s} (1 + k^2)^{-s/2}$, and the square-summability forces the slightly faster rate $|\hat{u}_k| \lesssim |k|^{-s-1/2}$ in a mean-square (averaged) sense.

In particular, the off-diagonal entries of $M_u$ decay:

$$|(M_u)_{jk}| = |\hat{u}_{j-k}| \lesssim (1 + |j - k|)^{-s}.$$

Proposition 1 (Off-diagonal decay of multiplication matrices)

Let $u \in H^s(\mathbb{T})$ with $s \geq 0$. The multiplication operator $M_u$ in the Fourier basis satisfies

$$|(M_u)_{jk}| \leq \frac{C \|u\|_{H^s}}{(1 + |j-k|)^{s}},$$

with the faster rate $|j-k|^{-s-1/2}$ holding in a mean-square averaged sense. In particular:

  • $u \in H^1$: entries decay like $|j-k|^{-1}$, and like $|j-k|^{-3/2}$ on average.

  • $u \in H^2$: entries decay like $|j-k|^{-2}$, so the off-diagonal entries are summable.

  • $u \in C^\infty$: superalgebraic decay, meaning the matrix is numerically banded (entries drop below machine precision within a few diagonals).

  • $u$ analytic: exponential off-diagonal decay, making it essentially a banded matrix.
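The pointwise bound can be checked directly. A sketch, where the hat function and the choice $s = 1$ are mine, that computes the supremum of $|\hat{u}_k|(1+|k|)/\|u\|_{H^1}$ on a fine grid:

```python
import numpy as np

# Check the pointwise bound |u_hat_k| <= C ||u||_{H^s} (1 + |k|)^{-s} for a
# hat function with s = 1 (function choice and s are illustrative).
N = 4096
x = 2 * np.pi * np.arange(N) / N
u = np.minimum(x, 2 * np.pi - x)             # hat: in H^1, kinks at 0 and pi

uhat = np.fft.fft(u) / N                     # Fourier coefficients
k = np.fft.fftfreq(N, d=1 / N)               # integer wavenumbers
h1 = np.sqrt(np.sum((1 + k**2) * np.abs(uhat) ** 2))   # discrete H^1 norm

ratio = np.abs(uhat) * (1 + np.abs(k)) / h1
print(f"sup_k |u_hat_k| (1+|k|) / ||u||_H1 = {ratio.max():.3f}")
```

The supremum stays of order one, while the individual nonzero coefficients of the hat actually decay like $|k|^{-2}$, faster than the worst case the bound allows.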

The following picture makes this concrete.


The multiplication matrix in the Fourier basis for three functions of increasing regularity. Left: a step function (discontinuous) produces a full, slowly-decaying matrix. Center: a hat function ($H^1$) produces algebraic off-diagonal decay. Right: a Gaussian (analytic) produces exponential decay, so the matrix is effectively banded. The color scale is logarithmic.
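A minimal numerical version of the same comparison (the three test functions are my choices): since the entries along the $(j-k)$-th diagonal equal $\hat{u}_{j-k}$, it suffices to compare the decay of the Fourier coefficients themselves.

```python
import numpy as np

# Compare Fourier coefficient decay for a step (discontinuous), a hat (H^1),
# and a Gaussian bump (smooth); the test functions are illustrative choices.
# Since (M_u)_{jk} = u_hat_{j-k}, this is exactly the off-diagonal decay.
N = 1024
x = 2 * np.pi * np.arange(N) / N

funcs = {
    "step": (x < np.pi).astype(float),
    "hat": np.minimum(x, 2 * np.pi - x),
    "gauss": np.exp(-10 * (x - np.pi) ** 2),
}
decay = {name: np.abs(np.fft.fft(u)) / N for name, u in funcs.items()}
for name, uhat in decay.items():
    print(name, " ".join(f"|u_hat[{k}]|={uhat[k]:.1e}" for k in (1, 9, 65)))
```

At wavenumber 65 the three magnitudes already span many orders: algebraic decay for the step and hat, and machine-precision noise for the Gaussian.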

The intuition: regularity is frequency localization

Why should smoothness produce banded multiplication matrices? The intuition comes from three equivalent ways to say the same thing:

  1. Regularity = frequency concentration. A function in $H^s$ has Fourier coefficients decaying like $|k|^{-s-1/2}$ on average. Higher regularity means the function's energy is concentrated at low frequencies.

  2. Multiplication = convolution in frequency. The matrix $M_u$ is Toeplitz with entries $\hat{u}_{j-k}$. Applying $M_u$ to a vector of Fourier coefficients is convolution with the sequence $(\hat{u}_k)$.

  3. Convolving with something narrow produces something narrow. If the sequence $(\hat{u}_k)$ is concentrated near $k = 0$ (because $u$ is smooth), then convolution with it only couples mode $k$ to nearby modes $k \pm \Delta k$, which is exactly the statement that $M_u$ is approximately banded.

Putting these together: smooth functions act locally in frequency space. Multiplying by a smooth function is like a short-range interaction between Fourier modes. Multiplying by a rough function is a long-range interaction that couples all modes to all other modes.
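The short-range vs. long-range contrast can be measured directly: apply $M_u$ to a single Fourier mode $e_{k_0}$ and record how far from $k_0$ the response stays above a small threshold. Functions, sizes, and the threshold here are my own illustrative choices:

```python
import numpy as np

# Apply M_u to a single Fourier mode and see which modes it excites.
# For smooth u the coupling is short-range; for rough u it is long-range.
# Test functions, grid size, and the 1e-12 threshold are illustrative.
N, k0 = 256, 40
x = 2 * np.pi * np.arange(N) / N
mode = np.exp(1j * k0 * x)                        # the mode e_{k0}
d = ((np.arange(N) - k0 + N // 2) % N) - N // 2   # signed frequency distance to k0

bw = {}
for name, u in [("smooth", np.exp(np.cos(x))), ("rough", (x < np.pi).astype(float))]:
    what = np.abs(np.fft.fft(u * mode)) / N       # coefficients of u * e_{k0}
    excited = what > 1e-12 * what.max()
    bw[name] = int(np.abs(d[excited]).max())
    print(f"{name}: modes excited up to |Delta k| = {bw[name]}")
```

The smooth multiplier touches only a narrow band of neighboring modes; the step function couples $e_{k_0}$ to essentially every mode on the grid.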

Remark 2 (Analogy with integral operators)

This pattern is familiar from integral operators. A kernel $K(x,y)$ produces a banded matrix in some basis when the kernel is "close to diagonal." For multiplication, the kernel is $K(x,y) = u(x) \delta(x - y)$, which is as diagonal as possible in physical space. But expanding in the Fourier basis, this perfectly diagonal operator becomes the Toeplitz matrix $\hat{u}_{j-k}$, which is banded only when $u$ is smooth. The same operator can be diagonal in one basis and banded in another; the bandedness in frequency reflects the smoothness in space.

Connection to spectral and collocation methods

This structure is exactly what makes spectral methods efficient for smooth problems.

In a pseudospectral (collocation) method, we want to compute the Fourier (or Chebyshev) coefficients of a product $uv$. The naive approach is to form $M_u$ and multiply, an $O(N^2)$ operation. But the collocation approach exploits a factorization:

$$M_u = F \, \operatorname{diag}(\mathbf{u}) \, F^{-1}$$

where $F$ is the DFT matrix (mapping grid values to Fourier coefficients) and $\operatorname{diag}(\mathbf{u})$ contains the values of $u$ at the collocation points. Multiplication is diagonal in physical space, so we:

  1. Transform $\hat{v} \to v$ at collocation points (inverse FFT, $O(N \log N)$),

  2. Multiply pointwise: $w_j = u_j v_j$ ($O(N)$),

  3. Transform back $w \to \hat{w}$ (forward FFT, $O(N \log N)$).

The total cost is $O(N \log N)$ instead of $O(N^2)$. But here is the subtle point: this procedure is exact only for trigonometric polynomials of degree $\leq N$. For general smooth functions, the product $uv$ may have Fourier modes up to degree $2N$, and the modes above $N$ get aliased back into the $[-N, N]$ range.
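The three steps take a few lines with numpy's FFT. A sketch, with analytic test functions of my own choosing, that also measures the aliasing error against a reference product computed on a finer grid and truncated:

```python
import numpy as np

# Pseudospectral product via FFT (transform, multiply pointwise, transform
# back), with the aliasing error measured against a reference computed on a
# 4x finer grid and truncated. The test functions are illustrative choices.
N = 64
x = 2 * np.pi * np.arange(N) / N
u, v = np.exp(np.cos(x)), np.sin(3 * x)

w_hat = np.fft.fft(u * v) / N                 # collocation product on N points

M = 4 * N                                     # reference on a finer grid
xf = 2 * np.pi * np.arange(M) / M
ref = np.fft.fft(np.exp(np.cos(xf)) * np.sin(3 * xf)) / M

# keep the frequencies 0..N/2-1 and -N/2..-1 (same layout as w_hat)
keep = np.concatenate([np.arange(N // 2), np.arange(M - N // 2, M)])
err = np.max(np.abs(w_hat - ref[keep]))
print(f"aliasing error for analytic u: {err:.2e}")
```

For this analytic $u$ the coefficients of $uv$ beyond the grid's resolvable range are already below machine precision, so the aliasing error is negligible, exactly the point made above.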

The Banach algebra property tells us this aliasing is harmless for smooth functions: since $|\hat{u}_k| \lesssim |k|^{-s-1/2}$ on average, the aliased modes are exponentially small (for analytic $u$) or at least rapidly decaying (for Sobolev $u$). The error from aliasing is controlled by the off-diagonal decay of $M_u$, precisely the entries we truncate by working with a finite matrix.

Remark 3 (Chebyshev vs. Fourier)

For Chebyshev spectral methods, the multiplication matrix is not Toeplitz but almost-banded: the Chebyshev expansion $u = \sum_k a_k T_k$ gives a multiplication matrix with entries involving the linearization coefficients of Chebyshev products ($T_j T_k = \frac{1}{2}(T_{j+k} + T_{|j-k|})$), and the matrix has the same off-diagonal decay governed by the rate of decay of the Chebyshev coefficients $a_k$. The story is identical: regularity of $u$ controls the coefficient decay, which controls the bandwidth. This is the algebraic reason why collocation methods produce banded or nearly-banded discrete operators for smooth problems.
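A sketch of the Chebyshev multiplication matrix built from the linearization rule, checked against `numpy.polynomial.chebyshev.chebmul`; the coefficient vectors and truncation size are my own choices:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Chebyshev multiplication matrix from T_j T_k = (T_{j+k} + T_{|j-k|}) / 2,
# truncated to N x N; coefficients and sizes are illustrative choices.
def cheb_mult_matrix(a, N):
    """Matrix of v -> u*v in the Chebyshev basis, where u = sum_j a[j] T_j."""
    M = np.zeros((N, N))
    for k in range(N):                  # column k: image of T_k
        for j, aj in enumerate(a):
            for idx in (j + k, abs(j - k)):
                if idx < N:
                    M[idx, k] += 0.5 * aj
    return M

a = np.array([1.0, 0.5, 0.25, 0.125])   # Chebyshev coefficients of u
b = np.array([0.0, 1.0, 0.0, 2.0])      # Chebyshev coefficients of v
N = 8

Mu = cheb_mult_matrix(a, N)
approx = Mu @ np.pad(b, (0, N - len(b)))
exact = np.pad(C.chebmul(a, b), (0, N - len(C.chebmul(a, b))))
err = np.max(np.abs(exact - approx))
print(f"max error vs chebmul: {err:.2e}")
```

Each column of `Mu` spreads the coefficient $a_j$ onto the diagonals $j + k$ and $|j - k|$, which is the "almost-banded" structure: the bandwidth is set by how fast the $a_j$ decay.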

Remark 4 (The Banach algebra property as a spectral guarantee)

From the numerical analyst's perspective, the Banach algebra property of $H^s$ is a stability guarantee for nonlinear spectral methods: if $u, v \in H^s$ and you form their product spectrally (with or without aliasing), the result stays in $H^s$ with a controlled norm. This is essential for the convergence theory of spectral methods applied to nonlinear PDEs: it ensures that the nonlinearity does not create uncontrolled high-frequency content that would destabilize the computation.