Solving special matrices#
Solving diagonal matrices#
Let \(\boldsymbol{D}\) be a diagonal \(n \times n\) matrix, i.e. the only non-zero elements are those on the main diagonal of the matrix.
Solution. Note that in this case each equation is "uncoupled" from the others and we can write

\[
x_i = \frac{b_i}{d_{ii}}, \qquad i = 1, \dots, n.
\]
The cost of this algorithm is \(n\) floating point operations.
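The procedure can be sketched in a few lines of Python; the function name and the use of NumPy are my own choices, not part of the text:

```python
import numpy as np

def solve_diagonal(D, b):
    """Solve D x = b when D is diagonal: each equation is uncoupled."""
    d = np.diag(D)      # the main-diagonal entries d_ii
    return b / d        # x_i = b_i / d_ii, one division per unknown (n flops)

D = np.diag([2.0, 4.0, 5.0])
b = np.array([2.0, 8.0, 15.0])
x = solve_diagonal(D, b)
```

Since only the \(n\) diagonal entries are touched, the \(n\)-flop cost is visible directly in the single componentwise division.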
Upper triangular matrices (Backward Substitution)#
Let \(\boldsymbol{U}\) be an upper triangular \(n \times n\) matrix i.e. all elements below the main-diagonal are zero.
Note that in this case, the last equation (i.e. the last row) is "uncoupled" from the other rows. Thus we can write

\[
x_n = \frac{b_n}{u_{nn}}.
\]
Then consider the second to last equation (i.e. the second to last row). This row now reads

\[
u_{n-1,n-1} x_{n-1} + u_{n-1,n} x_n = b_{n-1}.
\]
Note that since we already know \(x_n\), this is a single linear equation for a single unknown, hence we can solve for \(x_{n-1}\). This gives us

\[
x_{n-1} = \frac{b_{n-1} - u_{n-1,n} x_n}{u_{n-1,n-1}}.
\]
Next consider the next row up; once again we find a linear equation with a single unknown, \(x_{n-2}\), which we can solve. We can repeat this procedure until we arrive at the topmost row, leading to the following algorithm.
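The steps above can be sketched as follows; this is an illustrative implementation (the function name and NumPy usage are my own), assuming a system \(\boldsymbol{U}\boldsymbol{x} = \boldsymbol{b}\) with non-zero diagonal entries:

```python
import numpy as np

def backward_substitution(U, b):
    """Solve U x = b for upper triangular U, working from the last row up."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # subtract the contributions of the already-computed unknowns
        # x_{i+1}, ..., x_n, then divide by the diagonal entry
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

U = np.array([[2.0, 1.0, 1.0],
              [0.0, 3.0, 2.0],
              [0.0, 0.0, 4.0]])
b = np.array([5.0, 8.0, 4.0])
x = backward_substitution(U, b)
```

Note that for the last row the slice `U[i, i + 1:]` is empty and the dot product contributes nothing, so that row reduces to a single division, exactly as in the derivation above.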
What is the cost of this algorithm? We count the number of floating point operations (flops):
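One way to carry out this count (a sketch; the exact grouping of operations is a convention): row \(i\) requires \(n - i\) multiplications, \(n - i - 1\) additions, one subtraction, and one division, i.e. \(2(n - i) + 1\) flops in total, so

\[
\sum_{i=1}^{n} \bigl( 2(n-i) + 1 \bigr) = 2 \cdot \frac{n(n-1)}{2} + n = n^2 .
\]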
Thus solving an upper triangular \(n \times n\) system costs \(n^2\) floating point operations.
Lower triangular matrices (Forward Substitution)#
Consider the case when \(\boldsymbol{L}\) is lower triangular. Note that \(\boldsymbol{L}^T\) is upper triangular, so we could in principle reuse the ideas from the upper triangular case. However, since this is such a fundamental algorithm we present it in full detail.
Note how the pattern here is different compared to the upper triangular case: we essentially flip our strategy around. Starting with the first equation (i.e. the first row) we can write

\[
x_1 = \frac{b_1}{l_{11}}.
\]
Then consider the second equation (i.e. the second row). This row now reads

\[
l_{21} x_1 + l_{22} x_2 = b_2.
\]
Note that since we already know \(x_1\), this is a single linear equation for a single unknown, hence we can solve for \(x_{2}\). This gives us

\[
x_2 = \frac{b_2 - l_{21} x_1}{l_{22}}.
\]
Next consider the next row down; once again we find a linear equation with a single unknown, \(x_{3}\), which we can solve. We can repeat this procedure until we arrive at the last row, leading to the following algorithm.
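Mirroring the upper triangular case, the procedure can be sketched as follows (again an illustrative implementation with names of my own choosing, assuming \(\boldsymbol{L}\boldsymbol{x} = \boldsymbol{b}\) with non-zero diagonal entries):

```python
import numpy as np

def forward_substitution(L, b):
    """Solve L x = b for lower triangular L, working from the first row down."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n):
        # subtract the contributions of the already-computed unknowns
        # x_1, ..., x_{i-1}, then divide by the diagonal entry
        x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x

L = np.array([[2.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [1.0, 2.0, 4.0]])
b = np.array([2.0, 7.0, 9.0])
x = forward_substitution(L, b)
```

The only structural difference from backward substitution is the direction of the loop and the slice of already-known unknowns, which is why the flop count comes out the same.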
The cost of this algorithm is also \(n^2\) floating point operations, the same as for solving an upper triangular system.