Exercises on Solving Simultaneous Linear Equations#

Exercise 1#

A) Solve the system of equations

\[\begin{split} \left[ \begin{array}{ccc} 4. & 2. & 1. \\ 9. & 3. & 1. \\ 25. & 5. & 1. \end{array} \right] x = \left[ \begin{array}{c} 0.693147 \\ 1.098612 \\ 1.609438 \end{array} \right] \end{split}\]

by naive Gaussian elimination. Do this by hand, rounding each intermediate result to four significant digits, and write down each intermediate version of the system of equations.

B) Compute the residual vector \(r := b - A x_a \) and residual maximum norm \(\| r \|_\text{max} = \| b - A x_a \|_\text{max}\) for your approximation. Residual calculations must be done to high precision, so I recommend that you do this part with Python in a notebook.
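For Part B, a minimal NumPy sketch along the following lines will do; here `x_a` is a placeholder array that you should replace with your own hand-computed approximation.

```python
import numpy as np

# The system from Part A
A = np.array([[4., 2., 1.],
              [9., 3., 1.],
              [25., 5., 1.]])
b = np.array([0.693147, 1.098612, 1.609438])

# Placeholder: replace with your hand-computed approximation
x_a = np.array([0., 0., 0.])

r = b - A @ x_a                     # residual vector, at full machine precision
r_max = np.linalg.norm(r, np.inf)   # maximum ("infinity") norm of the residual
print("r =", r)
print("||r||_max =", r_max)
```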

Exercise 2#

Repeat Exercise 1, except using maximal element partial pivoting. Then compare the residuals for the two methods (with and without pivoting), and comment.
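If you want to check the row swaps chosen in your hand computation, here is a minimal sketch of just the pivot selection and row swap used in maximal element partial pivoting at step k; the elimination itself then proceeds as in the row-reduction algorithm given below, and the function name `pivot_and_swap` is simply my choice for this illustration.

```python
import numpy as np

def pivot_and_swap(A, b, k):
    """Swap row k of (A, b) with the row at or below it whose column-k entry
    has the largest absolute value (maximal element partial pivoting)."""
    p = k + np.argmax(np.abs(A[k:, k]))
    if p != k:
        A[[k, p], :] = A[[p, k], :]   # swap rows k and p of A ...
        b[[k, p]] = b[[p, k]]         # ... and the same rows of b
    return A, b
```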

Exercise 3#

A) Compute the Doolittle LU factorization of the matrix

\[\begin{split} A = \left[ \begin{array}{ccc} 4. & 2. & 1. \\ 9. & 3. & 1. \\ 25. & 5. & 1. \end{array} \right] \end{split}\]

as in the above exercises.

As above, do this by hand, rounding values to four significant digits, and describing the intermediate steps. (Feel free to then corroborate your hand working with Python code, but the results will not be exactly the same!)

B) Use this LU factorization to solve \(A x = b\) for \(x\) where

\[\begin{split} b = \left[ \begin{array}{c} 0.693147 \\ 1.098612 \\ 1.609438 \end{array} \right] \end{split}\]

as above.
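To corroborate the hand working in Parts A and B, here is a minimal sketch of the Doolittle factorization without pivoting, followed by forward and backward substitution, done at full machine precision (so, as noted above, the values will differ slightly from the four-significant-digit hand computation).

```python
import numpy as np

A = np.array([[4., 2., 1.],
              [9., 3., 1.],
              [25., 5., 1.]])
b = np.array([0.693147, 1.098612, 1.609438])
n = len(b)

# Doolittle LU factorization (no pivoting): L unit lower triangular,
# U upper triangular, with A = L U.
L = np.eye(n)
U = A.copy()
for k in range(n - 1):
    for i in range(k + 1, n):
        L[i, k] = U[i, k] / U[k, k]
        U[i, k:] -= L[i, k] * U[k, k:]
        U[i, k] = 0.0                 # zero by construction

# Forward substitution: solve L c = b.
c = np.zeros(n)
for i in range(n):
    c[i] = b[i] - L[i, :i] @ c[:i]

# Backward substitution: solve U x = c.
x = np.zeros(n)
for i in range(n - 1, -1, -1):
    x[i] = (c[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]

print("L =\n", L, "\nU =\n", U, "\nx =", x)
```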

Some Relevant Algorithms#

Row reduction#

The basic algorithm for row reduction is

for k from 1 to n-1: \(\qquad\) Step k: get zeros in column k below row k:
\(\quad\) for i from k+1 to n: \(\qquad\) update only the rows that change, i.e. rows \(k+1\) to \(n\):
\(\qquad\) The multiple of row k to subtract from row i:
\(\quad\quad l_{i,k} = a_{i,k}^{(k-1)}/a_{k,k}^{(k-1)}\) \(\qquad\) If \(a_{k,k}^{(k-1)} \neq 0\)!
\(\qquad\) Subtract (\(l_{i,k}\) times row k) from row i in matrix A, in the columns that are not automatically zero:
\(\quad\quad\) for j from k+1 to n:
\(\quad\quad\quad a_{i,j}^{(k)} = a_{i,j}^{(k-1)} - l_{i,k} a_{k,j}^{(k-1)}\)
\(\quad\quad\) end for
\(\qquad\) and on the right-hand side, subtract (\(l_{i,k}\) times \(b_k\)) from \(b_i\):
\(\quad\quad b_i^{(k)} = b_i^{(k-1)} - l_{i,k} b_{k}^{(k-1)}\)
\(\quad\) end for
end for
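For reference, here is a minimal Python version of this algorithm (with 0-based indexing, and without the four-significant-digit rounding of the hand exercises); the function name `row_reduce` is just my choice for this sketch.

```python
import numpy as np

def row_reduce(A, b):
    """Naive Gaussian elimination: return the upper triangular matrix U and
    updated right-hand side c produced from copies of A and b."""
    U = np.array(A, dtype=float)
    c = np.array(b, dtype=float)
    n = len(c)
    for k in range(n - 1):            # step k: get zeros in column k below row k
        for i in range(k + 1, n):     # update only rows k+1, ..., n-1
            l = U[i, k] / U[k, k]     # multiple of row k to subtract; assumes U[k, k] != 0
            U[i, k+1:] -= l * U[k, k+1:]
            U[i, k] = 0.0             # this entry is zero by construction
            c[i] -= l * c[k]
    return U, c
```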

Backward substitution with an upper triangular matrix#

The basic algorithm for backward substitution is

\(x_n = c_n/u_{n,n}\)
for i from n-1 down to 1
\(\displaystyle \quad x_i = \frac{c_i - \sum_{j=i+1}^{n} u_{i,j} x_j}{u_{i,i}}\)
end for
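Again for reference, a minimal Python version (0-based indexing):

```python
import numpy as np

def backward_substitution(U, c):
    """Solve U x = c for x, with U upper triangular and nonzero diagonal."""
    n = len(c)
    x = np.zeros(n)
    x[n-1] = c[n-1] / U[n-1, n-1]
    for i in range(n - 2, -1, -1):
        x[i] = (c[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x
```

Combined, `row_reduce` and `backward_substitution` can be used to check the hand computations in the exercises above, apart from the rounding to four significant digits.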


This work is licensed under Creative Commons Attribution-ShareAlike 4.0 International