2.1. Row Reduction (Gaussian Elimination)#
Last Revised September 16, 2024
References:
Section 2.1.1 Naive Gaussian elimination of [Sauer, 2022].
Section 6.1 Linear Systems of Equations of [Burden et al., 2016].
Section 7.1 of [Cheney and Kincaid, 2012].
2.1.1. Introduction#
This section introduces a basic algorithm for solving a system of \(n\) linear equations in \(n\) unknowns, based on the strategy of row-reduction or Gaussian elimination. You might have seen this strategy in a linear algebra course, but if not, do not worry: this chapter introduces everything from the beginning, not assuming any previous knowledge of linear algebra beyond matrix notation.
This section also takes the opportunity to introduce some general tactics for getting from mathematical facts to precise algorithms and then code that use them.
(Module numpy.linalg with standard nickname la)
As mentioned in the introduction, the Python package NumPy provides a lot of useful tools for numerical linear algebra through the module numpy.linalg.
Thus we will often start a section or Jupyter notebook that uses linear algebra with the following import commands, using the standard nicknames np and la:
import numpy as np
import numpy.linalg as la
# Import some items individually, so they can be used by "first name only".
from numpy import array, inf, zeros_like, empty
2.1.2. Strategy for getting from mathematical facts to a good algorithm and then to its implementation in Python code#
Here I take the opportunity to illustrate some useful strategies for getting from mathematical facts and ideas to good algorithms and working code for solving a numerical problem. The pattern we will see here, and often later, is:
Step 1. Get a basic algorithm:#
Start with mathematical facts (like the equations \(\sum_{j=1}^n a_{ij}x_j = b_i\)).
Solve to get an equation for each unknown (or for an updated approximation of each unknown) in terms of other quantities.
Specify an order of evaluation in which all the quantities on the right-hand side are evaluated before they are used.
In this, it is often best to start with a verbal description before specifying the details in more precise and detailed mathematical form.
Step 2. Refine to get a more robust algorithm:#
Identify cases that can lead to failure due to division by zero and such, and revise to avoid them.
Avoid inaccuracy due to problems like severe rounding error. One rule of thumb is that anywhere that a zero value is a fatal flaw (in particular, division by zero), a very small value is also a hazard when rounding error is present. So avoid very small denominators. (The section on Machine numbers, rounding error and Error Propagation examines this through the phenomenon of loss of significance.)
Step 3. Refine to get a more efficient algorithm#
For example,
Avoid repeated evaluation of exactly the same quantity.
Avoid redundant calculations, such as ones whose value can be determined in advance; for example, values that can be shown in advance to be zero.
Compare and choose between alternative algorithms.
2.1.3. Row Reduction, a.k.a. Gaussian Elimination#
We start by considering the most basic algorithm, based on ideas seen in a linear algebra course.
The problem is best stated as a collection of equations for individual numerical values:
Given coefficients \(a_{i,j},\ 1 \leq i \leq n,\, 1 \leq j \leq n\) and right-hand side values \(b_i,\ 1 \leq i \leq n\), solve for the \(n\) unknowns \(x_j,\ 1 \leq j \leq n\) in the equations

$$\sum_{j=1}^{n} a_{i,j} x_j = b_i, \quad 1 \leq i \leq n.$$
In verbal form, the basic strategy of row reduction, also known as Gaussian elimination, is this:
Choose one equation and use it to eliminate one chosen unknown from all the other equations, leaving that chosen equation plus \(n-1\) equations in \(n-1\) unknowns.
Repeat recursively, at each stage using one of the remaining equations to eliminate one of the remaining unknowns from all the other equations.
This gives a final equation in just one unknown, preceded by an equation in that unknown plus one other, and so on: solve them in this order, from last to first.
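For example (a tiny illustration for concreteness), for the two equations

$$\begin{split} x_1 + 2 x_2 &= 5 \\ 3 x_1 + 4 x_2 &= 11, \end{split}$$

subtracting \(3\) times the first equation from the second eliminates \(x_1\) there, leaving \(-2 x_2 = -4\); then solving from last to first gives \(x_2 = 2\) and \(x_1 = 5 - 2 x_2 = 1\).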
Determining those choices, to produce a first algorithm: “Naive row reduction”#
A precise algorithm must include rules specifying all the choices indicated above. The simplest “naive” choice, which works in most but not all cases, is to eliminate from top to bottom and from left to right:
Use the first equation to eliminate the first unknown from all other equations.
Repeat recursively, at each stage using the first remaining equation to eliminate the first remaining unknown. Thus, at step \(k\), equation \(k\) is used to eliminate unknown \(x_k\).
This gives one equation in just the last unknown \(x_n\); another equation in the last two unknowns \(x_{n-1}\) and \(x_n\), and so on: solve them in this reverse order, evaluating the unknowns from last to first.
This usually works, but can fail because at some stage the (updated) \(k\)-th equation might not include the \(k\)-th unknown: that is, its coefficient might be zero, leading to division by zero.
We will refine the algorithm to deal with that in the section on Partial Pivoting.
(Using Numpy for matrices, vectors and their products)
As of version 3.5 of Python, vectors, matrices, and their products can be handled very elegantly using Numpy arrays, with the one quirk that the product is denoted by the at-sign @.
That is, for a matrix \(A\) and compatible matrix or vector \(b\) both stored in Numpy arrays, their product is given by A @ b.
This means that, along with my encouragement to totally ignore Python arrays in favor of Numpy arrays, and to usually avoid Python lists when working with numerical data, I also recommend that you ignore the now obsolescent Numpy matrix data type, if you happen to come across it in older material on Numpy.
Aside: Why not A * b? Because that is the more general “point-wise” array product: c = A * b gives array c with c[i,j] equal to A[i,j] * b[i,j], which is not how matrix multiplication works.
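For example, with two small matrices (arbitrary values, just to show the difference):

import numpy as np
M = np.array([[1., 2.], [3., 4.]])
N = np.array([[5., 6.], [7., 8.]])
print(M @ N)  # the matrix product: [[19. 22.], [43. 50.]]
print(M * N)  # the point-wise product: [[ 5. 12.], [21. 32.]]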
2.1.4. The general case of solving \(Ax = b\), using Python and NumPy#
The problem of solving \(Ax = b\) in general, when all you know is that \(A\) is an \(n \times n\) matrix and \(b\) is an \(n\)-vector, can in most cases be handled well by using standard software rather than by writing your own code. Here is an example in Python, solving

$$\begin{split}
4 x_1 + 2 x_2 + 7 x_3 &= 2 \\
3 x_1 + 5 x_2 - 6 x_3 &= 3 \\
x_1 - 3 x_2 + 2 x_3 &= 4
\end{split}$$
using the array type from package numpy and the function solve from the linear algebra module numpy.linalg.
A = array([[4., 2., 7.], [3., 5., -6.],[1., -3., 2.]])
print(f"A =\n{A}")
b = array([2., 3., 4.])
print(f"b = {b}")
print(f"A @ b = {A @ b}")
A =
[[ 4. 2. 7.]
[ 3. 5. -6.]
[ 1. -3. 2.]]
b = [2. 3. 4.]
A @ b = [42. -3. 1.]
(floating point numbers versus integers)
It is important to specify that the entries are real numbers (type “float”); otherwise Numpy does integer arithmetic.
One way to do this is as above: putting a decimal point in the numbers (or to be lazy, in at least one of them!)
Another is to tell the function array that the type is float:
A = array([[4, 2, 7], [3, 5, -6],[1, -3, 2]], dtype=float)
b = array([2, 3, 4], dtype=float)
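One related pitfall is worth a quick demonstration: if an array is created with integer entries, assigning a non-integer value into it silently truncates that value to fit the integer type.

A_int = array([[4, 2, 7], [3, 5, -6], [1, -3, 2]])  # no decimal points, so the dtype is integer
A_int[0,0] = 0.9  # the float is truncated to fit the integer dtype
print(A_int[0,0])  # prints 0, not 0.9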
x = la.solve(A, b)
print("numpy.linalg.solve says that the solution of Ax = b is")
print(f"x = {x}")
# Check the backward error, also known as the residual
r = b - A @ x
print(f"\nAs a check, the residual (or backward error) is")
print(f" r = b-Ax = {r},")
print(f"and its infinity (or 'maximum') norm is ||r|| = {la.norm(r, inf)}")
print("\nAside: another way to compute this is with max(abs(r)):")
print(f"||r|| = {max(abs(r))}")
print(f"and its 1-norm is ||r|| = {la.norm(r, 1)}")
numpy.linalg.solve says that the solution of Ax = b is
x = [ 1.81168831 -1.03246753 -0.45454545]
As a check, the residual (or backward error) is
r = b-Ax = [0.0000000e+00 0.0000000e+00 8.8817842e-16],
and its infinity (or 'maximum') norm is ||r|| = 8.881784197001252e-16
Aside: another way to compute this is with max(abs(r)):
||r|| = 8.881784197001252e-16
and its 1-norm is ||r|| = 8.881784197001252e-16
(Not quite zero values and rounding)
Some values here that you might hope to be zero are instead very small non-zero numbers, of magnitude around \(10^{-16}\), due to rounding error in computer arithmetic. For details on this (like why “-16” in particular) see the section Machine Numbers, Rounding Error and Error Propagation.
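In fact, that stray value is an exact power of two, as one might guess from how computer arithmetic works:

print(2.0**-50)  # 8.881784197001252e-16, the same value as in the residual above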
2.1.5. The naive row reduction algorithm, in pseudo-code#
Here the elements of the transformed matrix and vector after step \(k\) are named \(a_{i,j}^{(k)}\) and \(b_{i}^{(k)}\), so that the original values are \(a_{i,j}^{(0)} = a_{i,j}\) and \(b_{i}^{(0)} = b_{i}\).
The name \(l_{i,k}\) is given to the multiple of row \(k\) that is subtracted from row \(i\) at step \(k\). This naming might seem redundant, but it becomes very useful later, in the section on LU factorization.
(naive row reduction)
for k from 1 to n-1 \(\qquad\) Step k: get zeros in column k below row k:
\(\quad\) for i from k+1 to n
\(\qquad\) Evaluate the multiple of row k to subtract from row i:
\(\quad\quad l_{i,k} = a_{i,k}^{(k-1)}/a_{k,k}^{(k-1)}\) \(\qquad\) If \(a_{k,k}^{(k-1)} \neq 0\)!
\(\qquad\) Subtract \((l_{i,k}\) times row k) from row i in matrix A …:
\(\quad\quad\) for j from 1 to n
\(\quad\quad\quad a_{i,j}^{(k)} = a_{i,j}^{(k-1)} - l_{i,k} a_{k,j}^{(k-1)}\)
\(\quad\quad\) end
\(\qquad\) … and at right, subtract \((l_{i,k}\) times \(b_k)\) from \(b_i\):
\(\quad\quad b_i^{(k)} = b_i^{(k-1)} - l_{i,k} b_{k}^{(k-1)}\)
\(\quad\) end
The rows before \(i=k\) are unchanged, so they are omitted from the update; however, in a situation where we need to complete the definitions of \(A^{(k)}\) and \(b^{(k)}\) we would also need the following inside the for k loop:
(Inserting the zeros below the main diagonal)
\(\quad\) for i from 1 to k
\(\quad\quad\) for j from 1 to n
\(\quad\quad\quad a_{i,j}^{(k)} = a_{i,j}^{(k-1)}\)
\(\quad\quad\) end
\(\quad\quad b_i^{(k)} = b_i^{(k-1)}\)
\(\quad\) end
However, the algorithm will usually be implemented by overwriting the previous values in an array with new ones, and then this part is redundant.
The next improvement in efficiency: at step \(k\), the updates in the first \(k\) columns produce values already known to be zero (that is the key idea of the algorithm!), so there is no need to compute or store those zeros, and thus the only calculations needed in the above for j from 1 to n loop are covered by for j from k+1 to n. From now on we use only the latter, except when, for demonstration purposes, we need those zeros.
Thus, the standard algorithm looks like this:
(basic row reduction)
for k from 1 to n-1 \(\qquad\) Step k: Get zeros in column k below row k:
\(\quad\) for i from k+1 to n \(\qquad\) Update only the rows that change: from k+1 on:
\(\qquad\) Evaluate the multiple of row k to subtract from row i:
\(\quad\quad l_{i,k} = a_{i,k}^{(k-1)}/a_{k,k}^{(k-1)}\) \(\qquad\) If \(a_{k,k}^{(k-1)} \neq 0\)!
\(\qquad\) Subtract \((l_{i,k}\) times row k) from row i in matrix A, in the columns that are not automatically zero:
\(\quad\quad\) for j from k+1 to n
\(\quad\quad\quad a_{i,j}^{(k)} = a_{i,j}^{(k-1)} - l_{i,k} a_{k,j}^{(k-1)}\)
\(\quad\quad\) end
\(\qquad\) and at right, subtract \((l_{i,k}\) times \(b_k)\) from \(b_i\):
\(\quad\quad b_i^{(k)} = b_i^{(k-1)} - l_{i,k} b_{k}^{(k-1)}\)
\(\quad\) end
end
(Syntax for for loops and 0-based array indexing)
Since array indices in Python (and in Java, C, C++, C#, Swift, etc.) start from zero, not from one, it will be convenient to express linear algebra algorithms in a form compatible with this.
Every index is one less than in the above! Thus in an array with \(n\) elements, the index values \(i\) are \(0 \leq i < n\), excluding n, which is the half-open interval of integers \([0, n)\).
In the indexing of an array, one can refer to the part of the array with indices \(a \leq i < b\), excluding b, with the slice notation a:b. Similarly, when specifying the range of consecutive integers \(i\), \(a \leq i < b\), in a for loop, one can use the expression range(a,b).
Also, when indices are processed in order (from low to high), these notes will abuse notation slightly, referring to the values as a set, specifically a semi-open interval of integers.
For example, the above loop
for j from k+1 to n:
first gets all indices lowered by one, to
for j from k to n-1:
and then this will sometimes be described in terms of the set of j values:
for j in [k,n):
which in Python becomes
for j in range(k, n):
This new notation needs care initially, but helps with clarity in the long run.
For one thing, it means that the indices of an \(n\)-element array, the semi-open interval \([0,n)\), are described by both range(0,n) and 0:n.
In fact, the case of “starting at the beginning”, with index zero, can be abbreviated: range(n) is the same as range(0,n), and :b is the same as 0:b.
Another advantage is that the index ranges a:b and b:c together cover the same indices as a:c, with no gap or duplication of b, and likewise range(a,b) and range(b,c) combine to cover range(a,c).
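A few quick checks of this correspondence between range, slicing, and semi-open intervals of integers:

v = array([10., 20., 30., 40., 50., 60.])
print(list(range(2, 5)))  # [2, 3, 4]: the integers in [2, 5)
print(v[2:5])  # [30. 40. 50.]: the elements with indices 2, 3, 4
print(v[:3])  # [10. 20. 30.]: the same as v[0:3]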
2.1.6. The naive row reduction algorithm, in Pythonic zero-based pseudo-code#
Here the above notational shift is made, along with eliminating the above-noted redundant formulas for values that are either zero or are unchanged from the previous step. It is also convenient for \(k\) to be the index of the row being used to reduce subsequent rows, and so also the index of the column in which values below the main diagonal are being set to zero.
for k in [0, n-1):
\(\quad\) for i in [k+1, n):
\(\quad\quad l_{i,k} = a_{i,k}^{(k)}/a_{k,k}^{(k)}\qquad\) If \(a_{k,k}^{(k)} \neq 0\)!
\(\quad\quad\) for j in [k+1, n):
\(\quad\quad\quad a_{i,j}^{(k+1)} = a_{i,j}^{(k)} - l_{i,k} a_{k,j}^{(k)}\)
\(\quad\quad\) end
\(\quad\quad b_i^{(k+1)} = b_i^{(k)} - l_{i,k} b_{k}^{(k)}\)
\(\quad\) end
end
2.1.7. The naive row reduction algorithm, in Python#
Conversion to actual Python code is now quite straightforward; there is little more to be done than:
Change the way that indices are described, from \(b_i\) to b[i] and from \(a_{i,j}\) to A[i,j].
Use case consistently in array names, since the quirk in mathematical notation of using upper-case letters for matrix names but lower-case letters for their elements is gone! In these notes, matrix names will be upper-case and vector names will be lower-case (even when a vector is considered as a 1-column matrix).
Rather than create a new array for each matrix \(A^{(0)}\), \(A^{(1)}\), etc. and each vector \(b^{(0)}\), \(b^{(1)}\), we overwrite each in the same array.
We will see that this simplicity in translation is quite common once algorithms have been expressed with zero-based indexing. The main ugliness is with loops that count backwards; see below.
for k in range(n-1):
    for i in range(k+1, n):
        L[i,k] = A[i,k] / A[k,k]
        for j in range(k+1, n):
            A[i,j] -= L[i,k] * A[k,j]
        b[i] -= L[i,k] * b[k]
To demonstrate this, some additions are needed:
Putting this algorithm into a function.
Getting the value \(n\) needed for the loop, using the fact that it is the length of vector b.
Creating the array \(L\).
Copying the input arrays A and b into new ones, U and c, so that the original arrays are not changed. That is, when the row reduction is completed, U contains \(A^{(n-1)}\) and c contains \(b^{(n-1)}\).
Also, for some demonstrations, the zero values below the main diagonal of U are inserted, though usually they would not be needed.
def rowReduce(A, b, demomode=False):
    """To avoid modifying the matrix and vector specified as input,
    they are copied to new arrays, with the method .copy()
    Warning: it does not work to say "U = A" and "c = b";
    this makes these names synonyms, referring to the same stored data.
    """
    U = A.copy()
    c = b.copy()
    n = len(b)
    # The function zeros_like() is used to create L with the same size and shape as A,
    # and with all its elements zero initially.
    L = np.zeros_like(A)
    for k in range(n-1):
        for i in range(k+1, n):
            # Compute the multiplier for row i at step k:
            L[i,k] = U[i,k] / U[k,k]  # Beware the case where U[k,k] is 0
            for j in range(k+1, n):
                U[i,j] -= L[i,k] * U[k,j]
            # Insert the below-diagonal zeros in column k;
            # this is not important for calculations, since those elements of U
            # are not used in backward substitution, but it helps for displaying
            # results and for checking the results via residuals.
            U[i,k] = 0.
            c[i] -= L[i,k] * c[k]
    return (U, c)
Note: As usual, you could omit the above def and instead import this function with
from numericalMethods import rowReduce
though that actually gives the slightly revised version seen below.
(U, c) = rowReduce(A, b)
print(f"Row reduction gives\nU =\n{U}")
print(f"c = {c}")
Row reduction gives
U =
[[ 4. 2. 7. ]
[ 0. 3.5 -11.25]
[ 0. 0. -11. ]]
c = [2. 1.5 5. ]
Let’s take advantage of the fact that we have used la.solve to get a very accurate approximation of the solution x of \(Ax=b\); this should also solve \(Ux=c\), so check the backward error, a.k.a. the residual:
r_Ab = b - A@x
print(f"\nThe residual (backward error), b-Ax, is {r_Ab}, with maximum norm {max(abs(r_Ab))}.")
r_Uc = c - U@x
print(f"\nThe residual for the row reduced form, c-Ux, is {r_Uc}, with maximum norm {max(abs(r_Uc))}.")
The residual (backward error), b-Ax, is [0.0000000e+00 0.0000000e+00 8.8817842e-16], with maximum norm 8.881784197001252e-16.
The residual for the row reduced form, c-Ux, is [ 0.00000000e+00 -2.22044605e-16 0.00000000e+00], with maximum norm 2.220446049250313e-16.
(Array slicing in Python)
Operations on a sequence of array indices, with “slicing”: vectorization
Python code can specify vector operations on a range of indices \([c,d)\), referred to with the slice notation c:d.
For example, the slice notation A[c:d,j] refers to the array containing the \(d-c\) elements A[i,j] for \(i\) in the semi-open interval \([c,d)\).
Thus, each of the three arithmetic calculations above can be specified over a range of index values in a single command, eliminating all the inner-most for loops; this is sometimes called vectorization.
Only for loops that contain other for loops remain.
Apart from mathematical elegance, this usually allows far faster execution.
for k in range(n-1):
    L[k+1:n,k] = U[k+1:n,k] / U[k,k]  # compute all the L values for column k
    for i in range(k+1, n):
        U[i,k+1:n] -= L[i,k] * U[k,k+1:n]  # Update row i
    c[k+1:n] -= L[k+1:n,k] * c[k]  # update c values
I will break my usual guideline by redefining rowReduce, since this is just a different statement of the same algorithm (though with some added demomode output).
def rowReduce(A, b, demomode=False):
    """To avoid modifying the matrix and vector specified as input,
    they are copied to new arrays, with the method .copy()
    Warning: it does not work to say "U = A" and "c = b";
    this makes these names synonyms, referring to the same stored data.
    """
    U = A.copy()
    c = b.copy()
    n = len(b)
    # The function zeros_like() is used to create L with the same size and shape as A,
    # and with all its elements zero initially.
    L = np.zeros_like(A)
    for k in range(n-1):
        if demomode: print(f"Step {k=}")
        # compute all the L values for column k:
        L[k+1:,k] = U[k+1:n,k] / U[k,k]  # Beware the case where U[k,k] is 0
        if demomode:
            print(f"The multipliers in column {k+1} are {L[k+1:,k]}")
        for i in range(k+1, n):
            U[i,k+1:n] -= L[i,k] * U[k,k+1:n]  # Update row i
        c[k+1:n] -= L[k+1:n,k] * c[k]  # update c values
        if demomode:
            # Insert the below-diagonal zeros in column k;
            # this is not important for calculations, since those elements of U
            # are not used in backward substitution, but it helps for displaying
            # results and for checking the results via residuals.
            U[k+1:, k] = 0.
            print(f"The updated matrix is\n{U}")
            print(f"The updated right-hand side is\n{c}")
    return (U, c)
(another way to select some rows or columns of a matrix)
As a variant on slicing, one can give a list of indices to select rows or columns of a matrix; for example:
A[[r1, r2, r3], :]
gives a three-row part of array A, and
A[2:, [c1, c2, c3, c4]]
selects the indicated four columns, but only from row 2 onwards.
This gives another way to describe the update of the lower-right block U[k+1:n,k+1:n] with a single matrix multiplication: it is the outer product of the part of column k of L below row k by the part of row k of U after column k.
To specify that these pieces of L and U are identified as a 1-column matrix and a 1-row matrix respectively, rather than as vectors, the above “row/column list” method must be used, with the list being just [k] in each case.
def rowReduce(A, b, demomode=False):
    """To avoid modifying the matrix and vector specified as input,
    they are copied to new arrays, with the method .copy()
    Warning: it does not work to say "U = A" and "c = b";
    this makes these names synonyms, referring to the same stored data.
    """
    U = A.copy()
    c = b.copy()
    n = len(b)
    # The function zeros_like() is used to create L with the same size and shape as A,
    # and with all its elements zero initially.
    L = np.zeros_like(A)
    for k in range(n-1):
        if demomode: print(f"Step {k=}")
        # compute all the L values for column k:
        L[k+1:,k] = U[k+1:n,k] / U[k,k]  # Beware the case where U[k,k] is 0
        if demomode:
            print(f"The multipliers in column {k+1} are {L[k+1:,k]}")
        U[k+1:n,k+1:n] -= L[k+1:n,[k]] @ U[[k],k+1:n]  # The new "outer product" method.
        # Insert the below-diagonal zeros in column k;
        # this is not important for calculations, since those elements of U
        # are not used in backward substitution, but it helps for displaying
        # results and for checking the results via residuals.
        U[k+1:n,k] = 0.0
        c[k+1:n] -= L[k+1:n,k] * c[k]  # update c values
        if demomode:
            print(f"The updated matrix is\n{U}")
            print(f"The updated right-hand side is\n{c}")
    return (U, c)
Repeating the above testing:
(U, c) = rowReduce(A, b, demomode=True)
print()
print(f"U =\n{U}")
print(f"c = {c}")
2.1.8. Backward substitution with an upper triangular matrix#
The transformed equations have the form

$$\begin{split}
u_{1,1} x_1 + u_{1,2} x_2 + \cdots + u_{1,n} x_n &= c_1 \\
u_{2,2} x_2 + \cdots + u_{2,n} x_n &= c_2 \\
\vdots \qquad & \qquad \vdots \\
u_{n,n} x_n &= c_n
\end{split}$$

and can be solved from bottom up, starting with \(x_n = c_n/u_{n,n}\).
All but the last equation can be written as

$$u_{i,i} x_i + \sum_{j=i+1}^{n} u_{i,j} x_j = c_i,$$

and so solved as

$$x_i = \frac{c_i - \sum_{j=i+1}^{n} u_{i,j} x_j}{u_{i,i}}.$$
This procedure is backward substitution, giving the algorithm
\(x_n = c_n/u_{n,n}\)
for i from n-1 down to 1
\(\displaystyle \quad x_i = \frac{c_i - \sum_{j=i+1}^{n} u_{i,j} x_j}{u_{i,i}}\)
end
This works so long as none of the main diagonal terms \(u_{i,i}\) is zero, because when done in this order, everything on the right hand side is known by the time it is evaluated.
For future reference, note that the elements \(u_{k,k}\) that must be non-zero here, the ones on the main diagonal of \(U\), are the same as the elements \(a_{k,k}^{(k-1)}\) that must be non-zero in the row reduction stage above, because from stage \(k\) onwards the elements of row \(k\) do not change any more: \(a_{k,k}^{(k-1)} = a_{k,k}^{(n-1)} = u_{k,k}\).
2.1.9. The backward substitution algorithm in zero-based pseudo-code#
Again, a zero-based version is more convenient for programming in Python (or Java, or C++):
\(x_{n-1} = c_{n-1}/u_{n-1,n-1}\)
for i from n-2 down to 0
\(\displaystyle \quad x_i = \frac{c_i - \sum_{j=i+1}^{n-1} u_{i,j} x_j}{u_{i,i}}\)
end
(Indexing from the end of an array and counting backwards)
To express the above backwards counting in Python, we have to deal with the fact that range(a,b) counts upwards and excludes the “end value” b.
The first part is easy: the extended form range(a, b, step) increments by step instead of by one, so that range(a, b, 1) is the same as range(a,b), and range(a, b, -1) counts down: \(a, a-1, \dots, b+1\).
But it still stops just before \(b\), so getting the values from \(n-1\) down to \(0\) requires using \(b = -1\), and so the slightly quirky expression range(n-1, -1, -1).
One more bit of Python: for an \(n\)-element single-index array v, the sum of its elements \(\sum_{i=0}^{n-1} v_i\) is given by sum(v).
Thus \(\sum_{i=a}^{b-1} v_i\), the sum over the subset of indices \([a,b)\), is given by sum(v[a:b]).
And remember that multiplication of Numpy arrays with * is pointwise.
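For example, combining slicing, point-wise multiplication, and sum gives exactly the kind of sum of products needed below:

u = array([1., 2., 3., 4.])
w = array([10., 20., 30., 40.])
print(u * w)  # [ 10.  40.  90. 160.]: the point-wise products
print(sum(u[1:3] * w[1:3]))  # 130.0 = 2*20 + 3*30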
The backward substitution algorithm in Python#
With all the above Python details, the core code for backward substitution is:
x[n-1] = c[n-1]/U[n-1,n-1]
for i in range(n-2, -1, -1):
    x[i] = (c[i] - sum(U[i,i+1:] * x[i+1:])) / U[i,i]
Note that the backward substitution algorithm and its Python coding have a nice mathematical advantage over the row reduction algorithm above: the precise mathematical statement of the algorithm does not need any intermediate quantities distinguished by superscripts \({}^{(k)}\), and correspondingly, all variables in the code have fixed meanings, rather than changing at each step.
In other words, all uses of the equal sign are mathematically correct as equations!
This can be advantageous in creating algorithms and code that is more understandable and more readily verified to be correct, and is an aspect of the functional programming approach. We will soon go part way to that functional ideal, by rephrasing row reduction in a form where all variables have clear, fixed meanings, corresponding to the natural mathematical description of the process: the method of LU factorization introduced in Solving Ax = b with LU factorization, A = L U.
(Another way to count backwards along an array)
On the other hand, there is an elegant way to access array elements “from the top down”.
Firstly (or “lastly”), x[-1] is the last element: the same as x[n-1] when n = len(x), but without needing to know that length \(n\). More generally, x[-i] is x[n-i].
Thus, one possibly more elegant way to describe backward substitution is to count with an increasing index, the “distance from the bottom”: from x[n-1], which is x[-1], to x[0], which is x[-n].
That is, index -i replaces index \(n - i\):
x[-1] = c[-1]/U[-1,-1]
for i in range(2, n+1):
    x[-i] = (c[-i] - sum(U[-i,1-i:] * x[1-i:])) / U[-i,-i]
There is still the quirk of having to “overshoot”, referring to n+1 in range to get to the final index -n.
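A quick illustration of this negative indexing:

x_demo = array([10., 20., 30.])
print(x_demo[-1])  # 30.0: the last element, the same as x_demo[2]
print(x_demo[-3])  # 10.0: the same as x_demo[0], since len(x_demo) is 3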
As a final demonstration, we put this second version of the code into a complete working Python function and test it:
def backwardSubstitution(U, c, demomode=False):
    """Solve U x = c for x, with U upper triangular."""
    n = len(c)
    x = np.zeros(n)
    x[-1] = c[-1]/U[-1,-1]
    if demomode: print(f"x_{n} = {x[-1]}")
    for i in range(2, n+1):
        x[-i] = (c[-i] - sum(U[-i,1-i:] * x[1-i:])) / U[-i,-i]
        if demomode: print(f"x_{n-i+1} = {x[-i]}")
    return x
which as usual is also available via
from numericalMethods import backwardSubstitution
x = backwardSubstitution(U, c)
print(f"x = {x}")
r = b - A@x
print(f"\nThe residual b - Ax = {r},")
print(f"with maximum norm {max(abs(r)):.3}.")
x = [ 1.81168831 -1.03246753 -0.45454545]
The residual b - Ax = [0.0000000e+00 0.0000000e+00 8.8817842e-16],
with maximum norm 8.88e-16.
Since one is often just interested in the solution given by the two steps of row reduction and then backward substitution, they can be combined in a single function by composition:
def solveLinearSystem(A, b):
    return backwardSubstitution(*rowReduce(A, b))
(On Python)
The * here takes the value to its right (a single tuple with two elements U and c) and “unpacks” it into the two separate variables U and c needed as input to backwardSubstitution.
solveLinearSystem(A, b)
array([ 1.81168831, -1.03246753, -0.45454545])
2.1.10. Two code testing hacks: starting from a known solution, and using randomly generated examples#
An often useful strategy in developing and testing code is to create a test case with a known solution; another is to use random numbers, to avoid accidentally using a test case that is unusually easy.
Preferred Python style is to have all import statements at the top, but since this is the first time we’ve heard of module random, I did not want it to be mentioned mysteriously above.
import random
x_random = empty(len(b))  # An array the same length as b, with no values specified yet
for i in range(len(x_random)):
    x_random[i] = random.uniform(-1, 1)  # a random real value, from the uniform distribution on [-1, 1]
print(f"x_random = {x_random}")
x_random = [-0.27352765 0.31619792 -0.02981991]
Create a right-hand side b that automatically makes x_random the correct solution:
b_random = A @ x_random
print(f"A =\n{A}")
print(f"\nb_random = {b_random}")
(U, c_random) = rowReduce(A, b_random)
print(f"\nU=\n{U}")
print(f"\nResidual c_random - U@x_random = {c_random - U@x_random}")
x_computed = backwardSubstitution(U, c_random)
print(f"\nx_computed = {x_computed}")
print(f"\nResidual b_random - A@x_computed = {b_random - A@x_computed}")
print(f"\nBackward error |b_random - A@x_computed| = {max(abs(b_random - A@x_computed))}")
print(f"\nError x_random - x_computed = {x_random - x_computed}")
print(f"\nAbsolute error |x_random - x_computed| = {max(abs(x_random - x_computed))}")
A =
[[ 4. 2. 7.]
[ 3. 5. -6.]
[ 1. -3. 2.]]
b_random = [-0.67045415 0.9393261 -1.28176123]
U=
[[ 4. 2. 7. ]
[ 0. 3.5 -11.25]
[ 0. 0. -11. ]]
Residual c_random - U@x_random = [0. 0. 0.]
x_computed = [-0.27352765 0.31619792 -0.02981991]
Residual b_random - A@x_computed = [0. 0. 0.]
Backward error |b_random - A@x_computed| = 0.0
Error x_random - x_computed = [0. 0. 0.]
Absolute error |x_random - x_computed| = 0.0
2.1.11. What can go wrong? Three examples#
(An obvious division by zero problem)
Consider the system of two equations

$$\begin{split}
x_2 &= 1 \\
x_1 + x_2 &= 2
\end{split}$$

It is easy to see that this has the solution \(x_1 = x_2 = 1\); in fact it is already in “reduced form”. However when put into matrix form

$$\begin{bmatrix} 0 & 1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$$

the above algorithm fails, because the first pivot element \(a_{11}\) is zero:
A1 = array([[0., 1.], [1. , 1.]])
b1 = array([1., 2.])
(U1, c1) = rowReduce(A1, b1)
print(f"U1 = \n{U1}")
print(f"c1 = {c1}")
x1 = backwardSubstitution(U1, c1)
print(f"x1 = {x1}")
U1 =
[[ 0. 1.]
[ 0. -inf]]
c1 = [ 1. -inf]
x1 = [nan nan]
/var/folders/xv/1pdcq_w93rl5n_1hk34hgvz1qc6snk/T/ipykernel_79766/2412833099.py:15: RuntimeWarning: divide by zero encountered in divide
L[k+1:,k] = U[k+1:n,k] / U[k,k] # Beware the case where U[k,k] is 0
/var/folders/xv/1pdcq_w93rl5n_1hk34hgvz1qc6snk/T/ipykernel_79766/1659329062.py:5: RuntimeWarning: invalid value encountered in scalar divide
x[-1] = c[-1]/U[-1,-1]
(On Python “Infinity” and “Not a Number”)
inf, meaning “infinity”, is a special value given as the result of operations like division by zero. Surprisingly, it can have a sign! (This is available in Python from package Numpy as numpy.inf.)
nan, meaning “not a number”, is a special value given as the result of calculations like 0/0. (This is available in Python from package Numpy as numpy.nan.)
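A small demonstration of these special values (note that dividing Numpy quantities by zero gives a RuntimeWarning rather than raising an exception, unlike plain Python floats):

zero = array(0.)
print(array(1.) / zero)  # inf
print(array(-1.) / zero)  # -inf: the sign is preserved
print(zero / zero)  # nan
print(np.nan == np.nan)  # False: nan is not equal even to itself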
(A less obvious division by zero problem)
Next consider this system

$$\begin{split}
x_1 + x_2 + x_3 &= 3 \\
x_1 + x_2 + 2 x_3 &= 4 \\
x_1 + 2 x_2 + 2 x_3 &= 5
\end{split}$$

The solution is \(x_1 = x_2 = x_3 = 1\), and this time none of the diagonal elements is zero, so it is not so obvious that a division by zero problem will occur, but:
A2 = array([[1., 1., 1.], [1., 1., 2.],[1., 2., 2.]])
b2 = array([3., 4., 5.])
(U2, c2) = rowReduce(A2, b2)
print(f"U2 = \n{U2}")
print(f"c2 = {c2}")
x2 = backwardSubstitution(U2, c2)
print(f"x2 = {x2}")
U2 =
[[ 1. 1. 1.]
[ 0. 0. 1.]
[ 0. 0. -inf]]
c2 = [ 3. 1. -inf]
x2 = [nan nan nan]
/var/folders/xv/1pdcq_w93rl5n_1hk34hgvz1qc6snk/T/ipykernel_79766/2412833099.py:15: RuntimeWarning: divide by zero encountered in divide
L[k+1:,k] = U[k+1:n,k] / U[k,k] # Beware the case where U[k,k] is 0
/var/folders/xv/1pdcq_w93rl5n_1hk34hgvz1qc6snk/T/ipykernel_79766/1659329062.py:5: RuntimeWarning: invalid value encountered in scalar divide
x[-1] = c[-1]/U[-1,-1]
What happens here is that the first stage subtracts the first row from each of the others …
A2[1,:] -= A2[0,:]
b2[1] -= b2[0]
A2[2,:] -= A2[0,:]
b2[2] -= b2[0]
… and the new matrix has the same problem as above at the next stage:
print(f"Now A2 is \n{A2}")
print(f"and b2 is {b2}")
Now A2 is
[[1. 1. 1.]
[0. 0. 1.]
[0. 1. 1.]]
and b2 is [3. 1. 2.]
Thus, the second and third equations are

$$\begin{split}
x_3 &= 1 \\
x_2 + x_3 &= 2
\end{split}$$

with the same problem as in Example 2.1.
(Problems caused by inexact arithmetic: “division by almost zero”)
The equations

$$\begin{split}
x_1 + 10^{16} x_2 &= 1 + 10^{16} \\
x_1 + x_2 &= 2
\end{split}$$

again have the solution \(x_1 = x_2 = 1\), and the only division that happens in the above algorithm for row reduction is by that pivot element \(a_{11} = 1 \neq 0\), so with exact arithmetic, all would be well. But:
A3 = array([[1., 1e16], [1. , 1.]])
b3 = array([1. + 1e16, 2.])
print(f"A3 = \n{A3}")
print(f"b3 = {b3}")
A3 =
[[1.e+00 1.e+16]
[1.e+00 1.e+00]]
b3 = [1.e+16 2.e+00]
(U3, c3) = rowReduce(A3, b3)
print(f"U3 = \n{U3}")
print(f"c3 = {c3}")
x3 = backwardSubstitution(U3, c3)
print(f"x3 = {x3}")
U3 =
[[ 1.e+00 1.e+16]
[ 0.e+00 -1.e+16]]
c3 = [ 1.e+16 -1.e+16]
x3 = [2. 1.]
This gets \(x_2 = 1\) correct, but \(x_1\) is completely wrong!
One hint is that \(b_1\), which should be \(1 + 10^{16} = 10000000000000001\), is instead just given as \(10^{16}\).
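A quick check of exactly where that information is lost:

print(1. + 1e16 == 1e16)  # True: adding 1 to 10^16 is lost to rounding
print(1. + 1e15 == 1e15)  # False: at 10^15, the sum 1 + 10^15 is still exact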
On the other hand, all is well with somewhat smaller values, like \(10^{15}\):
A3a = array([[1., 1e15], [1. , 1.]])
b3a = array([1. + 1e15, 2.])
print(f"A3a = \n{A3a}")
print(f"b3a = {b3a}")
A3a =
[[1.e+00 1.e+15]
[1.e+00 1.e+00]]
b3a = [1.e+15 2.e+00]
(U3a, c3a) = rowReduce(A3a, b3a)
print(f"U3a = \n{U3a}")
print(f"c3a = {c3a}")
x3a = backwardSubstitution(U3a, c3a)
print(f"x3a = {x3a}")
U3a =
[[ 1.e+00 1.e+15]
[ 0.e+00 -1.e+15]]
c3a = [ 1.e+15 -1.e+15]
x3a = [1. 1.]
(Avoiding small denominators)
The first equation in Example 2.3 can be divided by \(10^{16}\) to get an equivalent system with the same problem:

$$\begin{split}
10^{-16} x_1 + x_2 &= 1 + 10^{-16} \\
x_1 + x_2 &= 2
\end{split}$$

Now the problem is more obvious: this system differs from the system in Example 2.1 just by a tiny change of \(10^{-16}\) in the pivot element \(a_{11}\), and the problem is division by a value very close to zero.
A4 = array([[1e-16, 1.], [1. , 1.]])
b4 = array([1. + 1e-16, 2.])
print(f"A4 = \n{A4}")
print(f"b4 = {b4}")
A4 =
[[1.e-16 1.e+00]
[1.e+00 1.e+00]]
b4 = [1. 2.]
(U4, c4) = rowReduce(A4, b4)
print(f"U4 = \n{U4}")
print(f"c4 = {c4}")
x4 = backwardSubstitution(U4, c4)
print(f"x4 = {x4}")
U4 =
[[ 1.e-16 1.e+00]
[ 0.e+00 -1.e+16]]
c4 = [ 1.e+00 -1.e+16]
x4 = [2.22044605 1. ]
One might think that there is no such small denominator in Example 2.3, but what counts for being “small” is magnitude relative to other values — 1 is very small compared to \(10^{16}\).
To understand these problems more (and how to avoid them) we will explore Machine Numbers, Rounding Error and Error Propagation in the next section.
2.1.12. When naive Gaussian elimination is safe: diagonal dominance#
There are several important cases when we can guarantee that these problems do not occur. One obvious case is when the matrix \(A\) is diagonal and non-singular (so with all diagonal elements non-zero); then it is already row-reduced, and all denominators in backward substitution are non-zero.
A useful measure of being “close to diagonal” is diagonal dominance:
(Strict Diagonal Dominance)
A matrix \(A\) is row-wise strictly diagonally dominant, sometimes abbreviated as just strictly diagonally dominant or SDD, if

$$|a_{i,i}| > \sum_{j=1,\ j \neq i}^{n} |a_{i,j}| \quad \text{for each } i,\ 1 \leq i \leq n.$$

Loosely, each main diagonal element “dominates” in size over all the other elements in its row.
(Column-wise Strict Diagonal Dominance)
If instead

$$|a_{j,j}| > \sum_{i=1,\ i \neq j}^{n} |a_{i,j}| \quad \text{for each } j,\ 1 \leq j \leq n$$

(so that each main diagonal element “dominates its column”) the matrix is called column-wise strictly diagonally dominant.
Note that this is the same as saying that the transpose \(A^T\) is SDD.
Aside: If only the corresponding non-strict inequality holds, the matrix is called diagonally dominant.
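These definitions translate directly into code; here is a minimal sketch (a helper function written for illustration here, not part of module numericalMethods):

def isSDD(A):
    """Check whether square array A is row-wise strictly diagonally dominant."""
    n = len(A)
    for i in range(n):
        offDiagonalSum = sum(abs(A[i,:])) - abs(A[i,i])
        if abs(A[i,i]) <= offDiagonalSum:
            return False
    return True

Column-wise strict diagonal dominance can then be checked with isSDD(A.T), since that property is the same as the transpose being SDD.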
For any strictly diagonally dominant matrix \(A\), each of the intermediate matrices \(A^{(k)}\) given by the naive Gaussian elimination algorithm is also strictly diagonally dominant, and so the final upper triangular matrix \(U\) is too. In particular, all the diagonal elements \(a_{i,i}^{(k)}\) and \(u_{i,i}\) are non-zero, so no division by zero occurs in any of these algorithms, including the backward substitution solving for \(x\) in \(Ux = c\).
The corresponding fact is also true if the matrix is column-wise strictly diagonally dominant: that property is also preserved at each stage of naive Gaussian elimination.
Thus in each case the diagonal elements, the elements divided by in both row reduction and backward substitution, are in some sense safely away from zero. We will have more to say about this in the sections on pivoting and LU factorization.
For a column-wise SDD matrix, more is true: at stage \(k\), the diagonal dominance says that the pivot element on the diagonal, \(a_{k,k}^{(k-1)}\), is larger (in magnitude) than any of the elements \(a_{i,k}^{(k-1)}\) below it, so the multipliers \(l_{i,k}\) have

$$|l_{i,k}| = \frac{|a_{i,k}^{(k-1)}|}{|a_{k,k}^{(k-1)}|} < 1.$$

As we will see when we look at the effects of rounding error in the sections on Machine Numbers, Rounding Error and Error Propagation and Error bounds for linear algebra, keeping intermediate values small is generally good for accuracy, so this is a nice feature.
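As a quick experiment in the spirit of the random testing hack above (using the rowReduce function defined earlier), one can build a random row-wise SDD matrix and observe that the pivots stay safely away from zero:

n_sdd = 5
A_sdd = np.random.uniform(-1, 1, (n_sdd, n_sdd))
for i in range(n_sdd):
    A_sdd[i,i] = 1 + sum(abs(A_sdd[i,:]))  # make each diagonal element strictly dominate its row
b_sdd = np.random.uniform(-1, 1, n_sdd)
(U_sdd, c_sdd) = rowReduce(A_sdd, b_sdd)
print(np.diag(U_sdd))  # the pivots: none are anywhere near zero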
(Positive definite matrices)
Another class of matrices for which naive row reduction works well is positive definite matrices, which arise in many important situations; that property is in some sense more natural than diagonal dominance. However, that topic will be left for later.