2.4. Solving \(Ax = b\) with LU factorization, \(A = L U\)#

Last Revised September 23, 2024


2.4.1. Avoiding repeated calculation, excessive rounding and messy notation: LU factorization#

Putting aside pivoting for a while, there is another direction in which the algorithm for solving linear systems \(Ax=b\) can be improved. It starts with the idea of being more efficient when solving multiple systems that share the same matrix \(A\) but have different right-hand sides: \(A x^{(m)} = b^{(m)}, m=1,2, \dots\).

However it has several other benefits:

  • allowing a strategy to reduce rounding error, and

  • a simpler, more elegant mathematical statement.

We will see how to merge this with partial pivoting in Solving Ax = b With Both Pivoting and LU factorization.

Some useful jargon:

Definition 2.5 (Triangular matrix)

A matrix is triangular if all its non-zero elements are either on the main diagonal or to one side of it. There are two possibilities:

  • Matrix \(U\) is upper triangular if \(u_{ij} = 0\) for all \(i > j\).

  • Matrix \(L\) is lower triangular if \(l_{ij} = 0\) for all \(j > i\).

One important example of an upper triangular matrix is \(U\) formed by row reduction; note well that it is much quicker and easier to solve \(Ux = c\) than the original system \(Ax=b\) exactly because of its triangular form.

We will soon see that the multipliers \(l_{ij}\), \(i > j\) for row reduction that were introduced in Row Reduction (Gaussian Elimination) help to form a very useful lower triangular matrix \(L\).

The key to the LU factorization idea is finding a lower triangular matrix \(L\) and an upper triangular matrix \(U\) such that \(L U = A\), and then using the fact that it is far quicker to solve a linear system when the corresponding matrix is triangular.

Indeed we will see that, if naive Gaussian elimination for \(Ax=b\) succeeds, giving row-reduced form \(Ux = c\):

  1. The matrix \(A\) can be factorized as \(A = LU\) with \(U\) an \(n \times n\) upper triangular matrix and \(L\) an \(n \times n\) lower triangular matrix.

  2. There is a unique such factorization with the further condition that \(L\) is unit lower triangular, which means the extra requirement that the values on its main diagonal are unity: \(l_{k,k} = 1\). This is called the Doolittle Factorization of \(A\).

  3. In the Doolittle factorization, the matrix \(U\) is the one given by naive Gaussian elimination, and the elements of \(L\) below its main diagonal are the multipliers arising in naive Gaussian elimination. (The other elements of \(L\), on and above the main diagonal, are the ones and zeros dictated by it being unit lower triangular: the same as for those elements in the \(n \times n\) identity matrix.)

  4. The transformed right-hand side \(c\) arising from naive Gaussian elimination is the solution of the system \(Lc = b\), and this is solvable by a procedure called forward substitution, very similar to the backward substitution used to solve \(Ux = c\).

Putting all this together: if naive Gaussian elimination works for \(A\), we can introduce the name \(c\) for \(Ux\), and note that \(Ax = (LU)x = L(Ux) = Lc = b\). Then the system \(Ax = b\) can be solved in three steps, sketched in code after this list:

  1. Using \(A\), find the Doolittle factors, \(L\) and \(U\).

  2. Using \(L\) and \(b\), solve \(Lc = b\) to get \(c\). (Forward substitution)

  3. Using \(U\) and \(c\), solve \(Ux = c\) to get \(x\). (Backward substitution)
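As a preview, here is how those three steps will look in Python. This is a minimal sketch, assuming that arrays A and b are already defined, and using the functions lu_factorize, forwardSubstitution and backwardSubstitution that are developed or imported later in this section.

# Sketch of the three-step solution of Ax = b via the Doolittle factorization
(L, U) = lu_factorize(A)           # Step 1: factor A = LU
c = forwardSubstitution(L, b)      # Step 2: solve Lc = b for c
x = backwardSubstitution(U, c)     # Step 3: solve Ux = c for x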

2.4.2. The direct method for the Doolittle LU factorization#

If you believe the above claims, we already have one algorithm for finding an LU factorization; basically, do naive Gaussian elimination, but ignore the right-hand side \(b\) until later. However, there is another “direct” method, which does not rely on anything we have seen before about Gaussian elimination, and has other advantages as we will see.

(If I were teaching linear algebra, I would be tempted to start here and skip Gaussian Elimination!)

This method starts by considering the apparently daunting task of solving the \(n^2\) simultaneous and nonlinear equations for the initially unknown elements of \(L\) and \(U\):

\[ \sum_{k=1}^n l_{i,k} u_{k,j} = a_{i,j}, \quad 1 \leq i \leq n,\; 1 \leq j \leq n. \]

The first step is to insert the known information: the already-known values of elements of \(L\) and \(U\). For one thing, the sums above stop when either \(k=i\) or \(k=j\), whichever comes first, due to all the zeros in \(L\) and \(U\):

\[ \sum_{k=1}^{\min(i,j)} l_{i,k} u_{k,j} = a_{i,j}, \quad 1 \leq i \leq n,\; 1 \leq j \leq n. \]

Next, when \(i \leq j\)— so that the sum ends at \(k=i\) and involves \(l_{i,i}\) — we can use \(l_{i,i} = 1\).

So break up into two cases:

On and above the main diagonal (\(i \leq j\), so \(\min(i,j) = i\)):

\[\sum_{k=1}^{i-1}l_{i,k} u_{k,j}+u_{i,j}=a_{i,j} \quad 1\leq i \leq n,\; i \leq j \leq n.\]

Below the main diagonal (\(i > j\), so \(\min(i,j) = j\)):

\[\sum_{k=1}^{j-1} l_{i,k} u_{k,j} + l_{i,j} u_{j,j}= a_{i,j}, \quad 2 \leq i \leq n,\; 1 \leq j < i.\]

In each equation, the last term in the sum has been separated, so that we can use them to “solve” for an unknown:

\[\begin{split} \begin{split} u_{i,j} &= a_{i,j} - \sum_{k=1}^{i-1} l_{i,k} u_{k,j}, \quad 1 \leq i \leq n,\; i \leq j \leq n, \\ l_{i,j} &= \frac{a_{i,j} - \sum_{k=1}^{j-1} l_{i,k} u_{k,j}}{u_{j,j}}, \quad 2 \leq i \leq n,\; 1 \leq j < i. \end{split} \end{split}\]

Here comes the characteristic step that gets us from valid equations to a useful algorithm: we can arrange these equations in an order such that all the values at right are determined by an earlier equation!

First look at what they say for the first row and first column.

With \(i=1\) in the first equation, there is no sum, and so:

\[ u_{1,j}=a_{1,j}, \quad 1 \leq j \leq n, \]

which is the familiar fact that the first row is unchanged in naive Gaussian elimination.

Next, with \(j=1\) in the second equation, there is again no sum:

\[ l_{i,1} = \frac{a_{i,1}}{u_{1,1}}, = \frac{a_{i,1}}{a_{1,1}}, \quad 2 \leq i \leq n, \]

which are indeed the multipliers used in the first step of naive Gaussian elimination.
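For example, with the \(3 \times 3\) matrix used in the test case later in this section,

\[ A = \begin{bmatrix} 4 & 2 & 7 \\ 3 & 5 & -6 \\ 1 & -3 & 2 \end{bmatrix}, \quad \text{these formulas give} \quad u_{1,1}=4,\; u_{1,2}=2,\; u_{1,3}=7, \qquad l_{2,1} = \frac{3}{4} = 0.75, \quad l_{3,1} = \frac{1}{4} = 0.25. \]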

Remember that one way to think of Gaussian elimination is recursively: after step \(k\), one just applies the same process recursively to the smaller \((n-k) \times (n-k)\) matrix in the bottom-right-hand corner. We can do something similar here; at stage \(k\):

  1. First use the first of the above equations to solve for row \(k\) of \(U\), meaning just \(u_{k,j}, j \geq k\),

  2. Then use the second equation to solve for column \(k\) of \(L\): \(l_{i,k}, i > k\).

Algorithm 2.7 (Doolittle factorization)

Stage \(k=1\) is handled by the simpler special equations above, and for the rest:

for k from 2 to n
\(\quad\) for j from k to n \(\qquad\) Get the non-zero elements in row \(k\) of \(U\)
\(\quad\quad\) \(u_{k,j}=a_{k,j} - \sum_{s=1}^{k-1}l_{k,s}u_{s,j}\)
\(\quad\) end
\(\quad\) for i from k+1 to n \(\qquad\) Get the non-zero elements in column \(k\) of \(L\) (except the 1’s on its diagonal)
\(\quad\quad\) \(l_{i,k}=\displaystyle\frac{a_{i,k}-\sum_{s=1}^{k-1}l_{i,s}u_{s,k}}{u_{k,k}}\)
\(\quad\) end
end

Note well that in the formulas to evaluate at the right,

  1. The terms \(l_{k,s}\) are for \(s < k\), so from a column \(s\) that has already been computed for a previous \(k\) value.

  2. The terms \(u_{s,j}\) are for \(s < k\), so from a row \(s\) that has already been computed for a previous \(k\) value.

  3. The denominator \(u_{k,k}\) in the second inner loop is computed just in time, in the first inner loop for the same \(k\) value.

So the only thing that can go wrong is the same as before: a zero pivot element \(u_{k,k}\).

Remark 2.18 (On this algorithm)

  1. For \(k=n\), the second inner loop is redundant, so could be eliminated. Indeed it might need to be eliminated in actual code, where “empty loops” might not be allowed. On the other hand, allowing empty loops makes the above correct also for \(k=1\); then the for k loop encompasses the entire factorization algorithm.

  2. This direct factorization algorithm avoids any intermediate modification of arrays, and thus eliminates all those superscripts like \(a_{i,j}^{(k)}\). This is not only nicer mathematically, but can help to avoid mistakes like code that inadvertently modifies the array containing the matrix \(A\) and then uses it to compute the residual, \(b - Ax\). More generally, such purely mathematical statements of algorithms can help to avoid coding errors; this is part of the philosophy of the functional programming approach.

  3. Careful examination shows that the product \(l_{k,s} u_{s,j}\) that is part of what is subtracted at location \((k,j)\) is the same as what is subtracted there at stage \(k\) of Gaussian elimination, just with different names. More generally, every piece of arithmetic is the same as before, except arranged in a different order, so that the \(k-1\) changes made to an element in row \(k\) are done together, via those sums.

  4. Rewrites with zero-based indexing will be provided later.

# Import some items from modules individually, so they can be used by "first name only".
from numpy import array, zeros_like, identity
def lu_factorize(A, demoMode=False):
    """Compute the Doolittle LU factorization of A.
    Sums like $\sum_{s=1}^{k-1} l_{k,s} u_{s,j}$ are done as matrix products;
    in the above case, row matrix L[k, 1:k-1] by column matrix U[1:k-1,j] gives the sum for a give j,
    and row matrix L[k, 1:k-1] by matrix U[1:k-1,k:n] gives the relevant row vector.
    """
    n = len(A)  # len() gives the number of rows in a 2D array.
    # Initialize U as the zero matrix;
    # correct below the main diagonal, with the other entries to be computed below.
    U = zeros_like(A)
    # Initialize L as the identity matrix;
    # correct on and above the main diagonal, with the other entries to be computed below.
    L = identity(n)
    # Column and row 1 (i.e. Python index 0) are special:
    U[0,:] = A[0,:]
    L[1:,0] = A[1:,0]/U[0,0]
    if demoMode:
        print(f"After step k=0")
        print(f"U=\n{U}")
        print(f"L=\n{L}")
    for k in range(1, n-1):
        U[k,k:] = A[k,k:] - L[k,:k] @ U[:k,k:]
        L[k+1:,k] = (A[k+1:,k] - L[k+1:,:k] @ U[:k,k])/U[k,k]
        if demoMode:
            print(f"After step {k=}")
            print(f"U=\n{U}")
            print(f"L=\n{L}")
    # The last row (index "-1") is special: nothing to do for L
    U[-1,-1] = A[-1,-1] - sum(L[-1,:-1]*U[:-1,-1])
    if demoMode:
        print(f"After the final step, k={n-1}")
        print(f"U=\n{U}")
    return (L, U)

A test case on LU factorization#

A = array([[4, 2, 7], [3, 5, -6],[1, -3, 2]], dtype=float)
print(f"A=\n{A}")
A=
[[ 4.  2.  7.]
 [ 3.  5. -6.]
 [ 1. -3.  2.]]
(L, U) = lu_factorize(A, demoMode=True)
After step k=0
U=
[[4. 2. 7.]
 [0. 0. 0.]
 [0. 0. 0.]]
L=
[[1.   0.   0.  ]
 [0.75 1.   0.  ]
 [0.25 0.   1.  ]]
After step k=1
U=
[[  4.     2.     7.  ]
 [  0.     3.5  -11.25]
 [  0.     0.     0.  ]]
L=
[[ 1.    0.    0.  ]
 [ 0.75  1.    0.  ]
 [ 0.25 -1.    1.  ]]
After the final step, k=2
U=
[[  4.     2.     7.  ]
 [  0.     3.5  -11.25]
 [  0.     0.   -11.  ]]
print(f"A=\n{A}")
print(f"L=\n{L}")
print(f"U=\n{U}")
print(f"L times U is \n{L@U}")
print(f"The 'residual' A - LU is \n{A - L@U}")
A=
[[ 4.  2.  7.]
 [ 3.  5. -6.]
 [ 1. -3.  2.]]
L=
[[ 1.    0.    0.  ]
 [ 0.75  1.    0.  ]
 [ 0.25 -1.    1.  ]]
U=
[[  4.     2.     7.  ]
 [  0.     3.5  -11.25]
 [  0.     0.   -11.  ]]
L times U is 
[[ 4.  2.  7.]
 [ 3.  5. -6.]
 [ 1. -3.  2.]]
The 'residual' A - LU is 
[[0. 0. 0.]
 [0. 0. 0.]
 [0. 0. 0.]]

Forward substitution: solving \(Lc = b\) for \(c\)#

This is the last piece missing. The strategy is very similar to backward substitution, but slightly simplified by the ones on the main diagonal of \(L\). The equations \(L c = b\) can be written much as above, separating off the last term in the sum:

\[ \sum_{j=1}^{n} l_{i,j} c_j = b_i,\; 1 \leq i \leq n \]
\[ \sum_{j=1}^{i} l_{i,j} c_j = b_i,\; 1 \leq i \leq n \]
\[ \sum_{j=1}^{i-1} l_{i,j} c_j + c_i = b_i,\; 1 \leq i \leq n \]

Then solve for \(c_i\):

\[ c_i = b_i - \sum_{j=1}^{i-1} l_{i,j} c_j \]

These are already in a usable order: the right-hand side of the equation for \(c_i\) involves only the values \(c_j\) with \(j < i\), which are determined by earlier equations if we run through the index \(i\) in increasing order.

First, \(i=1\)

\[ c_1 = b_1 - \sum_{j=1}^{0} l_{1,j} c_j, = b_1 \]

Next, \(i=2\)

\[ c_2 = b_2 - \sum_{j=1}^{1} l_{2,j} c_j, = b_2 - l_{2,1}c_1 \]

Next, \(i=3\)

\[ c_3 = b_3 - \sum_{j=1}^{2} l_{3,j} c_j, = b_3 - l_{3,1} c_1 - l_{3,2} c_2 \]

I leave expressing this in pseudo-code (adjusted to zero-based indexing) as an exercise; in Python it is used here via the implementation available from the module numericalMethods, imported with

from numericalMethods import forwardSubstitution

Exercise 1#

A) Express this forward substitution strategy as pseudo-code, adjusting to Python’s zero-based indexing. Spell out all the sums explicitly rather than using ‘\(\Sigma\)’ notation or any matrix multiplication short-cut.

B) Then implement it “directly” in a Python function, with format:

def forwardSubstitution(L, b):
    . . .
    return c

Again do this with explicit evaluation of each sum rather than using the function sum or any matrix multiplication short-cut.

C) Test it, using this often-useful “reverse-engineering” tactic:

  1. Create suitable test arrays L and c. (Use \(n\) at least three, and preferably larger.)

  2. Compute their product, with b = L @ c

  3. Check if c_solution = forwardSubstitution(L, b) gives the correct value (within rounding error.)

As usual, there is also an implementation of forward substitution available from the module numericalMethods, so that is used here. (It is not in the form asked for in the above exercise!)
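For reference, here is a minimal sketch of what such a function could look like, written with a matrix-product short-cut for the sum over \(j < i\) (so deliberately not in the explicit-sum form asked for in Exercise 1); the actual version in numericalMethods may differ in detail.

from numpy import zeros_like
def forwardSubstitution(L, b):
    """Solve Lc = b for c, where L is unit lower triangular.
    A sketch: the sum over j < i is done as a matrix product, as in lu_factorize above.
    """
    c = zeros_like(b)
    c[0] = b[0]  # since l_{1,1} = 1, the first equation gives c_1 = b_1
    for i in range(1, len(b)):
        c[i] = b[i] - L[i, :i] @ c[:i]
    return c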

A test case on forward substitution#

b = array([2., 3., 4.])
c = forwardSubstitution(L, b)
print(f"c = {c}")
print(f"The residual b - Lc is {b - L@c}")
print(f"\t with maximum norm {max(abs(b - L@c)):0.3}")
c = [2.  1.5 5. ]
The residual b - Lc is [0. 0. 0.]
	 with maximum norm 0.0

Completing the test case, with backward substitution#

As this step is unchanged, just import the version seen in a previous section.

from numericalMethods import backwardSubstitution
x = backwardSubstitution(U, c)
print(f"The residual c - Ux for the backward substitution step is {c - U@x}")
print(f"\t with maximum norm {max(abs(c - U@x)):0.3}")
print(f"The residual b - Ax for the whole solving process is {b - A@x}")
print(f"\t with maximum norm {max(abs(b - A@x)):0.3}")
The residual c - Ux for the backward substitution step is [0. 0. 0.]
	 with maximum norm 0.0
The residual b - Ax for the whole solving process is [ 0.0000000e+00 -4.4408921e-16  8.8817842e-16]
	 with maximum norm 8.88e-16

Exercise 2#

(An ongoing activity.)

Start building a Python module linearalgebra in a file linearalgebra.py, with all our linear algebra functions: for now, forwardSubstitution(L, b) as above and also rowReduce(A, b) and backwardSubstitution(U, c) from the section Row Reduction (Gaussian Elimination).

Include testing/demonstration at the bottom of the module definition file, in the block of the if statement

if __name__ == "__main__":

We will add the Doolittle method and such in a while, and could use this module in assignments and projects.
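A minimal sketch of how such a module file might be laid out is below; the function bodies are elided, and the test at the bottom is just a placeholder.

# linearalgebra.py: sketch of the module layout, with function bodies to be filled in.

def rowReduce(A, b):
    """Row reduce the system Ax = b; return the upper triangular U and new right-hand side c."""
    ...

def forwardSubstitution(L, b):
    """Solve Lc = b for c by forward substitution."""
    ...

def backwardSubstitution(U, c):
    """Solve Ux = c for x by backward substitution."""
    ...

if __name__ == "__main__":
    # This block runs only when the file is executed directly,
    # not when the module is imported: put testing/demonstration code here.
    print("Testing module linearalgebra ...")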

Creating modules#

One way to do this is with Spyder (or another Python IDE). However, if you prefer working primarily with JupyterLab and Jupyter notebooks, one way to create this module is to first put the function def’s and testing code in a notebook linearalgebra.ipynb and then convert that to linearalgebra.py with the JupyterLab menu command

File > Export Notebook As ... > Export Notebook to Executable Script

As an example of creating a module, I am creating one as we go along in this course, via the notebook Notebook for generating the module numericalMethods and Python code file numericalMethods.py derived from that, which defines module numericalMethods.

The notebook version is in the Appendices of this Jupyter book.

2.4.3. When does LU factorization work?#

It was seen in the section Partial Pivoting that naive Gaussian elimination works for strictly diagonally dominant (SDD) matrices (in the sense that division by zero is avoided), so one good result is:

Theorem 2.3

Any SDD matrix has a Doolittle factorization \(A=LU\), with the diagonal elements of \(U\) all non-zero, so backward substitution also works.

For any column-wise SDD matrix, this LU factorization exists and is also “optimal”, in the sense that it follows what you would do with maximal element partial pivoting.

This nice second property can be obtained for SDD matrices via a twist, or actually a transpose.

For an SDD matrix, its transpose \(B = A^T\) is column-wise SDD and so has the nice Doolittle factorization described above: \(B = L_B U_B\), with \(L_B\) being column-wise diagonally dominant and having ones on the main diagonal.

Transposing back, \(A = B^T = (L_B U_B)^T = U_B^T L_B^T\), and defining \(L = U_B^T\) and \(U = L_B^T\),

  • \(L\) is lower triangular

  • \(U\) is upper triangular, row-wise diagonally dominant and with ones on its main diagonal: it is “unit upper triangular”.

  • Thus \(LU\) is another LU factorization of \(A\), with \(U\) rather than \(L\) being the factor with ones on its main diagonal.

2.4.4. Crout decomposition#

This sort of \(LU\) factorization is called the Crout decomposition; as with the Doolittle version, if such a factorization exists, it is unique.

Theorem 2.4

Every SDD matrix has a Crout decomposition, and the factor \(U\) is SDD.
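To illustrate Theorem 2.3 and the Crout construction numerically, here is a small sketch using the function lu_factorize from above on a hypothetical SDD matrix (made up just for illustration): first the Doolittle factorization of \(A\) itself, then a Crout decomposition obtained by factorizing \(A^T\) and transposing back.

from numpy import array, diag
# A hypothetical row-wise SDD matrix: each diagonal entry exceeds the sum of
# the magnitudes of the other entries in its row.
A_sdd = array([[4., 1., 2.], [1., 5., -3.], [2., -1., 6.]])

# Doolittle factorization of A: the diagonal of U should be entirely non-zero.
(L_doo, U_doo) = lu_factorize(A_sdd)
print(f"Doolittle: the diagonal of U is {diag(U_doo)}")

# Crout decomposition via the transpose trick: factorize B = A^T, then transpose back.
(L_B, U_B) = lu_factorize(A_sdd.T)
L_crout = U_B.T  # lower triangular, with a general main diagonal
U_crout = L_B.T  # unit upper triangular
print(f"Crout: the residual A - LU is\n{A_sdd - L_crout @ U_crout}")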

Remark 2.19

As was mentioned at the end of the section Row Reduction (Gaussian Elimination), naive row reduction also works for positive definite matrices, and thus so does the Doolittle LU factorization. However, there is another LU factorization that works even better in that case, the Cholesky factorization; this topic might be returned to later.

2.4.5. Exercises (Coming Soon)#