1.1. Root Finding by Interval Halving (Bisection)#
Last revised on August 26, 2024
References:
Section 1.1 The Bisection Method in Numerical Analysis by Sauer [Sauer, 2022]
Section 2.1 The Bisection Method in Numerical Analysis by Burden, Faires and Burden [Burden et al., 2016]
Section 3.1 Bisection Method of [Cheney and Kincaid, 2012]
(See the Bibliography.)
1.1.1. Introduction#
One of the most basic tasks in numerical computing is finding the roots (or “zeros”) of a function — solving the equation \(f(x) = 0\) where \(f:\mathbb{R} \to \mathbb{R}\) is a continuous function from and to the real numbers. As with many topics in this course, there are multiple methods that work, and we will often start with the simplest and then seek improvement in several directions:
reliability or robustness — how good it is at avoiding problems in hard cases, such as division by zero.
accuracy and guarantees about accuracy like estimates of how large the error can be — since in most cases, the result cannot be computed exactly.
speed or cost — often improved by minimizing the amount of arithmetic involved, or the number of times that a function must be evaluated.
(Solve \(x = \cos x\))
This is a simple equation for which there is no exact formula for a solution, but we can easily ensure that there is a solution, and moreover, a unique one. It is convenient to put the equation into “zero-finding” form \(f(x) = 0\), by defining
\(f(x) = x - \cos x\)
Also, note that \(|\cos x| \leq 1\), so a solution to the original equation must have \(|x| \leq 1\). So we will start by graphing the function on the interval \([a, b] = [-1, 1]\).
(On Python)
This is our first use of two Python packages that you might not have seen before: Numpy and Matplotlib. If you want to learn more about them, see for example the Python Tutorial sections on Python variables, lists, tuples, and Numpy arrays, and on graphing with Matplotlib.
Or for now, just learn from the examples here.
# We will often need resources from the modules numpy and pyplot:
import numpy as np
import matplotlib.pyplot as plt
# We can also import items from a module individually, so they can be used by "first name only".
# This will often be done for mathematical functions.
from numpy import cos
def f(x):
return x - cos(x)
a = -1; b = 1
x = np.linspace(a, b) # 50 equally spaced values from a to b
plt.figure(figsize=(12,6)) # Create an "empty" graph, 12 wide, 6 high
plt.plot(x, f(x))
plt.plot([a, b], [0, 0], 'g') # Mark the x-axis in green
plt.grid(True); # Add a graph paper background
# If you want to see what "linspace" gives, run this cell
print(x)
[-1. -0.95918367 -0.91836735 -0.87755102 -0.83673469 -0.79591837
-0.75510204 -0.71428571 -0.67346939 -0.63265306 -0.59183673 -0.55102041
-0.51020408 -0.46938776 -0.42857143 -0.3877551 -0.34693878 -0.30612245
-0.26530612 -0.2244898 -0.18367347 -0.14285714 -0.10204082 -0.06122449
-0.02040816 0.02040816 0.06122449 0.10204082 0.14285714 0.18367347
0.2244898 0.26530612 0.30612245 0.34693878 0.3877551 0.42857143
0.46938776 0.51020408 0.55102041 0.59183673 0.63265306 0.67346939
0.71428571 0.75510204 0.79591837 0.83673469 0.87755102 0.91836735
0.95918367 1. ]
The above graph shows that the zero lies between 0.5 and 0.75, so zoom in:
a = 0.5; b = 0.75
x = np.linspace(a, b)
plt.figure(figsize=(12,6))
plt.plot(x, f(x))
plt.plot([a, b], [0, 0], 'g')
plt.grid(True);
And we could repeat, getting an approximation of any desired accuracy.
However, this approach has two weaknesses: it is very inefficient (the function is evaluated about fifty times at each step just to draw the graph), and it requires lots of human intervention.
To get a procedure that can be efficiently implemented in Python (or another programming language), we extract one key idea here: finding an interval in which the function changes sign, and then repeatedly find a smaller such interval within it. The simplest way to do this is to repeatedly divide an interval known to contain the root in half and check which half has the sign change in it.
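Before automating anything, we can verify in code that the starting interval actually has the required sign change; a minimal sketch for the example equation:

```python
import numpy as np

def f(x):
    return x - np.cos(x)

a, b = -1.0, 1.0
# For a continuous function, a sign change f(a)*f(b) < 0 guarantees
# a root in [a, b] (by the Intermediate Value Theorem).
print(f(a) * f(b) < 0)  # prints True
```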
Graphically, let us start again with interval \([a, b] = [-1, 1]\), but this time focus on three points of interest: the two ends and the midpoint, where the interval will be bisected:
a = -1
b = 1
c = (a+b)/2
acb = np.array([a, c, b])
plt.figure(figsize=(12,6))
plt.plot(acb, f(acb), 'b*')
# And just as a visual aid:
x = np.linspace(a, b)
plt.plot(x, f(x), 'b-.')
plt.plot([a, b], [0, 0], 'g')
plt.grid(True);
(On Python)
Note in the code above that the function cos from Numpy (full name numpy.cos) can be evaluated simultaneously on an array of numbers, whereas the version math.cos from module math can only handle one number at a time.
This is one reason why we will favor numpy over the module math.
\(f(a)\) and \(f(c)\) have the same sign, while \(f(c)\) and \(f(b)\) have opposite signs, so the root is in \([c, b]\); update the a, b, c values and plot again:
a = c # new left end is old center
b = b # redundant, as the right end is unchanged
c = (a+b)/2
print(f"{a=}, {b=}, {c=}")
a=0.0, b=1, c=0.5
acb = np.array([a, c, b])
x = np.linspace(a, b)
plt.figure(figsize=(12,6))
plt.plot(acb, f(acb), 'b*', x, f(x), 'b-.')
plt.plot([a, b], [0, 0], 'g')
plt.grid(True);
Again \(f(c)\) and \(f(b)\) have opposite signs, so the root is in \([c, b]\), and …
a = c # new left end is old center again
# skipping the redundant "b = b" this time
c = (a+b)/2
print(f"{a=}, {b=}, {c=}")
a=0.5, b=1, c=0.75
acb = np.array([a, c, b])
x = np.linspace(a, b)
plt.figure(figsize=(12,6))
plt.plot(acb, f(acb), 'b*', x, f(x), 'b-.')
plt.plot([a, b], [0, 0], 'g')
plt.grid(True);
This time \(f(a)\) and \(f(c)\) have opposite signs, so the root is in the left half, \([a, c]\):
# this time, the value of a does not need to be updated ...
b = c # ... and the new right end is the former center
c = (a+b)/2
print(f"{a=}, {b=}, {c=}")
a=0.5, b=0.75, c=0.625
acb = np.array([a, c, b])
x = np.linspace(a, b)
plt.figure(figsize=(12,6))
plt.plot(acb, f(acb), 'b*', x, f(x), 'b-.')
plt.plot([a, b], [0, 0], 'g')
plt.grid(True);
1.1.2. A first algorithm for the bisection method#
Now it is time to dispense with the graphs, and describe the procedure in mathematical terms:
if \(f(a)\) and \(f(c)\) have opposite signs, the root is in interval \([a, c]\), which becomes the new version of interval \([a, b]\).
otherwise, \(f(c)\) and \(f(b)\) have opposite signs, so the root is in interval \([c, b]\)
Pseudo-code for describing algorithms#
As a useful bridge from the mathematical description of an algorithm with words and formulas to actual executable code, these notes will often describe algorithms in pseudo-code — a mix of words and mathematical formulas with notation that somewhat resembles code in a language like Python.
This is also preferable to going straight to code in a particular language (such as Python) because it makes it easier if, later, you wish to implement algorithms in a different programming language.
Note well one feature of the pseudo-code used here: assignment is denoted with a left arrow:
\(x \leftarrow a\)
is the instruction to cause the value of variable x to become the current value of a.
This is to distinguish it from
\(x = a\)
which is a comparison: the true-or-false assertion that the two quantities already have the same value.
Unfortunately, Python (like most programming languages) does not use this notation: instead, assignment is done with x = a, so asserting equality needs a different notation: this is done with x == a; note well that double equal sign!
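A two-line illustration of this distinction in Python:

```python
x = 3          # assignment: the variable x gets the value 3
print(x == 3)  # comparison: prints True
print(x == 4)  # comparison: prints False
```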
Also, the pseudo-code marks the end of blocks like if, for and while with a line containing just the word end.
Many programming languages do something like this, but Python does not: instead it uses only the end of indentation to indicate that a block is finished.
With those notational issues out of the way, the key step in the bisection strategy is the update of the interval:
(one step of bisection)
\(\displaystyle c \leftarrow \frac{a + b}{2}\)
if \(f(a) f(c) < 0\) then:
\(\quad\) \(b \leftarrow c\)
else:
\(\quad\) \(a \leftarrow c\)
end
This needs to be repeated a finite number of times, and the simplest way is to specify the number of iterations. (We will consider more refined methods soon.)
(bisection, first version)
Get an initial interval \([a, b]\) with a sign-change: \(f(a) f(b) < 0\).
Choose \(N\), the number of iterations.
for i from 1 to N:
\(\quad\) \(\displaystyle c \leftarrow \frac{a + b}{2}\)
\(\quad\) if \(f(a) f(c) < 0\) then:
\(\quad\)\(\quad\) \(b \leftarrow c\)
\(\quad\) else:
\(\quad\)\(\quad\) \(a \leftarrow c\)
\(\quad\) end
end
The approximate root is the final value of \(c\).
A Python version of the iteration is not a lot different:
for i in range(N):
c = (a+b)/2
if f(a) * f(c) < 0:
b = c
else:
a = c
(For more about for loops in Python, see the Python Tutorial section on iteration with for.)
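As a quick sanity check, the loop above can be run on the example equation; here it is wrapped with the necessary setup (the iteration count N = 20 is an arbitrary choice for this sketch):

```python
import numpy as np

def f(x):
    return x - np.cos(x)

a, b = -1.0, 1.0
N = 20  # an arbitrary iteration count for this sketch

for i in range(N):
    c = (a + b) / 2
    if f(a) * f(c) < 0:
        b = c
    else:
        a = c

print(c)  # a value close to 0.739085
```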
Exercise 1#
Create a Python function bisection1 which implements the first algorithm for bisection above, performing a fixed number of iterations, given by a parameter iterations.
(The iteration count was called \(N\) in the mathematical description, but in code it is encouraged to use descriptive names.)
The usage should be:
root = bisection1(f, a, b, iterations)
Test it with the above example: \(f(x) = x - \cos x = 0\), \([a, b] = [-1, 1]\)
(For more about defining and use of functions in Python, see the Python Tutorial section on defining and using Python functions)
1.1.3. Error bounds, and a more refined algorithm#
The above method of iteration for a fixed number of times is simple, but usually not what is wanted in practice. Instead, a better goal is to get an approximation with a guaranteed maximum possible error: a result consisting of an approximation \(\tilde{r}\) to the exact root \(r\) and also a bound \(E_{max}\) on the maximum possible error; a guarantee that \(|r - \tilde{r}| \leq E_{max}\). To put it another way, a guarantee that the root \(r\) lies in the interval \([\tilde{r} - E_{max}, \tilde{r} + E_{max}]\).
In the above example, each iteration gives a new interval \([a, b]\) guaranteed to contain the root, and its midpoint \(c = (a+b)/2\) is within a distance \((b-a)/2\) of any point in that interval, so at each iteration, we can take:
\(\tilde{r}\) is the current value of \(c = (a+b)/2\)
\(E_{max} = (b-a)/2\)
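Since each bisection halves the interval, this error bound also halves at every step. A short sketch, using the same example, tracking the bound over the first few iterations:

```python
import numpy as np

def f(x):
    return x - np.cos(x)

a, b = -1.0, 1.0
for i in range(5):
    c = (a + b) / 2
    if f(a) * f(c) < 0:
        b = c
    else:
        a = c
    # The bound (b-a)/2 halves at each iteration: 0.5, 0.25, 0.125, ...
    print(f"iteration {i+1}: c = {c}, error bound = {(b-a)/2}")
```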
1.1.4. Error tolerances and stopping conditions#
The above algorithm can passively state an error bound, but it is better to be able to solve to a desired degree of accuracy; for example, if we want a result “accurate to three decimal places”, we can specify \(E_{max} \leq 0.5 \times 10^{-3}\).
So our next goal is to actively set an accuracy target or error tolerance \(E_{tol}\) and keep iterating until it is met.
This can be achieved with a while loop; here is a suitable algorithm:
(bisection with error tolerance)
Input function \(f\), interval endpoints \(a\) and \(b\), and an error tolerance \(E_{tol}\)
Evaluate \(E_{max} = (b-a)/2\)
while \(E_{max} > E_{tol}\):
\(\quad c \leftarrow (a+b)/2\)
\(\quad\) if \(f(a) f(c) < 0\) then:
\(\quad\quad b \leftarrow c\)
\(\quad\) else:
\(\quad\quad a \leftarrow c\)
\(\quad\) end
\(\quad E_{max} \leftarrow (b-a)/2\)
end
Output \(\tilde{r} = c\) as the approximate root and \(E_{max}\) as a bound on its absolute error.
(To learn about while loops in Python, see the Python Tutorial section on iteration with while.)
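The pseudo-code above translates almost directly into Python; a minimal inline sketch follows, where the tolerance value is just an example (Exercise 2 below asks you to package this as a function):

```python
import numpy as np

def f(x):
    return x - np.cos(x)

a, b = -1.0, 1.0
errorTolerance = 1e-4  # an example tolerance, not part of the pseudo-code

errorBound = (b - a) / 2
while errorBound > errorTolerance:
    c = (a + b) / 2
    if f(a) * f(c) < 0:
        b = c
    else:
        a = c
    errorBound = (b - a) / 2

print(f"root is approximately {c}, with error at most {errorBound}")
```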
1.1.5. Further Exercises#
Exercise 2#
Create a Python function implementing this better algorithm, with usage
(root, errorBound) = bisection2(f, a, b, errorTolerance)
(As above, I have changed the names of the error tolerance and error bound from \(E_{tol}\) and \(E_{max}\),
both to be more descriptive and to avoid subscripts.
Note also the “camel case” style: concatenating words [since spaces are not allowed in names] and capitalizing each new word.
Another popular option is to replace each space in a descriptive phrase by the underscore “_”, as with error_tolerance.)
Test it with the above example: \(f(x) = x - \cos x\), \([a, b] = [-1, 1]\), this time accurate to within \(10^{-4}\).
Use the fact that there is a solution in the interval \((-1, 1)\).
Exercise 3#
Consider the equation \(x^5 = x^2 + 10\).
a) Find an interval \([a,b]\) of length one in which there is guaranteed to be a root.
b) Compute the next two improved approximations given by the bisection method.
c) Determine how many iterations of the bisection method would then be needed to approximate the root with an absolute error of at most \(10^{-10}\).
Do this without actually computing those extra iterations or computing the root!