6. Exercises on Newton’s Method

Exercise 1

a) Show that Newton’s method applied to

\[ f(x) = x^k - a \]

leads to fixed point iteration with function

\[ g(x) = \frac{(k-1) x + \displaystyle \frac{a}{x^{k-1}}}{k}. \]
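
Recall that Newton’s method for solving \(f(x) = 0\) is the fixed point iteration with

\[ g(x) = x - \frac{f(x)}{f'(x)}; \]

inserting \(f(x) = x^k - a\) and simplifying should lead to the form above.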

b) Then verify mathematically that the iteration \(x_{n+1} = g(x_n)\) has super-linear convergence. (The iteration index is written as \(n\), since \(k\) already names the exponent.)
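
One possible route is via the standard theory of fixed point iteration: differentiating gives

\[ g'(x) = \frac{k-1}{k} \left( 1 - \frac{a}{x^k} \right), \]

so \(g'(r) = 0\) at the root \(r = a^{1/k}\), and a fixed point iteration whose derivative vanishes at its fixed point converges super-linearly.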

Exercise 2

a) Create a Python function for Newton’s method, with usage

```python
(root, errorEstimate, iterations, functionEvaluations) = newton(f, Df, x_0, errorTolerance, maxIterations)
```

(The last input parameter maxIterations could be optional, with a default like maxIterations=100.)
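
A minimal sketch of such a function follows, assuming that the error estimate is the size of the latest step \(|x_{n+1} - x_n|\) and that each evaluation of f or Df counts as one function evaluation; neither accounting choice is dictated by the exercise statement.

```python
def newton(f, Df, x_0, errorTolerance, maxIterations=100):
    """A minimal sketch of Newton's method.

    Returns (root, errorEstimate, iterations, functionEvaluations).
    The error estimate is the size of the latest step, and each
    evaluation of f or Df is counted as one function evaluation;
    both are assumptions, not requirements of the exercise.
    """
    x = x_0
    errorEstimate = float("inf")
    iterations = 0
    functionEvaluations = 0
    for iterations in range(1, maxIterations + 1):
        step = f(x) / Df(x)       # one Newton step: x <- x - f(x)/Df(x)
        functionEvaluations += 2  # one evaluation each of f and Df
        x -= step
        errorEstimate = abs(step)
        if errorEstimate <= errorTolerance:
            break
    return (x, errorEstimate, iterations, functionEvaluations)
```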

b) Based on your function bisection2, create a third (and final!) version, with usage

```python
(root, errorBound, iterations, functionEvaluations) = bisection(f, a, b, errorTolerance, maxIterations)
```
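
Since the function bisection2 from the earlier exercises is not shown here, the following is just one plausible sketch of this final version, assuming the standard interval-halving scheme with the half-width of the current interval as the error bound.

```python
def bisection(f, a, b, errorTolerance, maxIterations=100):
    """A bisection method sketch; assumes f(a) and f(b) have opposite signs.

    Returns (root, errorBound, iterations, functionEvaluations).
    """
    fa = f(a)
    functionEvaluations = 1
    errorBound = (b - a) / 2
    iterations = 0
    for iterations in range(1, maxIterations + 1):
        c = (a + b) / 2           # midpoint of the current interval
        fc = f(c)
        functionEvaluations += 1
        if fa * fc <= 0:          # sign change in [a, c]: keep that half
            b = c
        else:                     # otherwise the root is in [c, b]
            a, fa = c, fc
        errorBound = (b - a) / 2  # the midpoint is within this of a root
        if errorBound <= errorTolerance:
            break
    root = (a + b) / 2
    return (root, errorBound, iterations, functionEvaluations)
```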

c) Use both of these to solve the equation

\[ f_1(x) = 10 - 2x + \sin(x) = 0 \]

i) with [estimated] absolute error of no more than \(10^{-6}\), and then

ii) with [estimated] absolute error of no more than \(10^{-15}\).

Note in particular how many iterations and how many function evaluations are needed.

Graph the function; this will help in choosing a good starting interval \([a, b]\) and initial approximation \(x_0\) (one possible setup is sketched below).
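
In that sketch, the interval \([4, 5]\) and initial guess \(x_0 = 5\) reflect the sign change \(f_1(4) > 0 > f_1(5)\), which the graph confirms.

```python
import numpy as np
import matplotlib.pyplot as plt

def f_1(x):
    return 10 - 2*x + np.sin(x)

def Df_1(x):
    return -2 + np.cos(x)

# Graph f_1 to locate the root; there is a sign change between 4 and 5.
x = np.linspace(0, 10, 200)
plt.plot(x, f_1(x))
plt.grid(True)
plt.show()

# Solve at the looser tolerance; then repeat with errorTolerance=1e-15.
print(newton(f_1, Df_1, 5.0, errorTolerance=1e-6))
print(bisection(f_1, 4.0, 5.0, errorTolerance=1e-6))
```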

d) Repeat, this time finding the unique real root of

\[ f_2(x) = x^3 - 3.3 x^2 + 3.63 x - 1.331 = 0 \]

Again, graph the function to find a good starting interval \([a, b]\) and initial approximation \(x_0\).
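
Continuing from the previous snippet, a corresponding setup for \(f_2\) might be as follows; the window \([0, 2]\) and the guess \(x_0 = 2\) are illustrative choices to be checked against your graph.

```python
def f_2(x):
    return x**3 - 3.3*x**2 + 3.63*x - 1.331

def Df_2(x):
    return 3*x**2 - 6.6*x + 3.63

# Graph f_2 on an interval that brackets its sign change.
x = np.linspace(0, 2, 200)
plt.plot(x, f_2(x))
plt.grid(True)
plt.show()

print(newton(f_2, Df_2, 2.0, errorTolerance=1e-6))
print(bisection(f_2, 0.0, 2.0, errorTolerance=1e-6))
```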

e) This second case will behave differently from the case of \(f_1\) in part (c): describe the difference. (We will discuss the reasons in class.)