21. Richardson Extrapolation
References:

- Section 5.1.3 Extrapolation of Sauer
- Section 4.2 Richardson Extrapolation of Burden & Faires
- Section 4.2 Estimating Derivatives and Richardson Extrapolation of Cheney & Kincaid
21.1. Motivation
With derivative approximations like the forward difference

\[ Df(x) \approx \Delta_h f(x) := \frac{f(x+h) - f(x)}{h} \]

and the centered difference approximation of the second derivative

\[ D^2 f(x) \approx \delta_h^2 f(x) := \frac{f(x-h) - 2f(x) + f(x+h)}{h^2}, \]

there are limits on the ability to improve accuracy by simply using a smaller value of \(h\): one is that rounding error becomes problematic.
Another is that when we are approximating the derivative at a collection of points in an interval \([a,b]\), \(x_i = a + ih\), \(0 \leq i \leq n\), \(h = \displaystyle\frac{b-a}{n}\), reducing \(h\) requires increasing the number of points \(n+1\), and so increases the “cost” (time and other resources needed) of the calculation.
Thus we would like to produce new approximation formulas of higher order \(p\); that is, with error \(O(h^p)\) for \(p\) greater than the values \(p=1\) for \(\Delta_h f(x)\) or \(p=2\) for \(\delta_h^2 f(x)\).
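To see the rounding-error limit concretely, here is a small check in Python (my own illustration, not from the referenced texts): the forward difference for \(f(x) = e^x\) at \(x = 1\), whose error first shrinks like \(O(h)\) but then grows again once \(h\) is small enough that rounding error dominates.

```python
# Forward difference Delta_h f(x) = (f(x+h) - f(x)) / h for f = exp at x = 1.
# The error decreases like O(h) at first, then grows once rounding dominates.
from math import exp

x, Df_exact = 1.0, exp(1)
for k in range(0, 17, 2):
    h = 10.0**(-k)
    approx = (exp(x + h) - exp(x)) / h
    print(f"h = 1e-{k:02d}: error = {abs(approx - Df_exact):.3e}")
```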
21.2. Procedure
The general framework for this is an exact quantity \(Q_0\) for which we have an approximation formula \(Q(h)\) with

\[ Q(h) = Q_0 + c_p h^p + O(h^q), \quad q > p, \]
and we wish to achieve adequate accuracy while keeping \(h\) as large as possible.
The kernel of the idea is to initially ignore the smaller part of the error, \(O(h^q)\), and just consider

\[ Q(h) \approx Q_0 + c_p h^p, \]
and evaluate for two values of \(h\): most often either \(h\) and \(2h\) (or \(h\) and \(h/2\), which is more or less equivalent).
That gives

\[ Q(h) \approx Q_0 + c_p h^p, \qquad Q(2h) \approx Q_0 + c_p 2^p h^p, \]
and with only \(Q_0\) and \(c_p\) unknown, this is two (approximate) linear equations in two unknowns, so we can solve for the desired quantity \(Q_0\) by basic Gaussian elimination. This gives

\[ Q_0 \approx \frac{2^p Q(h) - Q(2h)}{2^p - 1}. \]
But is this new approximation any better than the original? Using the more complete error formula above for \(Q(h)\) and its version with \(h\) replaced by \(2h\),

\[ Q(2h) = Q_0 + c_p 2^p h^p + O(h^q), \]

one gets

\[ \frac{2^p Q(h) - Q(2h)}{2^p - 1} = Q_0 + O(h^q), \]

so this is indeed an improvement, since \(q > p\).
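As a sketch of this in code (the function name `richardson` and the test case are my own, not from the references), the extrapolation formula can be applied to any approximation \(Q(h)\) of known order \(p\):

```python
# Richardson extrapolation of a formula Q(h) with error c_p h^p + O(h^q).
def richardson(Q, h, p):
    """Combine Q(h) and Q(2h): (2^p Q(h) - Q(2h)) / (2^p - 1)."""
    return (2**p * Q(h) - Q(2 * h)) / (2**p - 1)

# Try it on the forward difference (order p = 1) for f = exp at x = 1:
from math import exp
Q = lambda h: (exp(1 + h) - exp(1)) / h
h = 0.01
print(abs(Q(h) - exp(1)))                 # O(h) error
print(abs(richardson(Q, h, 1) - exp(1)))  # O(h^2) error, much smaller
```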
21.2.1. Addendum: Rewriting to get an error estimate
We can get a useful practical error estimate by rewriting the above result as

\[ Q_0 \approx Q(h) + \frac{Q(h) - Q(2h)}{2^p - 1}, \]

so that the quantity

\[ E_h := \frac{Q(2h) - Q(h)}{2^p - 1} \]

is approximately the error in \(Q(h)\). Thus, Richardson extrapolation can be viewed as “correcting” \(Q(h)\) by subtracting off this estimated error:

\[ Q_0 \approx Q_q := Q(h) - E_h. \]
The magnitude \(|E_h|\) of this error estimate can be used as a (typically pessimistic!) estimate of the error in the corrected result \(Q_q\). It sometimes makes sense to use an even more cautious error estimate, discarding the denominator \(2^p-1\): that is, using \(|Q(h) - Q(2h)|\) as an estimate of the error in the extrapolated value \(Q_q\).
Either way, these follow the pervasive pattern of using the change between the two most recent approximations as an error estimate.
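For instance (a sketch with my own names, continuing the forward difference example above), one can check that \(E_h\) tracks the true error in \(Q(h)\):

```python
# Compare the error estimate E_h = (Q(2h) - Q(h)) / (2^p - 1), with p = 1,
# to the true error in the forward difference Q(h) for f = exp at x = 1.
from math import exp

def Q(h):
    return (exp(1 + h) - exp(1)) / h

h, p = 0.01, 1
E_h = (Q(2 * h) - Q(h)) / (2**p - 1)  # estimated error in Q(h)
print(E_h, Q(h) - exp(1))             # the two agree to leading order
```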
Note the analogy to Newton’s method for solving \(f(x)=0\), which can be broken into the two steps:

- estimate the error in approximate root \(x_n\) as \(E_n := -f(x_n)/f'(x_n)\);
- update the approximation to \(x_{n+1} = x_n + E_n\).
Finally, note that this is always extrapolation in the sense of “going beyond”: the new approximation lies beyond the more accurate of the two original approximations, on the opposite side from the less accurate one.
21.2.2. Example
For the basic forward difference approximation above, this process gives a three-point method of second order accuracy (\(q=2\)):

\[ Df(x) \approx 2\Delta_h f(x) - \Delta_{2h} f(x) = \frac{-3f(x) + 4f(x+h) - f(x+2h)}{2h}, \quad \text{with error } O(h^2). \]
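A quick numerical check of the claimed order (my own sketch, not from the notes): halving \(h\) should reduce the error by about a factor of \(2^2 = 4\).

```python
# Verify second-order accuracy of (-3 f(x) + 4 f(x+h) - f(x+2h)) / (2h)
# for f = exp at x = 1: each halving of h cuts the error by about 4.
from math import exp

def D_approx(f, x, h):
    return (-3 * f(x) + 4 * f(x + h) - f(x + 2 * h)) / (2 * h)

for h in [0.1, 0.05, 0.025]:
    print(h, abs(D_approx(exp, 1.0, h) - exp(1)))
```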
21.2.3. Exercise 1(a)
Apply Richardson extrapolation to the standard three-point, second order accurate approximation \(Q(h) := \delta_h^2 f(x)\) of the second derivative \(Q_0 := D^2f(x)\) as given above, and verify that it gives a fourth-order accurate five-point approximation formula.
21.2.4. Exercise 1(b)
As a supplementary exercise, one could verify the order of accuracy directly with Taylor polynomials, or verify that the new formula has degree of precision \(d=5\), and hence is of order \(p=4\) due to the formula \(d = p+k-1\) for approximations of \(k\)-th derivatives, given in the notes for Day 11.
One could also derive the same formula “from scratch” using the Method of Undetermined Coefficients.
21.2.5. Exercise 2
Apply Richardson extrapolation to the above one-sided three-point, second order accurate approximation of the derivative \(Df(x)\), and verify that it gives a third-order accurate four-point approximation formula.
But note something strange about this new formula: it skips \(f(x+3h)\).
Here, instead of extrapolating, one is probably better off applying the Method of Undetermined Coefficients directly with data \(f(x)\), \(f(x+h)\), \(f(x+2h)\), \(f(x+3h)\) and \(f(x+4h)\): what order of accuracy does that give?
21.3. A variant, more useful for integration and ODE boundary value problems: parameter \(n\)
A slight variant of the above is approximation with an integer parameter \(n\), such as approximations of integrals by the (composite) trapezoid rule with \(n\) intervals, \(T_n\), or the approximate solution of an ordinary differential equation at the above-described collection of \(n+1\) equally spaced values in domain \([a,b]\). Then a more natural notation for the approximation formula is \(Q_n\) instead of \(Q(h)\).
The errors of the form \(c_p h^p + O(h^q)\) become

\[ Q_n = Q_0 + \frac{c_p}{n^p} + O\left(\frac{1}{n^q}\right). \]
The main difference is that to work with integer values of \(n\), it is \(n\) that must be doubled, whereas doubling \(h\) would correspond to halving \(n\).
The extrapolation formula becomes

\[ Q_0 \approx \frac{2^p Q_{2n} - Q_n}{2^p - 1}. \]
Aside: For the slightly more general case of increasing from \(n\) to \(kn\), one gets

\[ Q_0 \approx \frac{k^p Q_{kn} - Q_n}{k^p - 1}. \]
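As a sketch in code (implementation and names are mine), here is this \(n\)-form applied to the composite trapezoid rule, which has \(p = 2\); as noted at the end of this section, the extrapolated value is in fact the composite Simpson's rule result.

```python
# Extrapolating the composite trapezoid rule T_n (order p = 2):
# Q_0 is approximated by (2^p T_{2n} - T_n) / (2^p - 1) = (4 T_{2n} - T_n) / 3.
import numpy as np

def trapezoid(f, a, b, n):
    x = np.linspace(a, b, n + 1)
    return (b - a) / n * (np.sum(f(x)) - (f(a) + f(b)) / 2)

a, b, n, p = 0.0, np.pi, 8, 2
T_n, T_2n = trapezoid(np.sin, a, b, n), trapezoid(np.sin, a, b, 2 * n)
extrapolated = (2**p * T_2n - T_n) / (2**p - 1)
# The exact value of the integral of sin over [0, pi] is 2:
print(abs(T_2n - 2.0), abs(extrapolated - 2.0))
```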
21.3.1. A common verbal description for both forms
This can be summarized with the same verbal form as the original formula:
“\(2^p\) times the more accurate approximation,
minus the less accurate approximation,
all divided by \((2^p - 1)\)”
Also
“The error in the more accurate approximation is approximated by the difference between the two approximations, divided by \((2^p - 1)\)”
21.3.2. Addendum: Rewriting to get an error estimate, again
As with the “\(h\)” form above, this extrapolation can be broken into two steps:

\[ E_{2n} := \frac{Q_n - Q_{2n}}{2^p - 1}, \]

so \(E_{2n}\) estimates the error in \(Q_{2n}\), and the improved approximation can be expressed as

\[ Q_0 \approx Q_{2n} - E_{2n}. \]
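Continuing the trapezoid-rule sketch from above (names again mine), \(E_{2n}\) can be compared with the true error:

```python
# Error estimate E_{2n} = (Q_n - Q_{2n}) / (2^p - 1) for the trapezoid
# rule with p = 2, compared to the true error in T_{2n}.
import numpy as np

def trapezoid(f, a, b, n):
    x = np.linspace(a, b, n + 1)
    return (b - a) / n * (np.sum(f(x)) - (f(a) + f(b)) / 2)

n, p = 8, 2
T_n, T_2n = trapezoid(np.sin, 0, np.pi, n), trapezoid(np.sin, 0, np.pi, 2 * n)
E_2n = (T_n - T_2n) / (2**p - 1)  # estimated error in T_{2n}
print(E_2n, T_2n - 2.0)           # should agree to leading order
```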
21.4. Repeated Richardson extrapolation
The new improved approximation formulas have the same sort of error formula, but for order \(q\) instead of order \(p\), so we could extrapolate again to get an even higher order method, and this can be done numerous times if there is a suitable power series in \(h\) or \(1/n\) for the errors.
That is not so useful for derivative approximations, where one can get the same or better results with the method of undetermined coefficients, but it can be very useful for integration methods, and for the related task of solving boundary value problems for ordinary differential equations.
For example, it can be applied to the composite trapezoid rule, giving the composite Simpson’s rule at the first step, and then a succession of approximations of ever higher order – this is known as the Romberg method.
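Here is a minimal sketch of that idea (my own implementation, not code from these notes): each column of the Romberg table Richardson-extrapolates the previous one, and since the trapezoid rule's error is a series in even powers of \(h\), each extrapolation raises the order by 2.

```python
# Romberg integration: repeated Richardson extrapolation of the
# composite trapezoid rule T_{2^i}; column j has error O(h^(2j+2)).
import numpy as np

def trapezoid(f, a, b, n):
    x = np.linspace(a, b, n + 1)
    return (b - a) / n * (np.sum(f(x)) - (f(a) + f(b)) / 2)

def romberg(f, a, b, levels):
    R = [[trapezoid(f, a, b, 2**i)] for i in range(levels)]
    for i in range(1, levels):
        for j in range(1, i + 1):
            # Extrapolate the previous column, which has error order p = 2j:
            R[i].append((4**j * R[i][j - 1] - R[i - 1][j - 1]) / (4**j - 1))
    return R

print(romberg(np.sin, 0.0, np.pi, 5)[-1][-1])  # very close to the exact value 2
```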
Repeated Richardson extrapolation can also be applied to the approximate solution of differential equations; we might explore that later.
This work is licensed under Creative Commons Attribution-ShareAlike 4.0 International