6.8. Implicit Methods: Adams-Moulton
References:
Section 6.7 Multistep Methods in [Sauer, 2022].
Section 5.6 Multistep Methods in [Burden et al., 2016].
6.8.1. Introduction
So far, most methods we have seen compute the new approximation value by an explicit formula in terms of previous (and so already known) values; the general explicit \(s\)-step method seen in Adams-Bashforth Multistep Methods was
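\[
U_i = U_{i-1} + h \left( b_{s-1} F_{i-1} + b_{s-2} F_{i-2} + \cdots + b_0 F_{i-s} \right),
\qquad F_j = f(t_j, U_j).
\]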
However, we briefly saw two implicit methods back in Runge-Kutta Methods, in the process of deriving the explicit trapezoid and explicit midpoint methods: the implicit trapezoid method (or just the trapezoid method, as this is the real thing, before further approximations were used to get an explicit formula)
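\[
U_{i+1} = U_i + \frac{h}{2} \left( f(t_i, U_i) + f(t_{i+1}, U_{i+1}) \right)
\]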
and the Implicit Midpoint Method
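\[
U_{i+1} = U_i + h \, f\!\left( t_i + \frac{h}{2}, \; \frac{U_i + U_{i+1}}{2} \right).
\]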
These are clearly not as simple to work with as explicit methods, but the equation solving can often be done. In particular, for linear differential equations these give linear equations for the unknown \(U_{i+1}\), so even for systems, they can be solved by the method seen earlier in these notes.
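For example, for the scalar linear equation \(du/dt = \lambda u + g(t)\), the trapezoid method above reads

\[
U_{i+1} = U_i + \frac{h}{2} \left( \lambda U_i + g(t_i) + \lambda U_{i+1} + g(t_{i+1}) \right),
\]

which rearranges into a single linear equation for the unknown:

\[
\left( 1 - \frac{h \lambda}{2} \right) U_{i+1} = \left( 1 + \frac{h \lambda}{2} \right) U_i + \frac{h}{2} \left( g(t_i) + g(t_{i+1}) \right).
\]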
Another strategy is to note that these are fixed point equations, so that fixed point iteration can be used. The factor of \(h\) at right helps; it can be shown that for small enough \(h\) (how small depends on the function \(f\)), these are contraction mappings and so fixed point iteration works.
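As a rough sketch of this strategy (illustrative code only, with the function name, tolerance, and test equation chosen just for this example), one step of the implicit trapezoid method for a scalar equation could be computed by fixed point iteration, starting from the Euler prediction:

```python
from math import sin

def implicit_trapezoid_step(f, t, U, h, tol=1e-12, max_iterations=100):
    """One step of the implicit trapezoid method for a scalar ODE u' = f(t, u),
    solving U_new = U + (h/2)*(f(t, U) + f(t+h, U_new)) by fixed point iteration."""
    F = f(t, U)
    U_new = U + h * F  # Euler's method gives the initial guess
    for _ in range(max_iterations):
        U_next = U + h / 2 * (F + f(t + h, U_new))
        if abs(U_next - U_new) <= tol:
            return U_next
        U_new = U_next
    return U_new  # fall back to the last iterate if tol was not reached

# Example usage: one step of size h = 0.1 for du/dt = -u + sin(t), from u(0) = 1
print(implicit_trapezoid_step(lambda t, u: -u + sin(t), 0.0, 1.0, 0.1))
```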
This idea can be combined with linear multistep methods, and one important case is modifying the Adams-Bashforth method by allowing \(F_i = f(t_i, U_i)\) to appear at right: this gives the Adams-Moulton form
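\[
U_i = U_{i-1} + h \left( b_s F_i + b_{s-1} F_{i-1} + \cdots + b_0 F_{i-s} \right),
\]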
where the only change from Adams-Bashforth methods is that \(b_s\) term.
The coefficients can be derived much as for those, by the method of undetermined coefficients; one valuable difference is that there are now \(s+1\) undetermined coefficients, so all error terms up to \(O(h^s)\) can be cancelled and the error made \(O(h^{s+1})\): one degree higher.
The \(s=1\) case is familiar:
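\[
U_i = U_{i-1} + h \left( b_1 F_i + b_0 F_{i-1} \right),
\]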
and as symmetry suggests, the solution is \(b_0 = b_1 = 1/2\), giving
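\[
U_i = U_{i-1} + \frac{h}{2} \left( F_i + F_{i-1} \right),
\]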
which is the (implicit) trapezoid rule in the new shifted indexing.
This is much used for the numerical solution of partial differential equations of evolution type (after first approximating by a large system of ordinary differential equations). In that context it is often known as the Crank-Nicolson method.
We can actually start at \(s=0\); the first few Adams-Moulton methods are:
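\(s=0\): \(U_i = U_{i-1} + h F_i\) (the backward Euler method)

\(s=1\): \(U_i = U_{i-1} + \frac{h}{2} \left( F_i + F_{i-1} \right)\) (the trapezoid method)

\(s=2\): \(U_i = U_{i-1} + \frac{h}{12} \left( 5 F_i + 8 F_{i-1} - F_{i-2} \right)\)

\(s=3\): \(U_i = U_{i-1} + \frac{h}{24} \left( 9 F_i + 19 F_{i-1} - 5 F_{i-2} + F_{i-3} \right)\)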
The use of \(F_{i-k}\) notation emphasizes that these earlier values of \(F_{i-k} = f(t_{i-k}, U_{i-k})\) are known from a previous step, so can be stored for reuse.
The backward Euler method has not been mentioned before; it comes from using the backward counterpart of the forward difference approximation of the derivative:
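\[
\frac{du}{dt}(t_i) \approx \frac{u(t_i) - u(t_{i-1})}{h},
\qquad\text{giving}\qquad
\frac{U_i - U_{i-1}}{h} = f(t_i, U_i),
\quad\text{that is,}\quad
U_i = U_{i-1} + h F_i.
\]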
Like Euler’s method, it is only first order accurate, but it has excellent stability properties, which makes it useful in some situations, notably for stiff equations.
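As a small illustration of that stability (a sketch assuming the scalar test equation \(du/dt = \lambda u\), chosen because its implicit equation can be solved exactly by one division per step):

```python
# Backward Euler for the linear test equation du/dt = lam*u, u(0) = 1.
# The implicit equation U_i = U_{i-1} + h*lam*U_i is linear in U_i,
# so each step is just U_i = U_{i-1} / (1 - h*lam).
lam = -50.0  # rapid decay: the exact solution is exp(lam*t) -> 0
h = 0.1      # far too large for Euler's method, since |1 + h*lam| = 4 > 1
U = 1.0
for i in range(10):
    U = U / (1 - h * lam)
print(U)  # small and positive: decays toward 0 like the exact solution,
          # whereas Euler's method would oscillate and blow up at this step size
```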
Rather than implementing any of these, the next section introduces a strategy for deriving explicit methods of comparable accuracy, much as in Runge-Kutta Methods, where Euler’s method (Adams-Bashforth \(s=1\)) was combined with the trapezoid method (Adams-Moulton \(s=1\)) to get the explicit trapezoid method: an explicit method with the same order of accuracy as the latter of the pair.
6.8.2. Exercises
Coming soon.