4.2. Project on Minimizing Functions (template)#
Brenton LeMesurier, lemesurierb@charleston.edu
Version of November 20, 2025
4.2.1. Introduction#
The background for this project is in the chapter Optimization. It starts with the one-dimensional case of minimizing a continuous function \(f:[a, b] \to \mathbb{R}\),
comparing two derivative-free methods: the Golden Section Search and Successive Parabolic Interpolation.
If there is time, a subsequent goal will be to consider functions of several variables; for this project, two is enough: \(f:\mathbb{R}^2 \to \mathbb{R}\).
References:
[Sullivan, 2021] Section 3.4, Optimization.
[Sauer, 2022] Chapter 13, Optimization.
[Cheney and Kincaid, 2013] Chapter 13, Minimization of Functions.
[Kincaid and Cheney, 1990] Chapter 11, Optimization.
Exercise 4.12
Describe the golden section search method precisely in pseudocode.
Exercise 4.13
Implement the Golden Section Search and test it on a few examples, such as
\(f(x) = -x e^{-x}\) (whose minimum can be shown to occur at \(x=1\))
The Lennard-Jones potential \(\displaystyle f(x) = \frac{1}{x^{12}} - \frac{1}{x^6}\).
The second in particular requires some preparatory work to find a suitable initial triplet.
Note that with a nice smooth “target function” \(f\) as here, one can get numerical confirmation of the result by checking that \(f'(x) \approx 0\).
To be careful, that only checks that it is a critical point; the second derivative test can confirm that it is a local minimum.
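A minimal sketch of such an implementation in Python (the function name, interval, and tolerance here are illustrative choices, not part of the exercise statement):

```python
import numpy as np

def golden_section_search(f, a, b, tol=1e-8):
    """Minimize f on [a, b], assuming f is unimodal there.

    The bracketing interval shrinks by the factor rho = (sqrt(5)-1)/2
    each iteration, and only one new function evaluation is needed per step."""
    rho = (np.sqrt(5) - 1) / 2  # about 0.618, the golden ratio conjugate
    c = b - rho * (b - a)  # left interior point
    d = a + rho * (b - a)  # right interior point
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:  # the minimum is bracketed by [a, d]
            b, d, fd = d, c, fc
            c = b - rho * (b - a)
            fc = f(c)
        else:        # the minimum is bracketed by [c, b]
            a, c, fc = c, d, fd
            d = a + rho * (b - a)
            fd = f(d)
    return (a + b) / 2

# First test case: the minimum is at x = 1
xmin = golden_section_search(lambda x: -x * np.exp(-x), 0.0, 2.0)
```

As noted above, the result can be confirmed by checking that \(f'(x) \approx 0\); here \(f'(x) = (x-1)e^{-x}\), which indeed vanishes at \(x = 1\).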
Exercise 4.14
Describe the method of Successive Parabolic Interpolation precisely in pseudocode.
Exercise 4.15
Implement the method of Successive Parabolic Interpolation and test it on a few examples, such as those in Exercise 4.13.
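Again as a sketch only: this version maintains a bracketing triple \(a < b < c\) with \(f(b) < f(a)\) and \(f(b) < f(c)\), which is one of several possible bookkeeping choices (an alternative is simply to keep the three most recent points).

```python
import numpy as np

def parabolic_interpolation_search(f, a, b, c, tol=1e-8, maxiter=100):
    """Minimize f, given a bracket a < b < c with f(b) < f(a) and f(b) < f(c).

    Each step fits a parabola through the three points and moves to its
    vertex, which lies strictly inside (a, c) for a valid bracket."""
    fa, fb, fc = f(a), f(b), f(c)
    for _ in range(maxiter):
        den = (b - a) * (fb - fc) - (b - c) * (fb - fa)
        if den == 0:
            break  # the parabola has degenerated to a line
        num = (b - a)**2 * (fb - fc) - (b - c)**2 * (fb - fa)
        x = b - 0.5 * num / den  # vertex of the interpolating parabola
        if abs(x - b) < tol:
            break
        fx = f(x)
        # Update the bracket, keeping the lowest value at the middle point
        if x < b:
            if fx < fb:
                b, c, fb, fc = x, b, fx, fb
            else:
                a, fa = x, fx
        else:
            if fx < fb:
                a, b, fa, fb = b, x, fb, fx
            else:
                c, fc = x, fx
    return b

# Same first test case as before: the minimum is at x = 1
xmin = parabolic_interpolation_search(lambda x: -x * np.exp(-x), 0.5, 1.2, 2.0)
```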
4.2.2. Minimizing Functions of Several Variables: Steepest Descent (a.k.a. Gradient Search)#
Reference: Finding the Minimum of a Function of Several Variables
The two key ideas here are that:
A function \(f:\mathbb{R}^n \to \mathbb{R}\) decreases most rapidly from a point \(\mathbf{x_0}\) in the direction opposite its gradient: \(-\nabla f(\mathbf{x_0})\).
We can search for the lowest value in that “downhill direction” by using a method for minimizing a function of one variable.
The basic strategy is that, starting at an approximation \(\mathbf{x}^{(k)}\), one looks along the “downhill ray” \(g_k(t) = \mathbf{x}^{(k)} - t \nabla f(\mathbf{x}^{(k)} )\), \(t \geq 0\), to find the value \(t_k\) of \(t\) which minimizes this auxiliary function, giving the improved approximation \(\mathbf{x}^{(k+1)} = g_k(t_k) = \mathbf{x}^{(k)} - t_k \nabla f(\mathbf{x}^{(k)})\).
This minimization can be done by one of the methods in Finding the Minimum of a Function of One Variable Without Using Derivatives, such as the Golden Section Search.
Some details need to be addressed, like the fact that in principle one has to search the infinite domain \(t \in [0, \infty)\) rather than a closed interval like \(t \in [0,T]\).
Thus some experimentation is needed.
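Putting these pieces together, one possible sketch in Python: here the one-variable minimization is done with a Golden Section Search over a guessed interval \([0, T]\), and the names and parameter choices are illustrative; handling the truncation of \([0, \infty)\) to \([0, T]\) properly is part of the experimentation just mentioned.

```python
import numpy as np

def golden_section_search(f, a, b, tol=1e-10):
    """Minimize f on [a, b], assuming f is unimodal there."""
    rho = (np.sqrt(5) - 1) / 2
    c, d = b - rho * (b - a), a + rho * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:
            b, d, fd = d, c, fc
            c = b - rho * (b - a)
            fc = f(c)
        else:
            a, c, fc = c, d, fd
            d = a + rho * (b - a)
            fd = f(d)
    return (a + b) / 2

def steepest_descent(f, grad_f, x0, T=1.0, tol=1e-8, maxiter=1000):
    """Steepest descent with a golden-section line search for t in [0, T].

    T is a guessed upper bound on the step parameter t; in general it may
    need adjusting, since the true search domain is [0, infinity)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxiter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            break  # the gradient is (almost) zero: a critical point
        t_k = golden_section_search(lambda t: f(x - t * g), 0.0, T)
        x = x - t_k * g
    return x

# Illustrative test on a simple quadratic with its minimum at (1, 2)
f = lambda x: (x[0] - 1)**2 + (x[1] - 2)**2
grad_f = lambda x: np.array([2 * (x[0] - 1), 2 * (x[1] - 2)])
xmin = steepest_descent(f, grad_f, [0.0, 0.0])
```

For this quadratic the exact line search lands on the minimum in a single step; less symmetric examples need many iterations.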
Exercise 4.16
Implement the method of Steepest Descent, and test it initially on an example with an obvious, unique minimum.
Exercise 4.17
Test the above on some more challenging examples, like the Rosenbrock function
\(f(x, y) = (1 - x)^2 + K(y - x^2)^2\),
with minimum at \(x=y=1\).
Start with \(K = 1\); then try a larger value like \(K = 100\), which makes it harder; a graph might reveal why.
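Assuming the intended example is the classic Rosenbrock-type function \(f(x, y) = (1 - x)^2 + K(y - x^2)^2\) (an assumption based on the stated minimum at \(x = y = 1\) and the parameter \(K\)), here is a quick sanity check that \((1, 1)\) is a critical point with \(f(1, 1) = 0\):

```python
import numpy as np

def f(x, y, K=100.0):
    """Assumed Rosenbrock-type test function, with a narrow curved valley."""
    return (1 - x)**2 + K * (y - x**2)**2

def grad_f(x, y, K=100.0):
    """Gradient of f, computed by hand from the formula above."""
    return np.array([
        -2 * (1 - x) - 4 * K * x * (y - x**2),  # df/dx
        2 * K * (y - x**2),                     # df/dy
    ])

# At (1, 1) both the function value and the gradient vanish, for any K
value = f(1.0, 1.0)
gradient = grad_f(1.0, 1.0)
```

A large \(K\) makes the level curves long, narrow, and curved, which is what slows steepest descent down; a contour plot makes this vivid.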
Exercise 4.18
It can also be interesting to explore a function that has multiple local minima.